| Field | Type | Min length | Max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2306.17540
Fractional-linear integrals of geodesic flows on surfaces and Nakai's geodesic 4-webs
We prove that if the geodesic flow on a surface has an integral, fractional-linear in momenta, then the dimension of the space of such integrals is either 3 or 5, the latter case corresponding to constant gaussian curvature. We also give a geometric criterion for the existence of fractional-linear integrals: such an integral exists if and only if the surface carries a geodesic 4-web with constant cross-ratio of the four directions tangent to the web leaves.
Sergey I. Agafonov, Thaís G. P. Alves
2023-06-30T10:52:47Z
http://arxiv.org/abs/2306.17540v1
# Fractional-linear integrals of geodesic flows on surfaces and Nakai's geodesic 4-webs

###### Abstract

We prove that if the geodesic flow on a surface has an integral, fractional-linear in momenta, then the dimension of the space of such integrals is either 3 or 5, the latter case corresponding to constant gaussian curvature. We also give a geometric criterion for the existence of fractional-linear integrals: such an integral exists if and only if the surface carries a geodesic 4-web with constant cross-ratio of the four directions tangent to the web leaves.

## 1 Introduction

Integrable geodesic flows on surfaces have been a classical subject of differential geometry and analytic mechanics since the 19th century works of Jacobi [1], Darboux [2], Dini [3], Koenigs [4] and many others. Special attention was paid to integrals polynomial in momenta. This problem is well motivated even in the local setting due to the following observations: 1) if there is a locally defined analytic integral, then each part of its Taylor expansion homogeneous in momenta is an integral polynomial in momenta; 2) the surface of zero velocities in the tangent bundle of the surface is the set of singular points of the geodesic flow, and the existence of regular integrals at singular points is a nontrivial restriction. Since the geodesic flow is Hamiltonian, it is enough to find one first integral independent of the Hamiltonian function to integrate the flow, due to the Liouville Theorem. Polynomial integrals of fixed degree constitute a vector space of finite dimension. Establishing this dimension turned out to be a non-trivial task for degrees greater than two; in fact, the list of possible dimensions is not known even for cubic integrals (see [4]). The case of linear integrals is equivalent to listing the possible dimensions of the isometry group due to the Noether Theorem: the dimension is either 0, 1, or 3. The case of quadratic integrals was settled by Koenigs: the possible dimensions are 1, 2, 3, 4, and 6, the largest dimension in both cases corresponding to constant gaussian curvature. Modern interest in this topic is due to 1) a newly discovered relation to infinite-dimensional integrable systems of PDEs and 2) a better understanding of the geometry behind the integrability of geodesic flows on surfaces. For a fixed metric, the system of partial differential equations (PDE) for the coefficients of the integral is over-determined, but if one considers the metric also as unknown, then the system turns out to be of hydrodynamic type with remarkable properties: it is diagonalizable, it possesses infinitely many conservation laws, and it is linearizable by Tsarev's _generalized hodograph method_ (see [10, 11, 12, 13, 14, 15]). On the other hand, the existence of quadratic and cubic integrals can be described in geometric terms via geodesic webs. For example, consider a cubic integral, which can also be rewritten as a polynomial in the velocity \(v=(\xi,\eta)\) due to the canonical isomorphism between the tangent and cotangent bundles of the surface with local coordinates \(x,y\): \[I_{3}=a_{0}(x,y)\xi^{3}+a_{1}(x,y)\xi^{2}\eta+a_{2}(x,y)\xi\eta^{2}+a_{3}(x,y)\eta^{3}.\] Equating it to zero, one gets an implicit cubic ordinary differential equation (ODE). If all its 3 roots \([\xi:\eta]\) are real, the integral curves of this ODE form a hexagonal geodesic 3-web. Conversely, the existence of a hexagonal geodesic 3-web implies the existence of a cubic integral [16].
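To make the construction concrete, here is a minimal sympy sketch (with hypothetical coefficients \(a_{i}\), chosen purely for illustration) that extracts the three web directions from \(I_{3}=0\) at a sample point:

```python
# Minimal sketch: the implicit cubic ODE I_3 = 0 and its three direction
# roots at a sample point. The coefficients a_i are hypothetical.
import sympy as sp

x, y, s = sp.symbols('x y s')    # s plays the role of the slope eta/xi

# Illustrative coefficient choices only, not from the paper
a0, a1, a2, a3 = 3*x*y, -7*x, y - 2, 1

# I_3 = 0 with (xi, eta) = (1, s)
cubic = a0 + a1*s + a2*s**2 + a3*s**3

roots = sp.Poly(cubic.subs({x: 1, y: 2}), s).nroots()
real_slopes = sorted(r for r in roots if abs(sp.im(r)) < 1e-9)
print(real_slopes)   # [-3.0, 1.0, 2.0]: three real roots, three web directions
```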
There is also a geometric characterization of quadratic integrals [16]: the existence of quadratic integrals is equivalent to the existence of a geodesic net \(\mathcal{G}\) such that any 3-subweb of the 4-web, formed by \(\mathcal{G}\) and its bisector net \(\mathcal{N}\), is hexagonal. Geodesic webs are a classical chapter of differential geometry; such webs were a subject of intensive study, though without any relation to dynamics, see [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. In his last book on web theory [10], Blaschke posed the problem of finding an intrinsic criterion for the existence of hexagonal geodesic 3-webs. Thus, the result of [16] provides such a criterion in dynamical terms and generalizes the Graf and Sauer Theorem [11] to metrics of non-constant gaussian curvature. Another natural class of integrals of geodesic flows are integrals rational in momenta, whose numerator and denominator are homogeneous polynomials in momenta of the same degree. The study of such integrals was initiated by Kozlov. Each level set of a fractional-linear integral defines a foliation:
\[\frac{a(x,y)\,\xi+b(x,y)\,\eta}{c(x,y)\,\xi+e(x,y)\,\eta}=\lambda,\] where on the left-hand side is a fractional-linear integral written on the tangent bundle and \(\lambda\in\mathbb{R}\) is a parameter. Taking four such foliations one obtains a geodesic 4-web with constant cross-ratio of web tangent directions. Nakai studied 4-webs with constant cross-ratio of web tangent directions in [N-98] and showed that the Blaschke curvatures of all its 3-subwebs are equal. He excluded the case of a parallelizable 4-web with hexagonal 3-subwebs as trivial, though we will tolerate this degeneracy in our definition, as it occurs for metrics of constant gaussian curvature. Dufour and Lehmann proved [DL-22] that Nakai's webs can have rank 0 or 1 and gave examples of webs with rank 1. Our second main result claims that there is a fractional-linear integral if and only if the surface carries a Nakai geodesic 4-web. All the results of this paper are local; all functions, surfaces, fields, and other objects are smooth.

## 2 Fractional-linear integrals of geodesic flow

The geodesic flow on a surface \(S\) with local coordinates \((x,y)\) is Hamiltonian with respect to the canonical symplectic form \(\Omega=dp\wedge dx+dq\wedge dy\) on the cotangent bundle \(T^{*}S\), where \(f=(p,q)\) is the momentum. The Hamiltonian \(\frac{1}{2}g(v,v)\), where \(v=(\xi,\eta)\in T_{s}S\) is the velocity and \(g\) is the metric, can be rewritten in momenta due to the canonical isomorphism between \(T_{s}S\) and \(T_{s}^{*}S\) via \(g(v,*)=\langle f,*\rangle\). For the metric \(g=\Lambda(x,y)(dx^{2}+dy^{2})\) in conformal coordinates, we get the Hamiltonian \(H=\frac{p^{2}+q^{2}}{2\Lambda}\), where \(p,q\) are conjugate to the coordinates \((x,y)\).

**Lemma 1**: _[_K-14_]_ _The geodesic flow of the metric \(g=\Lambda(x,y)(dx^{2}+dy^{2})\) admits a fractional-linear integral_ \[I=\frac{A(x,y)p+B(x,y)q}{C(x,y)p+E(x,y)q} \tag{1}\] _if and only if_ \[\begin{array}{l}2\Lambda(CA_{x}-AC_{x})-\Lambda_{y}(AE-BC)=0,\\ 2\Lambda(EA_{x}-AE_{x}+CA_{y}-AC_{y}+CB_{x}-BC_{x})+\Lambda_{x}(AE-BC)=0,\\ 2\Lambda(EA_{y}-AE_{y}+CB_{y}-BC_{y}+EB_{x}-BE_{x})-\Lambda_{y}(AE-BC)=0,\\ 2\Lambda(EB_{y}-BE_{y})+\Lambda_{x}(AE-BC)=0.\end{array} \tag{2}\]

_Proof:_ The condition \(\{I,H\}=0\) reads as an equation, cubic and homogeneous in \(p,q\): \[(2\Lambda CA_{x}-2\Lambda AC_{x}-\Lambda_{y}AE+\Lambda_{y}BC)p^{3}\] \[+ (2\Lambda EA_{x}-2\Lambda AE_{x}+2\Lambda CA_{y}-2\Lambda AC_{y}+2\Lambda CB_{x}-2\Lambda BC_{x}+\Lambda_{x}AE-\Lambda_{x}BC)p^{2}q\] \[+ (2\Lambda EA_{y}-2\Lambda AE_{y}+2\Lambda CB_{y}-2\Lambda BC_{y}+2\Lambda EB_{x}-2\Lambda BE_{x}-\Lambda_{y}AE-\Lambda_{y}BC)pq^{2}\] \[+ (2\Lambda EB_{y}-2\Lambda BE_{y}+\Lambda_{x}AE-\Lambda_{x}BC)q^{3}=0.\] Since \(A,B,C\) and \(E\) are independent of \(p,q\), the equation splits into the four equations given in (2). \(\square\)
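The splitting in the proof can be reproduced mechanically. The following sympy sketch (our own sanity check, not part of the paper) clears the denominators of \(\{I,H\}\) and prints the four cubic coefficients, which recover system (2) up to a common factor:

```python
# Sanity check of Lemma 1: after clearing denominators, {I, H} is cubic and
# homogeneous in (p, q) and splits into the four equations (2).
import sympy as sp

x, y, p, q = sp.symbols('x y p q')
A, B, C, E, L = [sp.Function(n)(x, y) for n in 'ABCEL']  # L stands for Lambda

H = (p**2 + q**2) / (2 * L)       # Hamiltonian in conformal coordinates
I = (A*p + B*q) / (C*p + E*q)     # fractional-linear integral (1)

# Canonical Poisson bracket with conjugate pairs (x, p) and (y, q)
PB = (sp.diff(I, x)*sp.diff(H, p) - sp.diff(I, p)*sp.diff(H, x)
      + sp.diff(I, y)*sp.diff(H, q) - sp.diff(I, q)*sp.diff(H, y))

numerator = sp.cancel(PB * 2 * L**2 * (C*p + E*q)**2)
poly = sp.Poly(sp.expand(numerator), p, q)

# Coefficients of p^3, p^2*q, p*q^2, q^3: system (2) up to a common factor
for monom, coeff in zip(poly.monoms(), poly.coeffs()):
    print(monom, sp.simplify(coeff))
```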
## 3 Differential invariants

Let us normalize a fractional-linear integral (1) by \(A(x,y)E(x,y)-B(x,y)C(x,y)=1\). The integral thus normalized defines the map \[\begin{array}{c}m:S\longrightarrow SL_{2}(\mathbb{R}),\\ \\ (x,y)\mapsto m(x,y)=\left(\begin{array}{cc}A(x,y)&B(x,y)\\ C(x,y)&E(x,y)\end{array}\right).\end{array} \tag{3}\] The matrix Lie group \(SL_{2}(\mathbb{R})=\left\{\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right):\ \alpha,\beta,\gamma,\delta\in\mathbb{R},\alpha\delta-\beta\gamma=1\right\}\) naturally acts on the set of fractional-linear integrals: \[h\cdot I=\frac{\alpha I+\beta}{\gamma I+\delta}=\frac{(\alpha A+\beta C)p+(\alpha B+\beta E)q}{(\gamma A+\delta C)p+(\gamma B+\delta E)q}, \tag{4}\] where \(h=\left(\begin{array}{cc}\alpha&\beta\\ \gamma&\delta\end{array}\right)\in SL_{2}(\mathbb{R})\). Note that \[\tilde{I}=\frac{(\alpha A+\beta C)p+(\alpha B+\beta E)q}{(\gamma A+\delta C)p+(\gamma B+\delta E)q}\] is also normalized: \[(\alpha A+\beta C)(\gamma B+\delta E)-(\alpha B+\beta E)(\gamma A+\delta C)=1.\]

**Definition 1**: _We say that two fractional-linear integrals \(I\) and \(\tilde{I}\) are equivalent if there exists an element \(h\in SL_{2}(\mathbb{R})\) such that \(h\cdot I=\tilde{I}\)._

Thus the system (2) is invariant under the above-defined action of \(SL_{2}(\mathbb{R})\) and can be rewritten in differential invariants. Since the action is just multiplication of \(m\) by \(h\) on the left, differential invariants can be obtained via the _Darboux derivative_ of the map \(m\) defined by (3). Recall that the Darboux derivative of the map \(m:S\to SL_{2}(\mathbb{R})\) is the pull-back of the Maurer-Cartan form of \(SL_{2}(\mathbb{R})\): \(\omega=m^{-1}\cdot dm\) (see [S-96]). In fact, it is invariant, as \[(hm)^{-1}\cdot d(hm)=m^{-1}\cdot h^{-1}\cdot h\cdot d(m)=m^{-1}\cdot d(m).\] We have \[m^{-1}\cdot dm=\left(\begin{array}{cc}E&-B\\ -C&A\end{array}\right)\left(\begin{array}{cc}dA&dB\\ dC&dE\end{array}\right)=\left(\begin{array}{cc}EdA-BdC&EdB-BdE\\ -CdA+AdC&-CdB+AdE\end{array}\right), \tag{5}\] where \[dA=A_{x}dx+A_{y}dy,\quad dB=B_{x}dx+B_{y}dy,\quad dC=C_{x}dx+C_{y}dy,\quad dE=E_{x}dx+E_{y}dy.\] Substituting into (5) we obtain \[m^{-1}\cdot dm=\left(\begin{array}{cc}EA_{x}-BC_{x}&EB_{x}-BE_{x}\\ AC_{x}-CA_{x}&AE_{x}-CB_{x}\end{array}\right)dx+\left(\begin{array}{cc}EA_{y}-BC_{y}&EB_{y}-BE_{y}\\ AC_{y}-CA_{y}&AE_{y}-CB_{y}\end{array}\right)dy.\] The group does not act on the independent coordinates \(x,y\); therefore all the entries of the matrix coefficients of \(dx\) and \(dy\) are scalar differential invariants.
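This invariance is easy to confirm symbolically. A minimal sympy sketch (our own check, restricting the map to a curve \(t\mapsto m(t)\) so that \(d\) becomes \(d/dt\)):

```python
# Check that the Darboux derivative is invariant under left multiplication:
# (h m)^{-1} d(h m) = m^{-1} dm, with h a constant invertible matrix.
import sympy as sp

t = sp.symbols('t')
a, b, c, e = [sp.Function(n)(t) for n in 'abce']
al, be, ga, de = sp.symbols('alpha beta gamma delta')

m = sp.Matrix([[a, b], [c, e]])
h = sp.Matrix([[al, be], [ga, de]])

omega = m.inv() * m.diff(t)
omega_h = (h * m).inv() * (h * m).diff(t)

print(sp.simplify(omega - omega_h))  # Matrix([[0, 0], [0, 0]])
```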
Since the trace of \(\omega\) vanishes, we have 6 invariants \[P:=EA_{x}-BC_{x},\ \ Q:=EB_{x}-BE_{x},\ \ R:=AC_{x}-CA_{x},\] \[X:=EA_{y}-BC_{y},\ \ Y:=EB_{y}-BE_{y},\ \ Z:=AC_{y}-CA_{y}.\] Using the normalization \(\det(m)=1\) we exclude \(E=\frac{BC+1}{A}\) and its derivatives \[E_{x}=\frac{BC_{x}+CB_{x}}{A}-\frac{(BC+1)A_{x}}{A^{2}},\ \ \ \ E_{y}=\frac{BC_{y}+CB_{y}}{A}-\frac{(BC+1)A_{y}}{A^{2}}, \tag{6}\] and rewrite the invariants \[P=\frac{BC+1}{A}A_{x}-BC_{x},\ \ Q=\frac{B^{2}C+B}{A^{2}}A_{x}+\frac{1}{A}B_{x}-\frac{B^{2}}{A}C_{x},\ \ R=AC_{x}-CA_{x},\] \[X=\frac{BC+1}{A}A_{y}-BC_{y},\ \ Y=\frac{B^{2}C+B}{A^{2}}A_{y}+\frac{1}{A}B_{y}-\frac{B^{2}}{A}C_{y},\ \ Z=AC_{y}-CA_{y}.\] We can express the derivatives \(A_{x},B_{x},C_{x},A_{y},B_{y},C_{y}\) in terms of the invariants as follows: \[\begin{array}{lll}A_{x}=PA+RB,&B_{x}=QA-PB,&C_{x}=\frac{PAC+RBC+R}{A},\\ \\ A_{y}=XA+ZB,&B_{y}=YA-XB,&C_{y}=\frac{XAC+ZBC+Z}{A}.\end{array} \tag{7}\] The calculation of compatibility conditions is greatly simplified for the metric form \[g=L(x,y)dxdy, \tag{8}\] related by the formal complex substitution \(x=u+iv\), \(y=u-iv\) to the conformal form: \[g=L(x,y)dxdy=\Lambda(u+iv,u-iv)(du^{2}+dv^{2}).\] Since all the expressions are rational, this substitution does not affect our compatibility results, which are obtained in a purely algebraic way (compare with the calculations of Lie in [L-82] and Koenigs in [K-96]).

**Lemma 2**: _The geodesic flow on a surface admits a fractional-linear integral (1) if and only if the invariants \(P,Q,R,X,Y,Z\) are related to the metric (8) as follows:_ \[P=-\frac{Y}{2}-\frac{L_{x}}{2L},\ \ \ X=\frac{R}{2}+\frac{L_{y}}{2L},\ \ Q=Z=0. \tag{9}\]

_Proof:_ For the metric (8) the Hamiltonian assumes the form \(H=\frac{2pq}{L}\), and the condition \(\{I,H\}=0\) reads \[\begin{array}{l}L(AC_{y}-CA_{y})=0,\\ L(CA_{x}-AE_{y}-AC_{x}+CB_{y}-BC_{y}+EA_{y})-L_{y}(AE-BC)=0,\\ L(EA_{x}-BE_{y}-BC_{x}+EB_{y}-AE_{x}+CB_{x})+L_{x}(AE-BC)=0,\\ L(BE_{x}-EB_{x})=0.\end{array}\] Using the normalization \(AE-BC=1\) and substituting \(E,E_{x},E_{y}\) from (6) and \(A_{x},B_{x},C_{x},A_{y},B_{y},C_{y}\) from (7), we get \[LZ=LQ=0,\ \ 2LX-LR-L_{y}=0,\ \ 2LP+LY+L_{x}=0,\] hence (9). \(\square\)

## 4 Dimension of the space of integrals

The gaussian curvature of the metric (8) is \[K=-\frac{LL_{xy}-L_{x}L_{y}}{L^{3}}, \tag{10}\] and its partial derivatives are \[K_{x}=-\frac{L_{xxy}}{L^{2}}+\frac{L_{xx}L_{y}+3L_{xy}L_{x}}{L^{3}}-\frac{3L_{x}^{2}L_{y}}{L^{4}},\] \[K_{y}=-\frac{L_{xyy}}{L^{2}}+\frac{L_{yy}L_{x}+3L_{xy}L_{y}}{L^{3}}-\frac{3L_{y}^{2}L_{x}}{L^{4}}.\] The Darboux derivative \(\omega\) of \(m\) satisfies the structure equation \(d\omega+\omega\wedge\omega=0,\) which is the local condition for the existence of \(m\) with the Darboux derivative \(\omega\) (see [S-96]).
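Before expanding the structure equation, the curvature formula (10) can be sanity-checked symbolically; for the illustrative choice \(L=4/(1+xy)^{2}\) (our choice, not taken from the paper) it returns the constant value \(K=1/2\):

```python
# Sanity check of formula (10): K = -(L*L_xy - L_x*L_y) / L**3 for the
# illustrative metric g = L dx dy with L = 4/(1 + x*y)**2.
import sympy as sp

x, y = sp.symbols('x y')
L = 4 / (1 + x*y)**2

K = sp.simplify(-(L * sp.diff(L, x, y) - sp.diff(L, x) * sp.diff(L, y)) / L**3)

print(K)  # 1/2: constant gaussian curvature for this choice of L
assert sp.diff(K, x) == 0 and sp.diff(K, y) == 0
```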
The matrix structure equation rewrites as three scalar equations \[2RY+2LK-R_{x}-Y_{y}=0,\] \[LY^{2}+L_{x}Y-LY_{x}=0,\] \[LR^{2}+L_{y}R-LR_{y}=0.\] Resolving for \(Y_{x}\) and \(R_{y}\) we obtain \(Y_{x}=Y^{2}+\frac{L_{x}}{L}Y\) and \(R_{y}=R^{2}+\frac{L_{y}}{L}R.\) Let us introduce \(F\) by \[2F:=R_{x}-Y_{y}, \tag{11}\] so that \(R_{x}=RY+LK+F,\ \ Y_{y}=RY+LK-F.\) Thinking of \(F\) as an unknown function of \(x,y,\) we get the following exterior differential system in \(\mathbb{R}^{8}\) with coordinates \(A,B,C,R,Y,F,x,y\): \[dA = \left(RB-\frac{L_{x}}{2L}A-\frac{YA}{2}\right)dx+\left(\frac{L_{y}}{2L}A+\frac{RA}{2}\right)dy, \tag{12}\] \[dB = \left(\frac{L_{x}}{2L}B+\frac{YB}{2}\right)dx+\left(YA-\frac{L_{y}}{2L}B-\frac{RB}{2}\right)dy, \tag{13}\] \[dC = \left(\frac{BRC}{A}+\frac{R}{A}-\frac{L_{x}}{2L}C-\frac{YC}{2}\right)dx+\left(\frac{L_{y}}{2L}C+\frac{RC}{2}\right)dy, \tag{14}\] \[dR = (YR+LK+F)dx+\left(R^{2}+\frac{L_{y}}{L}R\right)dy, \tag{15}\] \[dY = \left(Y^{2}+\frac{L_{x}}{L}Y\right)dx+(RY+LK-F)dy. \tag{16}\]

**Lemma 3**: _If the system (12-16) is compatible (i.e. has solutions) then_ \[F_{x}=3FY+\frac{L_{x}}{L}F+LK_{x},\] \[F_{y}=3FR+\frac{L_{y}}{L}F-LK_{y},\] \[F^{2}=\frac{1}{3}LK_{xy}-\frac{1}{2}LK_{x}R-\frac{1}{2}LK_{y}Y.\]

_Proof:_ The condition \(d(dY)=0\) gives \[F_{x}-3FY+\left(\frac{LL_{xy}-L_{x}L_{y}}{L^{2}}\right)Y+LKY-\frac{L_{x}}{L}F-LK_{x}=0,\] and, after using (10), \[F_{x}=3FY+\frac{L_{x}}{L}F+LK_{x}. \tag{17}\] Similarly, from \(d(dR)=0\) one gets \[F_{y}=3FR+\frac{L_{y}}{L}F-LK_{y}. \tag{18}\] Thus the forms for \(dR\) and \(dY\) are closed if and only if \[dF=\left(3FY+\frac{L_{x}}{L}F+LK_{x}\right)dx+\left(3FR+\frac{L_{y}}{L}F-LK_{y}\right)dy. \tag{19}\] Finally, \(d(dF)=0\) results in \[F^{2}=\frac{1}{3}LK_{xy}-\frac{1}{2}LK_{x}R-\frac{1}{2}LK_{y}Y. \tag{20}\] \(\Box\)

Now, if we define \(F\) in (15,16) by (20), we get a system of finite type: all the derivatives of \(R,Y\) are given.

**Corollary 4**: _The system (15,16) with \(F\) given by (20) is in involution if and only if (17) and (18) hold by virtue of (15,16,20)._

If the system (15,16) with \(F\) given by (20) is in involution, then the system (12-16) with \(F\) given by (20) is also in involution and, by the Frobenius Theorem, the initial values \(A_{0},B_{0},C_{0},R_{0},Y_{0}\) at a point \((x_{0},y_{0})\) uniquely define a (germ of a) solution to the system (12-16). Thus one can think of \(A_{0},B_{0},C_{0},R_{0},Y_{0}\) as local coordinates on the space of integrals, even though this space may not be well defined globally. Note that the subsystem (15,16) does not involve \(A,B\) and \(C\). Therefore any solution to this subsystem fixes an orbit of the \(PSL_{2}(\mathbb{R})\)-action, since at a point \((x_{0},y_{0})\) any two triples of initial values for \(A,B,C\) are Möbius equivalent. Given a solution \(R\) and \(Y\) to (15,16), the system (12-14) is _automorphic_: any two solutions \(A,B,C\) are Möbius equivalent, and the system (15,16) is _resolving_ for (12-14), i.e. it describes the Möbius orbits on the space of solutions to (12-14) (see [O-82, V-04] for more detail).

**Lemma 5**: _Suppose that the geodesic flow on a surface with metric (8) admits a fractional-linear integral (1). Then the gaussian curvature is constant if and only if_ \[F=\frac{R_{x}-Y_{y}}{2}=0.\]

_Proof:_ If the gaussian curvature \(K\) is constant, then \(K_{xy}=K_{x}=K_{y}=0\) and therefore \(F=0\) by (20).
On the other hand, if \(F=0\), then also \(F_{x}=F_{y}=0\), and equations (17) and (18) imply \(LK_{x}=0\), \(LK_{y}=0\); hence \(K_{x}=K_{y}=0\) and \(K\) is constant. \(\Box\)

**Lemma 6**: _If \(K=const\) then the system (15,16) is in involution._

_Proof:_ For \(K=const\) we have \(F=0\) and the claim follows by Corollary 4. \(\Box\)

For further analysis we need the following claim.

**Proposition 7**: _Suppose that the geodesic flow on a surface admits a fractional-linear integral. If one partial derivative of the gaussian curvature vanishes, then the gaussian curvature is constant._

_Proof:_ We can suppose that \(K_{x}=0\) and \(K_{y}\neq 0\). Then from equation (20) we have \(Y=-\frac{2F^{2}}{LK_{y}}\). Differentiating by \(x\) we get \[Y_{x}=-\frac{12F^{2}}{LK_{y}}Y-\frac{2L_{x}F^{2}}{L^{2}K_{y}}.\] Comparing with (16) we obtain \[L^{2}K_{y}Y^{2}+(12LF^{2}+LL_{x}K_{y})Y+2F^{2}L_{x}=0.\] Substituting \(Y=-\frac{2F^{2}}{LK_{y}}\) results in \(\frac{20F^{4}}{K_{y}}=0.\) Hence \(F=0\), and by Lemma 5 the curvature \(K\) is constant. \(\Box\)

**Theorem 8**: _Suppose that the geodesic flow on a surface admits a fractional-linear integral (1). Then the dimension of the space of such integrals is 3 if and only if the gaussian curvature is not constant._

_Proof:_ Suppose that the gaussian curvature \(K\) is not constant. Then \(F\neq 0\), and by Proposition 7 both \(K_{x}\neq 0\) and \(K_{y}\neq 0\). Let us solve equation (20) for \(R\): \[R=\frac{2K_{xy}}{3K_{x}}-\frac{K_{y}}{K_{x}}Y-\frac{2F^{2}}{LK_{x}} \tag{21}\] and consider the system of 2 equations (16,19) with \(R\) given by (21). Then this system for \(F,Y\) is of finite type and its compatibility condition reads as (15). Let us show that \(F\) and \(Y\) are fixed. Differentiating (21) we determine \[R_{x}=\left(\frac{K_{xx}K_{y}}{K_{x}^{2}}-\frac{K_{xy}}{K_{x}}\right)Y-\frac{K_{y}}{K_{x}}Y_{x}+\left(\frac{2L_{x}}{L^{2}K_{x}}+\frac{2K_{xx}}{LK_{x}^{2}}\right)F^{2}-\frac{4F_{x}F}{LK_{x}}+\frac{2K_{xxy}}{3K_{x}}-\frac{2K_{xx}K_{xy}}{3K_{x}^{2}},\] \[R_{y}=\left(\frac{K_{xy}K_{y}}{K_{x}^{2}}-\frac{K_{yy}}{K_{x}}\right)Y-\frac{K_{y}}{K_{x}}Y_{y}+\left(\frac{2L_{y}}{L^{2}K_{x}}+\frac{2K_{xy}}{LK_{x}^{2}}\right)F^{2}-\frac{4F_{y}F}{LK_{x}}+\frac{2K_{xyy}}{3K_{x}}-\frac{2K_{xy}^{2}}{3K_{x}^{2}}.\] Comparing with (15) and substituting the above expression for \(R\), we get \[\left(\frac{2L_{x}}{L^{2}K_{x}}-\frac{2K_{xx}}{LK_{x}^{2}}\right)F^{2}+\left(\frac{10F^{2}}{LK_{x}}+a_{1}\right)Y+5F-a_{2}=0, \tag{22}\] where \[a_{1}=-\frac{K_{xx}K_{y}}{K_{x}^{2}}+\frac{5K_{xy}}{3K_{x}}+\frac{L_{x}K_{y}}{LK_{x}},\] \[a_{2}=\frac{2K_{xxy}}{3K_{x}}-\frac{2K_{xx}K_{xy}}{3K_{x}^{2}}+\frac{L_{xy}}{L}-\frac{L_{x}L_{y}}{L^{2}},\] and \[\frac{10K_{xy}}{3LK_{x}^{2}}F^{2}-\frac{20}{L^{2}K_{x}^{2}}F^{4}-\left(\frac{10K_{y}F^{2}}{LK_{x}^{2}}+b_{1}\right)Y-\frac{5K_{y}}{K_{x}}F+b_{2}=0, \tag{23}\] where \[b_{1}=\frac{5K_{xy}K_{y}}{3K_{x}^{2}}-\frac{K_{yy}}{K_{x}}+\frac{L_{y}K_{y}}{LK_{x}},\] \[b_{2}=-\frac{2K_{xyy}}{3K_{x}}+\frac{10K_{xy}^{2}}{9K_{x}^{2}}+\frac{2L_{y}K_{xy}}{3LK_{x}}-\frac{L_{xy}K_{y}}{LK_{x}}+\frac{L_{x}L_{y}K_{y}}{L^{2}K_{x}}.\] If \(\frac{10F^{2}}{LK_{x}}+a_{1}\neq 0\), then we can resolve equation (22) for \(Y\). As equation (23) has a non-vanishing coefficient of \(F^{4}\), substituting the found expression for \(Y\) into (23) we get a nontrivial equation for \(F\). Thus \(F\) is fixed, and consequently \(Y\) is fixed by (22).
If \(\frac{10F^{2}}{LK_{x}}+a_{1}=0\), then \(F^{2}=-\frac{a_{1}LK_{x}}{10}\) and \(F\) is fixed, as well as \(F_{x}\). Finally, from equation (17) we see that \(Y\) is fixed. Thus, locally there is only one orbit of the Möbius group and the dimension is 3. The converse claim follows from Lemma 6. \(\Box\)

**Theorem 9**: _Suppose that the geodesic flow on a surface admits a fractional-linear integral (1). Then the dimension of the space of fractional-linear integrals is 5 if and only if the gaussian curvature is constant._

_Proof:_ Follows from Lemma 6 and Theorem 8. \(\Box\)

## 5 Geodesic Nakai's 4-webs

In this section we consider geometric questions and therefore return to conformal coordinates for the metric \(g=\Lambda(x,y)\left(dx^{2}+dy^{2}\right)\). Suppose there is a fractional-linear integral (1) of the geodesic flow. It can be written as a function of the local coordinates and the tangent vector \(v=(\xi,\eta)\). By the canonical isomorphism \(\psi:TM\to T^{*}M\), the momentum \((p,q)\) and the tangent vector \((\xi,\eta)\) are related as follows: \[(\xi,\eta)=\left(\frac{p}{\Lambda},\frac{q}{\Lambda}\right).\] So, in conformal coordinates, the integral is \[I=\frac{A\xi+B\eta}{C\xi+E\eta}.\] Given such an integral \(I\), we construct a one-parameter family of foliations as follows: for any \(\lambda\in\mathbb{R}\), the equation \[I=\frac{A\xi+B\eta}{C\xi+E\eta}=\lambda\] gives an ODE \[(A-C\lambda)+(B-E\lambda)\frac{dy}{dx}=0\] on our surface after setting \(\xi=dx\) and \(\eta=dy\). Integral curves of this ODE form a foliation on the surface.

**Proposition 10**: _The constructed foliation is geodesic._

_Proof:_ Follows from the uniqueness of an integral curve passing through \((x_{0},y_{0})\): the function \(I\) is constant and equal to \(\lambda\) along the geodesic whose direction \((\xi,\eta)\) at \((x_{0},y_{0})\) is fixed by \(I=\lambda\). \(\Box\)

**Definition 2**: _Nakai's 4-web is a planar 4-web such that the cross-ratio of the four directions tangent to the web leaves is constant._

**Theorem 11**: _There exists a fractional-linear integral (1) of the geodesic flow on a surface if and only if the surface carries a geodesic Nakai's 4-web._

_Proof:_ Suppose there is an integral. The equation defining the geodesic foliations of Proposition 10 can be written as \[\frac{A+BP}{C+EP}=\lambda, \tag{24}\] where \(P=\frac{dy}{dx}\) is the inclination of the foliation leaf. The integral gives the map \(m:S\to SL_{2}(\mathbb{R})\), \[m(x,y)=\left(\begin{array}{cc}A&B\\ C&E\end{array}\right).\] Then \(m\cdot P=\lambda\) and \(P=m^{-1}\cdot\lambda\). Consider a 4-web whose foliations are fixed by four values \(\lambda_{i}\), \(i=1,2,3,4\). Then the cross-ratio of the inclinations \(P_{i}\) is equal to the cross-ratio of the \(\lambda_{i}\) and therefore is constant. Now suppose that there are 4 geodesic foliations tangent to the 4 direction fields \(\partial_{x}+J\partial_{y}\), \(\partial_{x}+M\partial_{y}\), \(\partial_{x}+N\partial_{y}\) and \(\partial_{x}+T\partial_{y}\), such that \[\frac{J-M}{M-N}\cdot\frac{N-T}{T-J}=r \tag{25}\] with constant \(r\). Resolving for \(T\) we get \[T=\frac{rJM-(r-1)JN-MN}{J+(r-1)M-rN}\] and \[T_{x}=\frac{r(r-1)(M-N)^{2}J_{x}+r(J-N)^{2}M_{x}-(r-1)(J-M)^{2}N_{x}}{(J+(r-1)M-rN)^{2}},\] \[T_{y}=\frac{r(r-1)(M-N)^{2}J_{y}+r(J-N)^{2}M_{y}-(r-1)(J-M)^{2}N_{y}}{(J+(r-1)M-rN)^{2}}.\] Total differentiation by \(x\) along an integral curve \(y=y(x)\) of the field \(\partial_{x}+P\partial_{y}\) gives \[\frac{d^{2}y}{dx^{2}}=P_{x}+PP_{y}.
\tag{26}\] If these curves are geodesics then \(\partial_{x}+P\partial_{y}\) is the Jacobi field. Using the equation \[\frac{d^{2}y}{dx^{2}}=-\Gamma_{11}^{2}+(\Gamma_{11}^{1}-2\Gamma_{12}^{2})\frac{dy}{dx}-(\Gamma_{22}^{2}-2\Gamma_{12}^{1})\left(\frac{dy}{dx}\right)^{2}+\Gamma_{22}^{1}\left(\frac{dy}{dx}\right)^{3}\] for unparametrized geodesics, we get \[P_{x}+PP_{y}=-\Gamma_{11}^{2}+(\Gamma_{11}^{1}-2\Gamma_{12}^{2})P-(\Gamma_{22}^{2}-2\Gamma_{12}^{1})P^{2}+\Gamma_{22}^{1}P^{3},\] where \(\Gamma_{jk}^{i}\) are the Christoffel symbols of the Levi-Civita connection. Therefore for the fields \(\partial_{x}+J\partial_{y}\), \(\partial_{x}+M\partial_{y}\), \(\partial_{x}+N\partial_{y}\) and \(\partial_{x}+T\partial_{y}\) we have \[J_{x}+JJ_{y}-\frac{\Lambda_{y}}{2\Lambda}+\frac{\Lambda_{x}}{2\Lambda}J-\frac{\Lambda_{y}}{2\Lambda}J^{2}+\frac{\Lambda_{x}}{2\Lambda}J^{3}=0, \tag{27}\] \[M_{x}+MM_{y}-\frac{\Lambda_{y}}{2\Lambda}+\frac{\Lambda_{x}}{2\Lambda}M-\frac{\Lambda_{y}}{2\Lambda}M^{2}+\frac{\Lambda_{x}}{2\Lambda}M^{3}=0, \tag{28}\] \[N_{x}+NN_{y}-\frac{\Lambda_{y}}{2\Lambda}+\frac{\Lambda_{x}}{2\Lambda}N-\frac{\Lambda_{y}}{2\Lambda}N^{2}+\frac{\Lambda_{x}}{2\Lambda}N^{3}=0, \tag{29}\] \[T_{x}+TT_{y}-\frac{\Lambda_{y}}{2\Lambda}+\frac{\Lambda_{x}}{2\Lambda}T-\frac{\Lambda_{y}}{2\Lambda}T^{2}+\frac{\Lambda_{x}}{2\Lambda}T^{3}=0. \tag{30}\] Resolving (27,28,29) for \(J_{x},M_{x},N_{x}\), we get \[J_{x}=-\frac{\Lambda_{x}J^{3}-\Lambda_{y}J^{2}+(2\Lambda J_{y}+\Lambda_{x})J-\Lambda_{y}}{2\Lambda}, \tag{31}\] \[M_{x}=-\frac{\Lambda_{x}M^{3}-\Lambda_{y}M^{2}+(2\Lambda M_{y}+\Lambda_{x})M-\Lambda_{y}}{2\Lambda}, \tag{32}\] \[N_{x}=-\frac{\Lambda_{x}N^{3}-\Lambda_{y}N^{2}+(2\Lambda N_{y}+\Lambda_{x})N-\Lambda_{y}}{2\Lambda}. \tag{33}\] Substituting \(J_{x},M_{x},N_{x},T,T_{x}\), and \(T_{y}\) into (30), we obtain \[\frac{r(r-1)(J-M)(J-N)(M-N)u}{2\Lambda(J+(r-1)M-rN)^{3}}=0, \tag{34}\] where \[u=\Lambda_{x}(J-M)(J-N)(M-N)+2\Lambda((J-M)N_{y}-(J-N)M_{y}+(M-N)J_{y}).\] Each factor in the product \(r(r-1)(J-M)(J-N)(M-N)\) is non-zero, hence \(u=0\). Note also that \(J+(r-1)M-rN\neq 0\), since otherwise (25) would imply \(J=N\). Now we construct a map \(m:S\to SL_{2}(\mathbb{R})\), \(m=\left(\begin{array}{cc}A&B\\ C&E\end{array}\right),\) by resolving \[\frac{A+BJ}{C+EJ}=0,\quad\frac{A+BM}{C+EM}=1,\quad C+EN=0,\quad AE-BC=1 \tag{35}\] for \(A,B,C,E\), and show that \(I=\frac{Ap+Bq}{Cp+Eq}\) is a first integral. To this end we verify that \[\begin{array}{l}2\Lambda(CA_{x}-AC_{x})-\Lambda_{y}=0,\\ \\ 2\Lambda(EA_{x}-AE_{x}+CA_{y}-AC_{y}+CB_{x}-BC_{x})+\Lambda_{x}=0,\\ \\ 2\Lambda(EA_{y}-AE_{y}+CB_{y}-BC_{y}+EB_{x}-BE_{x})-\Lambda_{y}=0,\\ \\ 2\Lambda(EB_{y}-BE_{y})+\Lambda_{x}=0.\end{array} \tag{36}\] Then \(I\) is a first integral by Lemma 1.
From (35) we obtain \[A=\frac{J(M-N)E}{J-M},\quad B=-\frac{(M-N)E}{J-M},\quad C=-NE.\] Differentiating, we have \[A_{x}=-\frac{EM(M-N)J_{x}}{(J-M)^{2}}+\frac{EJ(J-N)M_{x}}{(J-M)^{2}}+\frac{E_{x}J(M-N)}{J-M}-\frac{EJN_{x}}{J-M},\] \[A_{y}=-\frac{EM(M-N)J_{y}}{(J-M)^{2}}+\frac{EJ(J-N)M_{y}}{(J-M)^{2}}+\frac{E_{y}J(M-N)}{J-M}-\frac{EJN_{y}}{J-M},\] as well as \[B_{x}=\frac{E(M-N)J_{x}}{(J-M)^{2}}-\frac{E(J-N)M_{x}}{(J-M)^{2}}-\frac{E_{x}(M-N)}{J-M}+\frac{EN_{x}}{J-M},\] \[B_{y}=\frac{E(M-N)J_{y}}{(J-M)^{2}}-\frac{E(J-N)M_{y}}{(J-M)^{2}}-\frac{E_{y}(M-N)}{J-M}+\frac{EN_{y}}{J-M},\] and \[\begin{array}{l}C_{x}=-EN_{x}-E_{x}N,\\ C_{y}=-EN_{y}-E_{y}N.\end{array}\] Now the expression \(2\Lambda(CA_{x}-AC_{x})-\Lambda_{y}(AE-BC)\) reads \[\frac{2\Lambda E^{2}MN(M-N)J_{x}}{(J-M)^{2}}-\frac{2\Lambda E^{2}JN(J-N)M_{x}}{(J-M)^{2}}+\frac{2\Lambda E^{2}JMN_{x}}{J-M}-\frac{\Lambda_{y}E^{2}(M-N)(J-N)}{J-M}.\] One checks that \[E^{2}=\frac{J-M}{(M-N)(J-N)},\] and obtains \[2\Lambda(CA_{x}-AC_{x})-\Lambda_{y}(AE-BC)=-\frac{uJMN}{(J-M)(M-N)(J-N)}.\] Similarly we get \[2\Lambda(EA_{x}-AE_{x}+CA_{y}-AC_{y}+CB_{x}-BC_{x})+\Lambda_{x}(AE-BC)=\frac{u(JN+JM+MN)}{(J-M)(M-N)(J-N)}\] and \[2\Lambda(EA_{y}-AE_{y}+CB_{y}-BC_{y}+EB_{x}-BE_{x})-\Lambda_{y}(AE-BC)=-\frac{u(J+M+N)}{(J-M)(M-N)(J-N)},\] and \[2\Lambda(EB_{y}-BE_{y})+\Lambda_{x}(AE-BC)=\frac{u}{(J-M)(M-N)(J-N)}.\] Since \(u=0\) and \(AE-BC=1\), the last 4 equations give (36). \(\square\)

## 6 Concluding remarks

One easily checks that the explicit examples of metrics constructed by Agapov and Shubin [AS-21] admit _projective_ vector fields, whose local flows map geodesics to geodesics but do not have to respect the metric (see [L-82]). It would be interesting to understand the relation between the existence conditions for projective vector fields and for fractional-linear integrals.

## Acknowledgments

This research was supported by FAPESP grant # 2022/12813-5 (S.I.A) and CAPES grant # 88882.434346/2019-01 (T.G.P.A).
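As a closing numerical illustration of the key invariance behind Theorem 11 (our own sketch, with arbitrary illustrative values): the cross-ratio (25) of four slopes is unchanged when all four slopes are transformed by one common fractional-linear map, exactly as the inclinations \(P_{i}=m^{-1}\cdot\lambda_{i}\) inherit the constant cross-ratio of the \(\lambda_{i}\).

```python
# Numeric check: the cross-ratio (25) of four slopes is invariant under a
# common Moebius (fractional-linear) map. Matrix entries and lambda values
# are arbitrary illustrative choices; exact rationals avoid rounding noise.
from fractions import Fraction

def cross_ratio(j, m, n, t):
    # Cross-ratio exactly as defined in (25)
    return (j - m) / (m - n) * (n - t) / (t - j)

def moebius(mat, s):
    (a, b), (c, e) = mat
    return (a * s + b) / (c * s + e)

mat = ((Fraction(2), Fraction(1)), (Fraction(3), Fraction(2)))  # det = 1
lams = [Fraction(k) for k in (0, 1, 3, 7)]
slopes = [moebius(mat, lam) for lam in lams]

print(cross_ratio(*lams))    # -2/7 ...
print(cross_ratio(*slopes))  # ... and again -2/7 after the Moebius map
```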
2309.14814
First measurements with monolithic active pixel test structures produced in a 65 nm CMOS process
The Inner Tracking System (ITS) of the ALICE experiment at CERN will undergo an upgrade during the LHC Long Shutdown 3, in which the three innermost tracking layers will be replaced. This upgrade, named the Inner Tracking System 3 (ITS3), employs stitched wafer-scale Monolithic Active Pixel Sensors fabricated in a 65 nm CMOS process. The sensors are 260 mm in length, thinned to less than 50 µm and then bent to form truly half-cylindrical half-barrels. The feasibility of this process for the ITS3 was explored with the first test production run (MLR1) in 2021, whose goal was to evaluate the charged particle detection efficiency and the sensor performance under non-ionising and ionising radiation up to the expected levels for ALICE ITS3 of $10^{13}$ $1$ MeV n$_{\mathrm{eq}}$ cm$^{-2}$ (NIEL) and 10 kGy (TID). Three sensor flavours were produced to investigate this process: Analog Pixel Test Structure (APTS), Circuit Exploratoire 65 (CE65) and Digital Pixel Test Structure (DPTS). This contribution gives an overview of the MLR1 submission and test results, describing the different sensor flavours and presenting the results of the performance measurements done with particle beams for various chip variants and irradiation levels.
M. Buckland
2023-09-26T10:27:46Z
http://arxiv.org/abs/2309.14814v2
# First measurements with monolithic active pixel test structures produced in a 65 nm CMOS process

###### Abstract

The Inner Tracking System (ITS) of the ALICE experiment at CERN will undergo an upgrade during the LHC Long Shutdown 3, in which the three innermost tracking layers will be replaced. This upgrade, named the Inner Tracking System 3 (ITS3), employs stitched wafer-scale Monolithic Active Pixel Sensors fabricated in a 65 nm CMOS process. The sensors are 260 mm in length, thinned to less than 50 µm and then bent to form truly half-cylindrical half-barrels. The feasibility of this process for the ITS3 was explored with the first test production run (MLR1) in 2021, whose goal was to evaluate the charged particle detection efficiency and the sensor performance under non-ionising and ionising radiation up to the expected levels for ALICE ITS3 of \(10^{13}\,1\,\mathrm{MeV}\)\(\mathrm{n}_{\mathrm{eq}}\)\(\mathrm{cm}^{-2}\) (NIEL) and 10 kGy (TID). Three sensor flavours were produced to investigate this process: Analog Pixel Test Structure (APTS), Circuit Exploratoire 65 (CE65) and Digital Pixel Test Structure (DPTS). This contribution gives an overview of the MLR1 submission and test results, describing the different sensor flavours and presenting the results of the performance measurements done with particle beams for various chip variants and irradiation levels.

Particle tracking detectors (Solid-state detectors), Front-end electronics for detector readout, Radiation-hard detectors

## 1 Introduction

The importance of Monolithic Active Pixel Sensors (MAPS) for use in the vertex and tracking detectors of high energy physics experiments has been established in the last decade. The most recent implementation of a large-area MAPS detector was the ALICE Inner Tracking System 2 (ITS2) [1] at CERN, which used the ALPIDE chip [2; 3]. This chip was fabricated in the TowerJazz 180 nm CMOS process and showed excellent performance in terms of detection efficiency (>99%) and spatial resolution (about 5 µm). During the LHC Long Shutdown 3 (2026-2028), the ITS2 will undergo an upgrade called the ITS3, where the three innermost tracking layers will be replaced. The ITS3 employs wafer-scale MAPS with a length of 260 mm that are thinned to \(<\)50 µm and bent to radii of 18 mm, 24 mm, and 30 mm to form cylindrical half-barrels. To obtain sensors of this length, a process called stitching is employed. In this process the reticles of the CMOS imaging process are joined together to produce a larger single sensor, removing the need for electrical services in the detector. By utilising the natural stiffness of the cylindrical geometry, the majority of the mechanical support can also be removed, using only carbon foam spacers as support structures. Finally, the very low power consumption (\(<\)20 mW/cm\({}^{2}\)) of the chip will allow the detector to be cooled by air. These reductions in the material within the sensitive volume will enable the ITS3 to have a very low material budget of \(<\)0.05% X\({}_{0}\) per layer. Altogether, these upgrades will provide exceptional tracking and vertexing capabilities, leading to an improvement in the pointing resolution over the current detector by a factor of two. Due to the challenging design that the ITS3 poses, the Tower Partners Semiconductor Co. (TPSCo) 65 nm CMOS imaging process [4] was chosen as the starting point.
The first submission in the 65 nm CMOS process, in conjunction with the CERN EP R&D on monolithic pixel sensors [5], was called MLR1 and contains many different test structures to fully explore the CMOS process.

## 2 MAPS in the 65 nm CMOS imaging process

Using a smaller CMOS process node compared to the 180 nm CMOS process of the ITS2 opens up more possibilities, such as:

* the ability to have complete coverage along the beam axis (\(z\)-direction) in the detector with a single stitched sensor produced on 300 mm wafers;
* lower power consumption when moving to deeper submicron processes.

However, moving to a new process also brings challenges, such as optimising the design for sensor yield and charge collection, and testing and verifying the radiation hardness. The three main sensor flavours produced are the Analog Pixel Test Structure (APTS), Circuit Exploratoire 65 (CE65), and Digital Pixel Test Structure (DPTS), each measuring 1.5 mm\(\times\)1.5 mm in size and highlighted in Fig. 1. In addition to the three sensor flavours, three process options were explored to investigate the charge collection properties of the CMOS process. These processes, shown in Fig. 2, are called standard, modified, and modified-with-gap and are similar to those used in the 180 nm CMOS process [6]. In the standard process, the epitaxial layer is only partially depleted, so some of the charges will undergo diffusion, and this process is expected to have the largest charge sharing. For the modified process, a low-dose n-type implant is added across the length of the pixel. This enables the epitaxial layer to be fully depleted and increases the lateral electric field towards the collection diode. Thus, it better collects the signal charge and accelerates the carriers towards the collection diode. This lateral electric field is further increased in the modified-with-gap process, where a gap in the low-dose n-implant is added at the edges of the pixel. The main goals of the MLR1 submission were to verify that the detection efficiency was above 99% and that the radiation hardness could reach the expected levels for the ITS3 (10\({}^{13}\) 1 MeV n\({}_{\rm eq}\) cm\({}^{-2}\) and 10 kGy).

Figure 1: The MLR1 reticle floor plan highlighting the APTS (orange), CE65 (green) and DPTS (blue) chips.

## 3 Analog Pixel Test Structure

The APTS incorporates a \(6\times 6\) pixel matrix with direct analogue readout of the central \(4\times 4\) pixels. Two versions of the output buffer were implemented: a source-follower (APTS-SF) and a fast operational amplifier (APTS-OA), the latter focused on time resolution. In addition, the sensor was produced with four different pixel pitches ranging from 10 µm to 25 µm. The goal of the APTS was to explore the different sensor designs and processes. The in-beam measurements of the APTS-SF shown in Fig. 3 (left) demonstrate the impact of the different process types on the detection efficiency. While all three can reach the desired 99%, it is clear that the standard process has a reduced performance at larger thresholds compared to the other two, due to the improved charge collection in the modified processes. The modified-with-gap process shows the largest detection efficiency over the whole measured threshold range. Comparing the detection efficiency for different pitches, Fig. 3 (right), it can be seen that below a threshold of \(200\,\mathrm{e}^{-}\) there is minimal difference among the pitches.
However, above this value, larger pitches result in higher efficiencies. Timing measurements were also performed in-beam with two APTS-OA, resulting in a timing resolution of \((77\pm 5)\) ps, Fig. 4.

## 4 Circuit Exploratoire 65

The CE65 is a "large"-area chip consisting of an analogue rolling shutter readout with an integration time of 50 µs. One type of pixel matrix consists of \(64\times 32\) pixels implemented with a pixel pitch of 15 µm and split into three subvariants that differ by their in-pixel amplifier: AC, DC or SF. The other type of pixel matrix contains \(48\times 32\) pixels implemented with a pitch of 25 µm. The goal of the CE65 was to study the pixel matrix uniformity.

Figure 2: The three process options implemented in the MLR1 chips.

Figure 3: Comparison of the detection efficiency vs. threshold for an APTS-SF sensor for the three processes (left, see text for details) and different pixel pitches (right).

Figure 5 shows the seed pixel distributions obtained from in-beam measurements of the different CE65 variants. There is a clear distinction between the standard and the modified-with-gap processes, with the latter having a larger most probable value (MPV), signifying a larger charge collection depth. The difference between the amplifiers is minimal for the modified-with-gap process. However, for the standard process, it can be seen that the AC-coupled amplifier has a larger MPV.

## 5 Digital Pixel Test Structure

The DPTS features a \(32\times 32\) pixel matrix with a pitch of 15 µm implemented in the modified-with-gap process and contains a full digital front-end with asynchronous readout [7]. The sensor is controlled by a set of external reference currents and voltages and read out via a current mode logic (CML) output [8, 9]. All the pixels are read out simultaneously via a differential digital output that time-encodes the pixel position and the Time-over-Threshold (ToT). The goal of the DPTS was to study the in-pixel full-digital front end.

Figure 4: The time residual distribution of two APTS-OA sensors fitted with a Gaussian function to extract the timing resolution.

Figure 5: The in-beam seed pixel distribution comparing the response of the different CE65 variants.

The performance of DPTS chips at various irradiation levels, measured at a temperature of +20 \({}^{\circ}\)C, was evaluated using in-beam measurements, the results of which are shown in Fig. 6. For all irradiation levels, the sensor shows an excellent detection efficiency above 99% and a spatial resolution below the binary resolution (pixel pitch / \(\sqrt{12}\)), while preserving a fake-hit rate below 10 pixel\({}^{-1}\) s\({}^{-1}\). Non-ionising irradiation leads to a decrease in the detection efficiency, while ionising irradiation leads to an increase in the fake-hit rate. The irradiation has a negligible impact on the spatial resolution at the tested levels, whereas the average cluster size shows a slight decrease with increasing non-ionising dose. To investigate the cause of the detection efficiency loss in the sensor, the detection efficiency was studied as a function of the in-pixel position, as shown in Fig. 7 for a sensor irradiated to \(10^{15}\) 1 MeV \(\mathrm{n_{eq}}\)\(\mathrm{cm^{-2}}\).
It can be seen that the further the track is from the collection diode at the centre of the pixel, the smaller the detection efficiency, an effect becoming particularly acute in the corners of the pixel.

## 6 Summary

The performance of the MLR1 chips was evaluated through extensive characterisation in the laboratory and with in-beam measurements.

Figure 6: The detection efficiency and fake-hit rate vs. threshold (top) and the spatial resolution and average cluster size vs. threshold (bottom) for DPTS chips irradiated to various levels [7].

The measurements show that the MLR1 was a success thanks to the large number of operational prototypes, which allow the parameter space of the CMOS process to be mapped out. Furthermore, the MLR1 structures exhibit excellent performance in terms of detection efficiency (\(>\)99%) and spatial resolution (3-4.5 µm) in the in-beam measurements for all three sensor flavours considered. The radiation hardness is demonstrated by the sensors maintaining a detection efficiency above 99% when irradiated with doses at the expected ITS3 levels, \(10^{13}\) 1 MeV \(\mathrm{n_{eq}}\)\(\mathrm{cm^{-2}}\) (NIEL) and 10 kGy (TID). The radiation hardness actually exceeds this goal, as the desired performance is maintained even for sensors irradiated up to \(10^{15}\) 1 MeV \(\mathrm{n_{eq}}\)\(\mathrm{cm^{-2}}\) and operated at +20 \({}^{\circ}\)C. In addition, the APTS-OA has demonstrated a time resolution of \((77\pm 5)\) ps. With these results from the MLR1, the detection efficiency and radiation hardness have been validated, an important milestone in the R&D for the ALICE ITS3. The next step towards a wafer-scale bent sensor is the second submission in the 65 nm process, designated ER1, whose goal is the validation of stitching and yield via full-scale sensor prototypes.
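As a back-of-the-envelope check of the binary-resolution figure quoted above (our own arithmetic; the pitch list assumes the four APTS pitches in the stated 10-25 µm range are 10, 15, 20 and 25 µm):

```python
# Back-of-the-envelope check of the binary spatial resolution pitch/sqrt(12)
# mentioned in Section 5. Pitch values assumed from the 10-25 um APTS range.
import math

for pitch_um in (10, 15, 20, 25):
    print(f"pitch {pitch_um:2d} um -> binary resolution "
          f"{pitch_um / math.sqrt(12):.2f} um")

# The 15 um DPTS pitch gives ~4.33 um, so the measured 3-4.5 um resolutions
# are indeed at or below the binary limit.
```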
2309.14800
3D Density-Gradient based Edge Detection on Neural Radiance Fields (NeRFs) for Geometric Reconstruction
Generating geometric 3D reconstructions from Neural Radiance Fields (NeRFs) is of great interest. However, accurate and complete reconstructions based on the density values are challenging. The network output depends on input data, NeRF network configuration and hyperparameters. As a result, the direct usage of density values, e.g. via filtering with global density thresholds, usually requires empirical investigations. Under the assumption that the density increases from non-object to object area, the utilization of density gradients from relative values is evident. As the density represents a position-dependent parameter it can be handled anisotropically, therefore processing of the voxelized 3D density field is justified. In this regard, we address geometric 3D reconstructions based on density gradients, whereas the gradients result from 3D edge detection filters of the first and second derivatives, namely Sobel, Canny and Laplacian of Gaussian. The gradients rely on relative neighboring density values in all directions, thus are independent from absolute magnitudes. Consequently, gradient filters are able to extract edges along a wide density range, almost independent from assumptions and empirical investigations. Our approach demonstrates the capability to achieve geometric 3D reconstructions with high geometric accuracy on object surfaces and remarkable object completeness. Notably, the Canny filter effectively eliminates gaps, delivers a uniform point density, and strikes a favorable balance between correctness and completeness across the scenes.
Miriam Jäger, Boris Jutzi
2023-09-26T09:56:27Z
http://arxiv.org/abs/2309.14800v1
# 3D Density-Gradient based Edge Detection on Neural Radiance Fields (NeRFs) for Geometric Reconstruction

###### Abstract

Generating geometric 3D reconstructions from Neural Radiance Fields (NeRFs) is of great interest. However, accurate and complete reconstructions based on the density values are challenging. The network output depends on input data, NeRF network configuration and hyperparameters. As a result, the direct usage of density values, e.g. via filtering with global density thresholds, usually requires empirical investigations. Under the assumption that the density increases from non-object to object area, the utilization of density gradients from relative values is evident. As the density represents a position-dependent parameter it can be handled anisotropically, therefore processing of the voxelized 3D density field is justified. In this regard, we address geometric 3D reconstructions based on density gradients, whereas the gradients result from 3D edge detection filters of the first and second derivatives, namely Sobel, Canny and Laplacian of Gaussian. The gradients rely on relative neighboring density values in all directions, thus are independent from absolute magnitudes. Consequently, gradient filters are able to extract edges along a wide density range, almost independent from assumptions and empirical investigations. Our approach demonstrates the capability to achieve geometric 3D reconstructions with high geometric accuracy on object surfaces and remarkable object completeness. Notably, the Canny filter effectively eliminates gaps, delivers a uniform point density, and strikes a favorable balance between correctness and completeness across the scenes.

Neural Radiance Fields, Density Field, Density Gradient, Sobel, Canny, Laplacian of Gaussian, 3D Reconstruction

## 1 Introduction

Neural Radiance Fields (NeRFs) (Mildenhall et al., 2020) pioneered computer graphics and computer vision by enabling the rendering of novel views through view synthesis from neural networks. These networks estimate density and color values for each position in 3D space based on input image data and camera poses. Generating accurate and complete 3D reconstructions from Neural Radiance Fields (NeRFs) is of interest in the field of photogrammetry. Through the utilization of the estimated density values, NeRF-based 3D reconstructions are possible. More precisely, the density can be considered as a kind of pseudo-probability for the occurrence of an object in 3D space (Jäger et al., 2023). Nevertheless, filtering with global density thresholds is empirical and requires sufficient analysis of its geometric correctness. Accordingly, the 3D reconstruction depends on the chosen density threshold and often yields noisy and incomplete surfaces (Li et al., 2023; Wang et al., 2021). The assumption that the density increases from non-object to object areas motivates processing the 3D scene in terms of its density gradients. As the density represents a position-dependent parameter, it can be addressed anisotropically, justifying a ray-independent sampling. For this reason, we propose to perform geometric 3D reconstruction with respect to density gradients, where the key aspect is the utilization of 3D gradient filters for 3D edge detection. This allows the extraction of edges along a wide density range based on gradients of relative neighboring values.

We introduce a straightforward workflow for enabling geometric 3D reconstruction from NeRFs with the 3D density gradients based on the first and second derivatives, while using gradient filters for edge detection in the voxelized 3D density field. In order to evaluate the geometric accuracy and robustness of our framework, we address the DTU benchmark dataset (Jensen et al., 2014) with different types of real objects, which feature different sizes, structures, materials, textures and colors.

Figure 1: Density gradient, e.g., in 1D along a ray. The illustrations display the characteristics of the density values exemplarily as they would occur during ray tracing towards an object. From top to bottom: an ideal edge in a binary non-object to object space; the raw density values; the first derivative of the density values (edge at the maximum); the second derivative of the density values (edge at the zero crossing).
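The behaviour sketched in Figure 1 is easy to reproduce numerically. The following sketch (a synthetic 1D density ramp of our own, not the paper's code) locates the edge both at the maximum of the first derivative and at the zero crossing of the second:

```python
# 1D illustration of Figure 1: on a synthetic non-object -> object density
# ramp along a ray, the edge appears at the maximum of the first derivative
# and at the zero crossing of the second derivative.
import numpy as np

t = np.linspace(0.0, 1.0, 200)                     # samples along the ray
density = 1.0 / (1.0 + np.exp(-40.0 * (t - 0.5)))  # smooth step at t = 0.5

d1 = np.gradient(density, t)   # first derivative
d2 = np.gradient(d1, t)        # second derivative

edge_d1 = t[np.argmax(d1)]
crossings = np.where(np.diff(np.sign(d2)) != 0)[0]  # sign changes of d2
edge_d2 = t[crossings[len(crossings) // 2]]

print(edge_d1, edge_d2)        # both close to 0.5
```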
We introduce a straightforward workflow for enabling geometric 3D reconstruction from NeRFs with the 3D density gradients based on the first and second derivative, while using gradient filter for edge detection in the voxelized 3D density field. In order to evaluate the geometric accuracy and robustness of our framework, we address the DTU benchmark dataset (Jensen et al., 2014) with different types of real objects, which feature different sizes, structures, materials, textures and colors. Figure 1: Density Gradient e.g. for 1D on a ray. The illustrations display the characteristics of the density values exemplarily as they would occur during ray tracing into the direction of an object. From top to bottom: An ideal edge in a binary non-object to object space, the raw density values, the first derivative of the density values (edge at the maximum), the second derivative of the density values (edge at the zero crossing). Related Work In this section, we briefly summarize related work to our research. Firstly, we give an overview on basic, recent research and developments on NeRFs. Following this, we address recent research on neural surface reconstructions. Neural Radiance FieldsThe foundation for Neural Radiance Fields (NeRFs) was established by Scene Representation Networks (SRNs) (Sitzmann et al., 2019). Their underlying principle is modeling the scene as a function of 3D coordinates within it. It was followed by the groundbreaking research work of Neural Radiance Fields (Mildenhall et al., 2020). The network enables the estimation of color and density values for each 3D position through 6D camera poses and associated 2D images by training a neural network with multi-layer perceptrons (MLPs). The vanilla NeRF was followed by thousands of publications driving research and development in various domains. Scalability enhancements are demonstrated by Mega-NeRF (Turki et al., 2022) and Block-NeRF (Tancik et al., 2022), which employ data partitioning and the training of several NeRFs. Bundle Adjusting Radiance Fields (BaRF) (Lin et al., 2021) and Gaussian Activated Radiance Fields (GaRF) (Chng et al., 2022) address the task of a camera pose estimation. Dynamic contributions use time as an additional input dimension for time-dependent rendering (Pumarola et al., 2021) or for preventing the occurrence of artifacts due to dynamic pixels (Gao et al., 2021). Several Methods such as AdaNeRF (Kurz et al., 2022), FastNeRF (Garbin et al., 2021) and Instant NGP (Muller et al., 2022) focus on faster training or rendering. While Instant NGP uses a combination of small MLPs and spatial hash table encoding. Neuralangelo (Li et al., 2023) adapts Instant NGP and combines hash grids with neural surface rendering for high-fidelity surface reconstruction. Besides the neural methods, non-neural research like Plenoxels (Fridovich-Keil et al., 2022) have been introduced. Neural Surface ReconstructionsRegarding neural surface reconstructions Unisurf (Occhsle et al., 2021) learns implicit surfaces, by addressing the occupancy along rays. Several works such as NeuS (Wang et al., 2021) and VolSDF (Yariv et al., 2021) represent the scene by neural Signed Distance Functions (SDFs) (Park et al., 2019). Neuralwarp (Darmon et al., 2022) builds on VolSDF, whereas using Structure from Motion information to guide surface optimizations. ## 3 Methodology Firstly, in Section 3.1 describes the principal motivation for density gradients underlying our framework. 
Secondly, Sections 3.2 and 3.3 explain the calculation of the first and second derivatives for the density gradients. Finally, Section 3.4 outlines the evaluation process, which focuses on completeness and correctness.

### 3.1 Density Gradient

Reconstructions based on filtering the NeRF density output by global density thresholds require adaptive adjustments, since the behavior of the density values differs for various NeRFs, datasets, hyperparameters and network configurations. Accordingly, the 3D reconstruction depends on the chosen threshold and often turns out non-optimal, noisy or incomplete (Wang et al., 2021; Oechsle et al., 2021; Li et al., 2023). Several previous works consider ray-based 3D reconstruction with NeRFs or SDFs (Oechsle et al., 2021; Wang et al., 2021; Darmon et al., 2022). Nevertheless, the density values are in principle anisotropic and position-dependent. For this reason, we propose to process the geometric 3D reconstruction in the dense voxelized 3D density field. With the aim of performing position-dependent 3D reconstructions, regardless of global density thresholds, we introduce 3D gradient filters. To identify edges characterized by variations in magnitudes, hence density values, we extend the two-dimensional edge filters to three dimensions over the 3D density field. In doing so, the density gradients instead of the raw density values from NeRFs are regarded, since the density value increases towards the object. The extraction of the edges can rely on the first as well as the second derivative of the density, see Figure 1. Thereby, we guarantee anisotropy as well as the consideration of neighborhoods in the reconstruction process.

### 3.2 First Derivative

Edges in images as well as in 3D voxel space can be detected based on the first derivatives, i.e. the corresponding density gradients in this case.

**Sobel filter.** We address the well-established Sobel filter (Sobel and Feldman, 1973) for edge detection, which performs a smoothing orthogonal to the first derivative. As the processing is done in the 3D density field, the 3D Sobel filter is built up of three \(3\times 3\) slices for each direction x, y and z; e.g., for the x-direction (Sobel and Feldman, 1973): \[s_{-1}=s_{1}=\begin{bmatrix}-1&0&1\\ -2&0&2\\ -1&0&1\end{bmatrix},\qquad s_{0}=2\,s_{-1},\] i.e. the slices of the separable \(3\times 3\times 3\) kernel in which the derivative mask \([-1,0,1]\) acts along x and the smoothing mask \([1,2,1]\) acts along the two orthogonal directions.

**Canny filter.** In addition, we apply the Canny filter (Canny, 1986), which extends the gradient-based edge detection by Gaussian pre-smoothing, non-maximum suppression of the gradient magnitude, and hysteresis thresholding with two relative thresholds.

### 3.3 Second Derivative

Besides the Sobel filter and the extension of the Canny filter, the second derivative provides a basic approach to edge detection based on differences of neighboring values, while edges result from the zero crossings. Since the second derivative is usually sensitive to noise, a previous smoothing of the values is essential.

**Laplacian of Gaussian.** From this point the Laplacian of Gaussian filter (Marr and Hildreth, 1980) (LOG), also referred to as the Marr-Hildreth operator, is suitable. It combines the second derivative with a Gaussian filter in order to smooth the values. For a fast implementation the Difference of Gaussians (DoG) can be applied, which approximates the LOG. Similar to the filters of the first derivative, we apply the filter on the voxelized 3D density field and refer to it as \(\Delta_{\delta,\text{LOG}}^{2}\) in the following.

### 3.4 Evaluation

**Completeness.** In general, we report qualitative completeness on the basis of the resulting 3D reconstructions. Furthermore, the completeness is measured quantitatively. The reconstructions from the voxelized 3D density field include predicted points inside the object, and the reference point cloud contains large gaps.
We report the number of points and the percentage of points of the NeRF reconstructions covered within a distance threshold, i.e. a maximum distance from the reference. A higher score indicates a higher object completeness.

**Correctness** To evaluate the geometric accuracy of the 3D reconstructions quantitatively as well as qualitatively, the Chamfer cloud-to-cloud distance from the DTU dataset evaluation script (Jensen et al., 2014) is applied. We report both the distance from data to reference (data-to-reference) and vice versa (reference-to-data). Although the reference point cloud has gaps, the data-to-reference distance as well as the reference-to-data distance are interpreted as accuracy or correctness.

## 4 Experiments

In this section, we conduct experiments on a challenging benchmark dataset with different types of real objects, which feature different sizes, structures, materials, textures and colors.

### Dataset

For the evaluation of our framework, we use the DTU benchmark dataset (Jensen et al., 2014). The dataset consists of scenes featuring real objects, including images, corresponding camera poses, and reference point clouds obtained from a structured-light scanner (SLS). We specifically focus on six scenes within the dataset, the same as in (Wang et al., 2021; Oechsle et al., 2021; Darmon et al., 2022; Li et al., 2023), each containing either 49 or 64 RGB images.

### Implementation

For all investigations, Instant NGP (Muller et al., 2022) is used as the NeRF, since it enables real-time training and rendering. Regarding the network architecture, the basic NeRF architecture with ReLU activations and hash encoding is selected, while the training incorporates 50 000 training steps on an NVIDIA RTX3090 GPU.

### Experiments

We evaluate our framework with the first derivative Sobel and Canny filters as well as with the second derivative Laplacian of Gaussian filter against different global density thresholds. Thereby, qualitative as well as quantitative results based on completeness and correctness, as described in the evaluation Section 3.4, are considered. The global density thresholds \(\delta_{t}\) are set to 25, 50, and 100 (Wang et al., 2021). The Sobel filter is used as described in Section 3.2, and the Canny filter is applied with a standard deviation of 0.1 and relative thresholds of 0.0005 and 4 times 0.0005. For the Laplacian of Gaussian, a filter mask of 7\(\times\)7\(\times\)7 and a standard deviation of 7 is utilized.

## 5 Results

In the following sections, we show qualitative (Section 5.1) and quantitative (Section 5.2) results of the geometric reconstructions on the benchmark dataset by addressing completeness and correctness.

### Qualitative results

As Figures 2 and 4 show, the density gradient-based approach with 3D edge detection filters yields promising results. In contrast, the optimal global density threshold varies from scene to scene and requires adaptive adjustment. By addressing the density gradient, consistently accurate and complete results are generated across all scenes.

**Completeness** The visual comparison of the colored geometric reconstructions in Figure 2 highlights the reconstruction quality and object completeness based on density gradients. The reconstructions, exemplified for a global density threshold \(\delta_{t}=50\), exhibit different levels of gaps in the point clouds. In almost all scenes, gaps appear in the reconstructions along with areas of extremely high point density.
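For orientation in the following per-filter comparison, the filters of Sections 3.2 and 3.3 can be prototyped in a few lines. A minimal sketch, assuming the NeRF density has been sampled into a 3D numpy array; the threshold value and the wrap-around of `np.roll` at the grid border are simplifications of ours:

```python
# Sketch of the first and second derivative edge filters (Sections 3.2, 3.3)
# applied to a voxelized density field; parameter values are illustrative.
import numpy as np
from scipy import ndimage

density = np.random.rand(64, 64, 64)   # placeholder for the sampled NeRF density

def sobel_edges(density, rel_threshold=0.1):
    # First derivative: 3D Sobel gradient magnitude over x, y and z.
    grad_sq = sum(ndimage.sobel(density, axis=ax)**2 for ax in range(3))
    magnitude = np.sqrt(grad_sq)
    return magnitude > rel_threshold * magnitude.max()   # binary edge voxels

def log_edges(density, sigma=1.5):
    # Second derivative: Laplacian of Gaussian, edges at the zero crossings.
    log = ndimage.gaussian_laplace(density, sigma=sigma)
    sign = np.sign(log)
    zero_crossing = np.zeros(density.shape, dtype=bool)
    for ax in range(3):               # sign change towards any grid neighbour
        zero_crossing |= sign != np.roll(sign, 1, axis=ax)
    return zero_crossing

surface_voxels = np.argwhere(sobel_edges(density))       # (N, 3) voxel indices
```

The voxel indices are then mapped back to world coordinates through the sampling grid before the evaluation of Section 3.4.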
Also the reconstructions resulting from the first derivative Sobel filter \(\Delta_{\delta,\text{Sobel}}\) perform slightly differently depending on the scene. For certain scenes like scan40 and scan55 the detected edges seem to be located too far above the reference surface. For the other scenes, however, substantial gaps exist. The Canny filter \(\Delta_{\delta,\text{Canny}}\) provides the strongest visual results. Besides colorful and smooth objects like scene scan63, also complex collections like in scene scan37 can be reconstructed almost completely. In general, the Canny filter effectively eliminates gaps and delivers a uniform point density. In addition, the subsurfaces of the scenes containing a colored ground are well captured. The second derivative with the Laplacian of Gaussian \(\Delta_{\delta,\text{LOG}}^{2}\) also reaches a complete reconstruction at first sight. However, especially at scene scan55 and partly at scene scan40 the point cloud tends to be rather fuzzy and noisy.

**Correctness** Besides the visually strong results, the density gradient applications also provide geometrically promising results (Figure 4). Both the global density thresholds as well as the gradient filters enable results with accuracies up to 2.5 mm for most parts of the object surfaces. Nevertheless, especially in reconstructions from global density thresholds and the Sobel filter \(\Delta_{\delta,\text{Sobel}}\), some artifacts up to 10 mm appear. In contrast, the Canny filter \(\Delta_{\delta,\text{Canny}}\) provides mostly consistent high accuracies of about 1.5 mm. The geometric accuracy of the Laplacian of Gaussian \(\Delta_{\delta,\text{LOG}}^{2}\) depends highly on the scene and appears quite noisy. Note that edged areas with a large deviation are mainly due to missing points in the SLS reference, which are depicted in the images and therefore in the reconstruction from the 3D density field.

Figure 2: Qualitative comparison on the real DTU benchmark dataset. Comparison between the reference point clouds and the geometric reconstructions from the 3D density field using a global density threshold \(\delta_{t}=50\), density gradients from the Sobel filter \(\Delta_{\delta,\text{Sobel}}\), the Canny filter \(\Delta_{\delta,\text{Canny}}\) and the Laplacian of Gaussian \(\Delta_{\delta,\text{LOG}}^{2}\).

### Quantitative results

**Completeness** The completeness reached by the different NeRF reconstructions is shown in Table 1, specified by the absolute number of points as well as by the percentage. Altogether, the completeness values for the distance thresholds up to 1 and 1.5 mm exceed 60 \(\%\) for all methods. As expected, an increase of the density threshold \(\delta_{t}\) causes a decrease of the completeness, because more points are removed. Accordingly, using a density threshold of 25 results in the highest completeness for this dataset, with a mean across scenes of approximately 96 \(\%\) for points below 1.5 mm and 93 \(\%\) up to 1 mm accuracy. The results from density gradients through the Canny filter also stand out strongly. On average, 96 \(\%\) completeness is achieved for 1.5 mm and 86 \(\%\) for 1 mm. Taking a more detailed look, the methods perform variably for each scene. Canny performs particularly well on complex, specular and smooth objects as in scenes scan37, scan63 or scan114. However, at fine detail levels and for rough objects like in scenes scan24 and scan55, the completeness quickly weakens for highly accurate reconstructions up to 0.5 mm.
Nonetheless, the completeness decreases significantly when using global density thresholds starting from \(\delta_{t}=50\) and cannot compete with the Canny filter.

**Correctness** Since NeRFs estimate values in the entire 3D space and consequently inside the objects, there may be artifactual points within the objects, which affect the quantitative accuracy results in terms of correctness. Accordingly, the results (Table 2) are mostly in the same, rather coarse, range of accuracies up to 6 mm from the NeRF reconstructions to the reference (data-to-reference). While using a global threshold performs differently depending on the scene, using density gradients remains consistently stable across all scenes. Nevertheless, the points within the object undermine the interpretability of the results. To emphasize the accuracy potential of density gradients when considering only the reconstructed surface points, Figure 3 shows the surface points below 0.5, 1.0 and 1.5 mm for a result based on the Canny filter. It illustrates that the Canny filter approach generates a densely sampled scene whose surface points exhibit high geometric accuracies. When viewing the accuracy from the reference to the 3D reconstructions (reference-to-data), the density gradients also stand out positively in the achieved correctness compared to the global density thresholds.

\begin{table}
\begin{tabular}{l l l l l l l l}
\hline \hline
 & scan24 & scan37 & scan40 & scan55 & scan63 & scan114 & mean in \(\%\) \\ \hline
1.5 mm & & & & & & & \\
\(\delta_{t}=25\) & 3.08 (**96.99**\(\%\)) & 2.27 (**97.58**\(\%\)) & 2.95 (**98.36**\(\%\)) & 2.96 (**90.80**\(\%\)) & 1.10 (97.51 \(\%\)) & 3.02 (**97.66**\(\%\)) & **96.48** \\
\(\delta_{t}=50\) & 2.98 (93.79 \(\%\)) & 2.10 (90.48 \(\%\)) & 2.86 (95.31 \(\%\)) & 2.35 (72.22 \(\%\)) & 0.87 (77.42 \(\%\)) & 2.79 (90.27 \(\%\)) & 86.58 \\
\(\delta_{t}=100\) & 2.79 (87.90 \(\%\)) & 1.66 (71.54 \(\%\)) & 2.66 (88.51 \(\%\)) & 1.94 (59.53 \(\%\)) & 0.47 (41.71 \(\%\)) & 2.39 (77.17 \(\%\)) & 71.07 \\
\(\Delta_{\delta,\text{Sobel}}\) & 3.03 (**95.24**\(\%\)) & 2.03 (87.30 \(\%\)) & 2.88 (96.01 \(\%\)) & 2.09 (63.98 \(\%\)) & 1.02 (90.83 \(\%\)) & 2.40 (77.48 \(\%\)) & 85.14 \\
\(\Delta_{\delta,\text{Canny}}\) & 3.00 (94.43 \(\%\)) & 2.26 (**97.28**\(\%\)) & 2.97 (**98.93**\(\%\)) & 2.90 (**88.91**\(\%\)) & 1.12 (**99.49**\(\%\)) & 3.02 (**97.54**\(\%\)) & **96.10** \\
\(\Delta_{\delta,\text{LOG}}^{2}\) & 2.73 (85.76 \(\%\)) & 2.21 (95.08 \(\%\)) & 2.79 (93.11 \(\%\)) & 2.52 (77.15 \(\%\)) & 1.11 (**98.30**\(\%\)) & 2.95 (95.33 \(\%\)) & 90.79 \\
1.0 mm & & & & & & & \\
\(\delta_{t}=25\) & 2.98 (**93.60**\(\%\)) & 2.21 (**94.89**\(\%\)) & 2.87 (**95.80**\(\%\)) & 2.82 (**86.43**\(\%\)) & 1.06 (**93.80**\(\%\)) & 2.95 (**95.55**\(\%\)) & **93.35** \\
\(\delta_{t}=50\) & 2.82 (88.73 \(\%\)) & 1.92 (82.80 \(\%\)) & 2.71 (90.50 \(\%\)) & 2.17 (66.65 \(\%\)) & 0.75 (66.83 \(\%\)) & 2.64 (85.25 \(\%\)) & 80.13 \\
\(\delta_{t}=100\) & 2.52 (79.38 \(\%\)) & 1.41 (60.58 \(\%\)) & 2.43 (80.96 \(\%\)) & 1.86 (57.08 \(\%\)) & 0.36 (32.32 \(\%\)) & 2.11 (68.22 \(\%\)) & 63.09 \\
\(\Delta_{\delta,\text{Sobel}}\) & 2.94 (**92.43**\(\%\)) & 1.86 (80.17 \(\%\)) & 2.82 (**93.92**\(\%\)) & 2.02 (61.87 \(\%\)) & 0.96 (84.94 \(\%\)) & 2.23 (72.00 \(\%\)) & 80.88 \\
\(\Delta_{\delta,\text{Canny}}\) & 2.46 (77.22 \(\%\)) & 2.08 (**89.46**\(\%\)) & 2.73 (91.05 \(\%\)) & 2.27 (**69.57**\(\%\)) & 1.10 (**98.06**\(\%\)) & 2.87 (**92.73**\(\%\)) & **86.35** \\
\(\Delta_{\delta,\text{LOG}}^{2}\) & 2.13 (66.94 \(\%\)) & 1.93 (83.15 \(\%\)) & 2.17 (72.36 \(\%\)) & 1.98 (60.66 \(\%\)) & 1.03 (91.48 \(\%\)) & 2.57 (83.18 \(\%\)) & 76.30 \\
0.5 mm & & & & & & & \\
\(\delta_{t}=25\) & 2.63 (**82.86**\(\%\)) & 1.81 (**77.70**\(\%\)) & 2.28 (**75.86**\(\%\)) & 2.20 (**67.46**\(\%\)) & 0.92 (**81.48**\(\%\)) & 2.73 (**88.21**\(\%\)) & **78.93** \\
\(\delta_{t}=50\) & 2.31 (72.66 \(\%\)) & 1.39 (59.63 \(\%\)) & 2.00 (66.74 \(\%\)) & 1.60 (49.03 \(\%\)) & 0.55 (48.97 \(\%\)) & 2.22 (**71.66**\(\%\)) & 61.45 \\
\(\delta_{t}=100\) & 1.70 (53.37 \(\%\)) & 0.88 (37.83 \(\%\)) & 1.59 (53.03 \(\%\)) & 1.21 (37.15 \(\%\)) & 0.22 (19.48 \(\%\)) & 1.49 (48.03 \(\%\)) & 41.48 \\
\(\Delta_{\delta,\text{Sobel}}\) & 2.69 (**84.67**\(\%\)) & 1.40 (**60.12**\(\%\)) & 2.43 (**80.86**\(\%\)) & 1.77 (**54.22**\(\%\)) & 0.81 (72.39 \(\%\)) & 1.89 (61.00 \(\%\)) & **68.88** \\
\(\Delta_{\delta,\text{Canny}}\) & 1.14 (35.85 \(\%\)) & 1.00 (42.86 \(\%\)) & 1.25 (41.69 \(\%\)) & 0.80 (24.67 \(\%\)) & 0.87 (77.51 \(\%\)) & 1.96 (63.55 \(\%\)) & 47.69 \\
\(\Delta_{\delta,\text{LOG}}^{2}\) & 1.20 (37.66 \(\%\)) & 1.13 (48.73 \(\%\)) & & & & & \\
\hline \hline
\end{tabular}
\end{table}

Figure 4: Qualitative comparison on the real DTU benchmark dataset with Chamfer cloud-to-cloud distances. Comparison between the reference point clouds and the geometric reconstructions from the 3D density field using a global density threshold \(\delta_{t}=50\), density gradients from the Sobel filter \(\Delta_{\delta,\text{Sobel}}\), the Canny filter \(\Delta_{\delta,\text{Canny}}\) and the Laplacian of Gaussian \(\Delta_{\delta,\text{LOG}}^{2}\).

## 6 Discussion

In this paper, we investigate density gradients from the NeRF density output for achieving high geometric completeness and correctness in 3D reconstructions. The application of gradient filters on the density field for 3D edge detection shows remarkable results compared to the usage of global density thresholds. The latter often leads to gaps or artifacts in the reconstructions, depending on the chosen threshold. However, by extracting surfaces based on density gradients, we can overcome this issue. The qualitative results of the density gradients, especially of the Canny filter, consistently stand out positively across all scenes in terms of completeness; this aligns with both the quantitative and the qualitative results. While global density thresholds yield good results, scene-dependent accuracy variations exist. For some scenes the Sobel filter as well as the Laplacian of Gaussian also provide suitable results and are alternatives to global density thresholds. Nevertheless, the improvements with the Canny filter for object edge detection outperform the other techniques, ensuring nearly gapless reconstructions of objects and subsurfaces in all scenes. The trade-off between correctness and completeness with global density thresholds is evident: lower density thresholds lead to higher completeness but not necessarily superior correctness. The density gradients, especially based on the Canny filter, strike a favorable balance between correctness and completeness across the scenes. Although our framework achieves high accuracy on the object surfaces, it should be noted that points exist within the objects due to the processing of the whole voxelized density field. These artifactual points distort the quantitative correctness and do not contribute to the visual appearance and surface accuracy. Limitations are given by the processing within the entire density grid, thus causing artifacts within the object.
This issue may be addressed by extracting only the surface, e.g., using convex hull algorithms. In addition, we aim to apply neural methods for 3D edge detection. The range and the values of the density among different NeRFs, hyperparameters, network configurations, and scenes are variable; dealing with absolute density values therefore presents a challenge. The density gradients are almost independent of absolute density magnitudes and rely on relative neighboring values in all directions by applying 3D edge detection filters. A notable advantage of our approach is its applicability to different applications. Density gradients allow us to extract surfaces along lower density values using 3D edge detection filters such as Sobel, Canny and Laplacian of Gaussian.

## 7 Conclusion

In summary, we have demonstrated that density gradient-based geometric reconstructions lead to high completeness and adequate accuracy. By considering relative density variations, i.e. gradients based on the first and second derivatives, our approach shows potential for application to various NeRFs that allow the extraction of a regular voxelized density field. This makes our approach rather independent of the absolute NeRF density output. Furthermore, when filtering with global density thresholds, the points are considered individually and independently. In contrast, the utilization of 3D gradient filters leverages the inclusion of gradient-based neighborhood information in an anisotropic manner. Therefore, our approach provides a promising anisotropic solution for complete 3D reconstruction from NeRFs with high geometric accuracy. Consequently, our method introduces a promising solution that is independent of the absolute density magnitude and opens new possibilities for reconstructions using NeRFs.
2301.13795
Tunable BCS-BEC crossover, reentrant, and hidden quantum phase transitions in two-band superconductors with tunable valence and conduction bands
Two-band electronic structures with a valence and a conduction band separated by a tunable energy gap and with pairing of electrons in different channels can be relevant to investigate the properties of two-dimensional multiband superconductors and electron-hole superfluids, such as monolayer FeSe, the recently discovered superconducting bilayer graphene, and double-bilayer graphene electron-hole systems. This electronic configuration also allows one to study the coexistence of superconductivity and charge density waves in connection with underdoped cuprates and transition metal dichalcogenides. Using a mean-field approach to study the above-mentioned system, we have obtained numerical results for the superconducting gaps, chemical potential, condensate fractions, coherence lengths, and superconducting mean-field critical temperature, considering a tunable band gap and different fillings of the conduction band, for parametric choices of the pairing interactions. By tuning these quantities, the electrons redistribute between the valence and the conduction band in a complex way, leading to new physics with respect to single-band superconductors, such as density-induced and band-selective BCS-BEC crossover, quantum phase transitions, and hidden criticalities. At finite temperature, this phenomenon is also responsible for a non-monotonic behavior of the superconducting gaps, resulting in a superconducting-normal state reentrant transition without the need for disorder or magnetic effects.
Giovanni Midei, Andrea Perali
2023-01-31T17:37:52Z
http://arxiv.org/abs/2301.13795v1
# Tunable BCS-BEC crossover, reentrant, and hidden quantum phase transitions in two-band superconductors with tunable valence and conduction bands

###### Abstract

Two-band electronic structures with a valence and a conduction band separated by a tunable energy gap and with pairing of electrons in different channels can be relevant to investigate the properties of two-dimensional multiband superconductors and electron-hole superfluids, such as monolayer FeSe, the recently discovered superconducting bilayer graphene, and double-bilayer graphene electron-hole systems. This electronic configuration also allows one to study the coexistence of superconductivity and charge density waves in connection with underdoped cuprates and transition metal dichalcogenides. Using a mean-field approach to study the above-mentioned system, we have obtained numerical results for the superconducting gaps, chemical potential, condensate fractions, coherence lengths, and superconducting mean-field critical temperature, considering a tunable band gap and different fillings of the conduction band, for parametric choices of the pairing interactions. By tuning these quantities, the electrons redistribute between the valence and the conduction band in a complex way, leading to new physics with respect to single-band superconductors, such as density-induced and band-selective BCS-BEC crossover, quantum phase transitions, and hidden criticalities. At finite temperature, this phenomenon is also responsible for a non-monotonic behavior of the superconducting gaps, resulting in a superconducting-normal state reentrant transition without the need for disorder or magnetic effects.

## I Introduction

Multi-band and multi-gap superconductivity is a complex quantum coherent phenomenon with peculiar features that cannot be found in single-band and single-gap superconductors [1]. The increased number of degrees of freedom in the condensate state allows for novel quantum effects which are unattainable otherwise, for instance enriching the physics of the BCS-BEC crossover [2; 3; 4; 5]. Proximity to the crossover regime of the BCS-BEC crossover in multi-band superconductors having deep and shallow bands can determine a notable increase of the superconducting gaps and of the critical temperature (T\({}_{c}\)) [6; 7; 8; 9], associated with a higher mean-field T\({}_{c}\), together with optimal conditions for the screening of superconducting fluctuations [10; 11; 12]. Furthermore, the interplay of low-dimensional two-band systems allows for the screening of fluctuations in systems composed of coupled quasi-2D bands or even in the vicinity of a van Hove singularity (e.g., in the quasi-1D case), enabling a shrinking of the pseudo-gap phase and robust high critical temperatures [13; 14; 15]. Motivated by high-temperature superconductivity and the anomalous metallic state properties in underdoped cuprates, interest has grown in pseudogap physics, in which a blurred gap persists in the normal state near the Fermi level. There are different models and explanations for this pseudogap, the simplest one being a smooth crossover from the BCS regime towards a Bose-Einstein condensation regime in which bound pairs form first at higher temperatures, and then below a critical temperature T\({}_{c}\) they condense, with the pseudogap being the excitation energy of the quasi-molecular pairs.
Another explanation, relevant for underdoped cuprates, is the presence of other mechanisms beyond pair fluctuations, such as charge density waves (CDWs) [16; 17; 18; 19] and their fluctuations, which can modify the energy spectrum by opening (pseudo)gaps and at the same time mediate Cooper pairing. Thus, systems in which CDWs and superconductivity coexist are of primary interest for studying the BCS-BEC crossover when an energy gap separates the electronic spectrum into two bands, determining a valence and a conduction band. In addition to underdoped cuprates, an interesting example is given by the transition metal dichalcogenide (TMD) family, MX\({}_{2}\), where M = Ti, Nb, Mo, Ta and X = S, Se, which exhibits a rich interplay between superconductivity and CDW order [20]. In these materials, superconductivity occurs in an environment of pre-existing CDW order [21; 22], making them an ideal platform to study many-body ground states and competing phases in the 2D regime. The relationship between CDW and superconductivity in such systems is still under investigation [23; 24]. In general, their mutual interaction is competitive, but evidence to the contrary, indicating a cooperative interplay, has also been reported in angle-resolved photoemission spectroscopy (ARPES) studies [22]. Among them, bulk niobium diselenide (2H-NbSe\({}_{2}\)) undergoes a CDW distortion at T=30 K and becomes superconducting at 7 K. References [25; 26] reported that T\({}_{c}\) lowers to 1.9 K in 2H-NbSe\({}_{2}\) single layers and that the CDW measured in the bulk is preserved. Theoretical support is given by Chao-Sheng Lian et al. [27], who demonstrate enhanced superconductivity in the CDW state of monolayer tantalum diselenide (TaSe\({}_{2}\)) with DFT calculations. In contrast with 2H-NbSe\({}_{2}\), they report that as TaSe\({}_{2}\) is thinned to the monolayer limit, its superconducting critical temperature rises from 0.14 K in the bulk to 2 K in the monolayer. Another appealing superconducting material is monolayer FeSe grown on a SrTiO\({}_{3}\) substrate, which exhibits a huge increase of T\({}_{c}\) up to 100 K [28] and is characterized by a valence and a conduction band structure near the Fermi level. Furthermore, very recently 2D superconductivity has been found in bilayer graphene systems, in which conduction and valence bands are separated by a small energy band-gap (0-100 meV) that can be precisely tuned by an external electric field [29] (for a review see [30]). Coupling a monolayer of WSe\({}_{2}\) with bilayer graphene has been found to enhance superconductivity by an order of magnitude in T\({}_{c}\), and superconductivity emerges already at zero magnetic field [31]. Finally, it turns out that the two-band superconducting system considered in this work is in close correspondence with two-band electron-hole superfluids in double bilayer graphene [32]. Therefore, the growing experimental realization of 2D superconductors with valence and conduction bands separated by a tunable energy gap, and of electron-hole superfluidity in multilayer systems, motivated us to investigate the BCS-BEC crossover in this kind of system. To the best of our knowledge, a detailed analysis of this configuration is lacking in the literature.
A pioneering work on a related system with valence and conduction parabolic bands was done by Nozieres and Pistolesi [33], who studied the phase transition from a semiconducting to a superconducting state and the consequent (pseudo)gap opening, in the specific case of equal pairing strengths for all the interaction channels considered. In our work we consider a superconductor with two tight-binding bands with different intra-band and pair-exchange couplings, in order to probe the possibility of having coexisting Cooper pairs of different average sizes [34] in the valence and conduction bands. However, for most multi-band superconductors the tuning of the intra-band and pair-exchange interactions is rather challenging, and their properties cannot easily be studied in a continuous way across the BCS-BEC crossover. As shown in this work, a different way to explore the BCS-BEC crossover in such systems is to tune the energy gap between the valence and the conduction band. In fact, since the number of particles in the single bands is not conserved, when the energy band gap is modified the numbers of holes and of electrons forming Cooper pairs in the valence and in the conduction band, respectively, change, allowing for the occurrence of a density-induced multi-band BCS-BEC crossover [35]. This redistribution of charges between the valence and the conduction band also leads to novel and interesting quantum phase transitions (QPTs) from a superconducting to an insulating state, or to hidden criticalities evidenced by the analysis of the order parameter coherence lengths [36; 37]. At finite temperature, a new type of reentrant superconducting to normal state transition has also been found and characterized. The results reported and discussed in this work demonstrate the richness of the proposed valence and conduction band configuration for generating and tuning new types of crossover phenomena and quantum phases. The manuscript is organized as follows. In Section II we describe the model for the physical system considered and the theoretical approach for the evaluation of the superconducting state properties. In Section III we report our results. The conclusions of our work are reported in Section IV.

## II Model system and theoretical approach

We consider a two-dimensional (2D) two-band superconductor with a valence and a conduction electronic band on a square lattice. The valence and the conduction bands are modelled by tight-binding dispersions given, respectively, by Eqs. (1) and (2):

\[\varepsilon_{1}(\mathbf{k})=2t[\cos(k_{x}a)+\cos(k_{y}a)]-8t-E_{g} \tag{1}\]

\[\varepsilon_{2}(\mathbf{k})=-2t[\cos(k_{x}a)+\cos(k_{y}a)] \tag{2}\]

where \(t\) is the nearest-neighbour hopping parameter, assumed to be the same for both bands, \(a\) is the lattice parameter and the wave-vectors belong to the first Brillouin zone \(-\frac{\pi}{a}\leq k_{x,y}\leq\frac{\pi}{a}\); \(E_{g}\) is the energy band-gap between the conduction and the valence band. The band dispersions are reported in Fig. 1. In order to study the superconducting state properties of our system, we assume that Cooper pair formation is due to an attractive interaction between opposite-spin electrons.
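As a concrete reference for the momentum sums used throughout, a minimal sketch of the dispersions (1) and (2) on a discretized first Brillouin zone; the grid size and the values of \(t\), \(a\) and \(E_{g}\) are illustrative choices of ours:

```python
# Sketch of the tight-binding bands, Eqs. (1)-(2), on a k-grid.
import numpy as np

t, a, Eg, Nk = 1.0, 1.0, 2.0, 256          # illustrative parameters (t = a = 1)
k = np.linspace(-np.pi / a, np.pi / a, Nk, endpoint=False)
kx, ky = np.meshgrid(k, k, indexing="ij")

eps1 = 2 * t * (np.cos(kx * a) + np.cos(ky * a)) - 8 * t - Eg   # valence band
eps2 = -2 * t * (np.cos(kx * a) + np.cos(ky * a))               # conduction band
```

With this choice the top of the valence band lies at \(-4t-E_{g}\) and the bottom of the conduction band at \(-4t\), so the two bands are indeed separated by \(E_{g}\).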
The two-particle interaction has been approximated by a separable potential \(V_{ij}(\mathbf{k},\mathbf{k}^{\prime})\) with an energy cutoff \(\omega_{0}\), which is given by:

\[V_{ij}(\mathbf{k},\mathbf{k}^{\prime})=-V_{ij}^{0}\,\Theta\Big(\omega_{0}-|\xi_{i}(\mathbf{k})|\Big)\Theta\Big(\omega_{0}-|\xi_{j}(\mathbf{k}^{\prime})|\Big) \tag{3}\]

where \(V_{ij}^{0}>0\) is the strength of the potential in the different pairing channels and \(i,j\) label the bands. \(V_{11}^{0}\) and \(V_{22}^{0}\) are the strengths of the intra-band pairing interactions (Cooper pairs are created and destroyed in the same band). \(V_{12}^{0}\) and \(V_{21}^{0}\) are the strengths of the pair-exchange interactions (Cooper pairs are created in one band and destroyed in the other band, and vice versa), so that superconductivity in one band can induce superconductivity in the other band. The same energy cutoff \(\omega_{0}\) of the interaction is considered for the intra-band and pair-exchange terms. Throughout this work, \(\omega_{0}\) is taken to be an energy scale larger than the total bandwidth of our system, to model an effective pairing of electronic origin or a contact attractive potential. This is a key assumption to make it possible for the system to explore the entire BCS-BEC crossover [38]. The terms corresponding to Cooper pairs formed by electrons of different bands (inter-band or cross-band pairing) are not considered in this work (see [39]). \(\xi_{i}(\mathbf{k})=\varepsilon_{i}(\mathbf{k})-\mu\) in Eq. (3) is the energy dispersion of band \(i\) with respect to the chemical potential \(\mu\). The superconducting state of the system and its evolution with the relevant system parameters is studied at a mean-field level. The BCS equations for the superconducting gaps have to be coupled with the density equation which fixes the chemical potential, since the self-consistent renormalization of the chemical potential is a key feature to account for the BCS-BEC crossover physics. Zero and finite temperature cases have been considered in this work. The BCS equations for the superconducting gaps in the two-band system at a given temperature T are

\[\begin{split}\Delta_{1}(\mathbf{k})=&-\frac{1}{2\Omega}\sum_{k^{\prime}}\Biggl[V_{11}(\mathbf{k},\mathbf{k}^{\prime})\frac{\Delta_{1}(\mathbf{k}^{\prime})}{E_{1}(\mathbf{k}^{\prime})}\tanh\frac{E_{1}(\mathbf{k}^{\prime})}{2T}\\ &+V_{12}(\mathbf{k},\mathbf{k}^{\prime})\frac{\Delta_{2}(\mathbf{k}^{\prime})}{E_{2}(\mathbf{k}^{\prime})}\tanh\frac{E_{2}(\mathbf{k}^{\prime})}{2T}\Biggr]\end{split} \tag{4}\]

\[\begin{split}\Delta_{2}(\mathbf{k})=&-\frac{1}{2\Omega}\sum_{k^{\prime}}\Biggl[V_{22}(\mathbf{k},\mathbf{k}^{\prime})\frac{\Delta_{2}(\mathbf{k}^{\prime})}{E_{2}(\mathbf{k}^{\prime})}\tanh\frac{E_{2}(\mathbf{k}^{\prime})}{2T}\\ &+V_{21}(\mathbf{k},\mathbf{k}^{\prime})\frac{\Delta_{1}(\mathbf{k}^{\prime})}{E_{1}(\mathbf{k}^{\prime})}\tanh\frac{E_{1}(\mathbf{k}^{\prime})}{2T}\Biggr]\end{split} \tag{5}\]

where \(E_{i}(\mathbf{k})=\sqrt{\xi_{i}(\mathbf{k})^{2}+\Delta_{i}(\mathbf{k})^{2}}\) is the dispersion of the single-particle excitations in the superconducting state and \(\Omega\) is the area occupied by the 2D system. \(\hbar=1\) and \(k_{B}=1\) throughout the manuscript.

Figure 1: Electronic band structure of the two-band 2D system considered in this work. \(E_{g}\) is the energy gap between the valence (i = 1) and the conduction (i = 2) band.
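A minimal numerical sketch of how Eqs. (4)-(5) and the density equation can be iterated to self-consistency on the grid of the previous sketch. It anticipates the dimensionless couplings \(\lambda_{ij}=NV_{ij}^{0}\) introduced in the following; the cutoff \(\Theta\)-functions are dropped since \(\omega_{0}\) exceeds the relevant energies, the coupling and density values are illustrative, and the relaxation update of \(\mu\) is our crude shortcut for a proper root search:

```python
# Sketch: self-consistent gap equations (4)-(5) plus density constraint at
# low T. Reuses t, eps1, eps2 from the previous sketch; with the band-edge
# DOS N = 1/(4*pi*a**2*t), the couplings enter only as U_ij = 4*pi*t*lambda_ij.
import numpy as np

T = 1e-3 * t                                   # low T mimicking the ground state
lam = {"11": 0.23, "22": 0.75, "12": 0.10, "21": 0.10}
U = {ij: 4 * np.pi * t * v for ij, v in lam.items()}
n_tot = 2.07                                   # a^2 * n_tot, filling per unit cell

def fermi(E):
    return 0.5 * (1.0 - np.tanh(E / (2 * T)))  # overflow-safe Fermi function

def chi(xi, delta):
    E = np.sqrt(xi**2 + delta**2)              # quasiparticle energy
    return np.mean(np.tanh(E / (2 * T)) / (2 * E))   # BZ average of the kernel

def filling(xi, delta):
    E = np.sqrt(xi**2 + delta**2)
    v2 = 0.5 * (1 - xi / E)                    # Eq. (8)
    return 2 * np.mean(v2 * (1 - fermi(E)) + (1 - v2) * fermi(E))  # a^2 * n_i

d1, d2, mu = 0.5 * t, 0.5 * t, -2.0 * t        # initial guesses
for _ in range(3000):                          # damped fixed-point iteration
    x1, x2 = eps1 - mu, eps2 - mu
    c1, c2 = chi(x1, d1), chi(x2, d2)
    d1_new = U["11"] * c1 * d1 + U["12"] * c2 * d2     # Eq. (4)
    d2_new = U["21"] * c1 * d1 + U["22"] * c2 * d2     # Eq. (5)
    d1, d2 = 0.5 * (d1 + d1_new), 0.5 * (d2 + d2_new)
    mu += 0.05 * t * (n_tot - filling(x1, d1) - filling(x2, d2))  # crude mu update

print(f"Delta1 = {d1:.4f} t, Delta2 = {d2:.4f} t, mu = {mu:.4f} t")
```

Scanning \(E_{g}\) in an outer loop over this routine reproduces the kind of gap-versus-band-gap curves discussed in the Results section.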
The superconducting gaps have the same energy cutoff as the separable interaction:

\[\Delta_{i}(\mathbf{k})=\Delta_{i}\,\Theta\Big(\omega_{0}-|\xi_{i}(\mathbf{k})|\Big) \tag{6}\]

The total electron density of the two-band system is fixed and given by the sum of the single-band densities, \(n_{tot}=n_{1}+n_{2}\), which can instead vary individually. The electronic density \(n_{i}\) in band \(i\) at temperature T is given by

\[n_{i}=\frac{2}{\Omega}\sum_{k}\Big[v_{i}(\mathbf{k})^{2}f\big(-E_{i}(\mathbf{k})\big)+u_{i}(\mathbf{k})^{2}f\big(E_{i}(\mathbf{k})\big)\Big] \tag{7}\]

where \(f(E)\) is the Fermi-Dirac distribution function. The BCS coherence weights \(v_{i}(\mathbf{k})\) and \(u_{i}(\mathbf{k})\) are:

\[v_{i}(\mathbf{k})^{2}=\frac{1}{2}\Bigg[1-\frac{\xi_{i}(\mathbf{k})}{\sqrt{\xi_{i}(\mathbf{k})^{2}+\Delta_{i}(\mathbf{k})^{2}}}\Bigg] \tag{8}\]

\[u_{i}(\mathbf{k})^{2}=1-v_{i}(\mathbf{k})^{2} \tag{9}\]

For the valence band, the condensate fraction is defined as the ratio of the number of Cooper pairs in the valence band to the number of holes in the valence band,

\[\alpha_{1}^{h}=\frac{\sum_{\mathbf{k}}\big(u_{1}(\mathbf{k})v_{1}(\mathbf{k})\big)^{2}}{\sum_{\mathbf{k}}u_{1}(\mathbf{k})^{2}} \tag{10}\]

For the conduction band instead, the expression already used in the one-band case is generalized: the number of Cooper pairs is divided by the total number of carriers in the conduction band,

\[\alpha_{2}^{e}=\frac{\sum_{\mathbf{k}}\big(u_{2}(\mathbf{k})v_{2}(\mathbf{k})\big)^{2}}{\sum_{\mathbf{k}}v_{2}(\mathbf{k})^{2}} \tag{11}\]

The intra-pair coherence length \(\xi_{pair_{i}}\) has the same form for both the valence and the conduction band, that is

\[\xi_{pair_{i}}^{2}=\frac{\sum_{\mathbf{k}}\big|\nabla\big(u_{i}(\mathbf{k})v_{i}(\mathbf{k})\big)\big|^{2}}{\sum_{\mathbf{k}}\big(u_{i}(\mathbf{k})v_{i}(\mathbf{k})\big)^{2}} \tag{12}\]

Regarding the superconducting order parameter coherence length, two characteristic length scales in the spatial behavior of the superconducting fluctuations are expected, since the system is made up of two partial condensates. When the pair-exchange interaction is not present, these two lengths are simply the order parameter coherence lengths of the condensates of the valence (\(\xi_{c1}\)) and of the conduction (\(\xi_{c2}\)) band. When the pair-exchange interactions are different from zero, one has to deal with coupled condensates, and these length scales cannot be attributed to the single bands involved, describing instead the collective features of the whole two-component condensate. The pair-exchange interactions mix the superconducting order parameters of the initially non-interacting bands, which acquire a mixed character. The soft, or critical, coherence length \(\xi_{s}\) diverges at the phase transition point, while the rigid, or non-critical, coherence length \(\xi_{r}\) remains finite.
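Before turning to those collective length scales, the single-band quantities of Eqs. (10)-(12) follow directly from a converged solution of the gap equations; a minimal continuation of the previous sketch (same grids, converged `d1`, `d2`, `mu`), shown here for the conduction band:

```python
# Sketch: condensate fractions, Eqs. (10)-(11), and intra-pair coherence
# length, Eq. (12), evaluated from the converged gaps of the previous sketch.
def coherence_weights(xi, delta):
    E = np.sqrt(xi**2 + delta**2)
    v2 = 0.5 * (1 - xi / E)                   # Eq. (8)
    return np.sqrt(1 - v2), np.sqrt(v2)       # u(k), v(k)

u1, v1 = coherence_weights(eps1 - mu, d1)
u2, v2 = coherence_weights(eps2 - mu, d2)

alpha1_h = np.sum((u1 * v1)**2) / np.sum(u1**2)   # Eq. (10), hole condensate
alpha2_e = np.sum((u2 * v2)**2) / np.sum(v2**2)   # Eq. (11), electron condensate

dk = 2 * np.pi / (a * Nk)                         # k-space grid spacing
gx, gy = np.gradient(u2 * v2, dk, dk)             # grad_k of u(k) v(k)
xi_pair2 = np.sqrt(np.sum(gx**2 + gy**2) / np.sum((u2 * v2)**2))  # Eq. (12)
```

Normalizing \(\xi_{pair_{2}}\) by \(l_{2}=1/\sqrt{\pi n_{2}}\), as done below, then allows a direct comparison between the pair size and the inter-particle distance.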
Following the approach in [37], these characteristic length scales are given by

\[\xi_{s,r}^{2}=\frac{G(T)\pm\sqrt{G^{2}(T)-4K(T)\gamma(T)}}{2K(T)} \tag{13}\]

where \(\xi_{s}\) corresponds to the solution with the plus sign and \(\xi_{r}\) to the one with the minus sign, and

\[\begin{split} G(T)=(V_{12}^{0})^{2}\big(\tilde{g}_{1}(T)\beta_{2}(T)+\tilde{g}_{2}(T)\beta_{1}(T)\big)+\\ \big(1-V_{11}^{0}\tilde{g}_{1}(T)\big)V_{22}^{0}\beta_{2}(T)+\\ \big(1-V_{22}^{0}\tilde{g}_{2}(T)\big)V_{11}^{0}\beta_{1}(T)\end{split} \tag{14}\]

\[\begin{split} K(T)=\big(1-V_{11}^{0}\tilde{g}_{1}(T)\big)\big(1-V_{22}^{0}\tilde{g}_{2}(T)\big)-\\ (V_{12}^{0})^{2}\tilde{g}_{1}(T)\tilde{g}_{2}(T)\end{split} \tag{15}\]

\[\gamma(T)=\big(V_{11}^{0}V_{22}^{0}-(V_{12}^{0})^{2}\big)\beta_{1}(T)\beta_{2}(T) \tag{16}\]

\[\tilde{g}_{i}(T)=g_{i}(T)-3\nu_{i}(T)\big(\Delta_{i}(T)\big)^{2} \tag{17}\]

\[g_{i}(T)=\frac{1}{2V}\sum_{\mathbf{k}}\frac{1}{\xi_{i}(\mathbf{k})}\tanh\frac{\xi_{i}(\mathbf{k})}{2T} \tag{18}\]

\[\nu_{i}(T)=-\frac{1}{2V}\sum_{\mathbf{k}}\frac{\partial}{\partial|\Delta_{i}|^{2}}\Bigg(\frac{1}{E_{i}(\mathbf{k})}\tanh\frac{\xi_{i}(\mathbf{k})}{2T}\Bigg)_{\Delta_{i}=0} \tag{19}\]

\[\begin{split}\beta_{i}(T)=-\frac{1}{4V}\sum_{\mathbf{k}}\frac{\partial^{2}}{\partial q_{l}^{2}}\Bigg[\frac{1}{\xi_{i}(\mathbf{k})+\xi_{i}(\mathbf{k}-\mathbf{q})}\\ \times\Big(\tanh\frac{\xi_{i}(\mathbf{k})}{2T}+\tanh\frac{\xi_{i}(\mathbf{k}-\mathbf{q})}{2T}\Big)\Bigg]_{\mathbf{q}=0}\end{split} \tag{20}\]

where \(l\) refers to the Cartesian axes in Eq. (20). In order to describe the physics of the quantum phase transition, the values of the coherence lengths at zero temperature have been approximated by choosing a temperature low enough that the superconducting gaps and the chemical potential retain the same behavior as in the zero temperature case. The energies are normalized in units of the hopping \(t\), and the dimensionless couplings \(\lambda_{ii}\) are defined as \(\lambda_{ii}=NV_{ii}^{0}\), where \(N=1/(4\pi a^{2}t)\) is the density of states at the top/bottom of the valence/conduction band; the two coincide, since the density of states is not modified by the concavity of the band. The intra-pair coherence lengths \(\xi_{pair_{i}}\) are normalized using the average inter-particle distance in the normal state, \(l_{i}=1/\sqrt{\pi n_{i}}\), where \(n_{i}\) is the density in band \(i\). These quantities differ by a factor of \(\sqrt{2}\) from the inverses of the respective Fermi wave-vectors \(K_{Fi}\). The soft \(\xi_{s}\) and the rigid \(\xi_{r}\) coherence lengths are normalized with respect to the lattice constant \(a\), since in the two-band case they cannot be attributed to either of the two bands.

## III Results

In this section we study the properties of the superconducting ground state and give a full characterization of the BCS-BEC crossover in the two-band system considered in this work. First, we study the zero temperature superconducting gaps in the conduction (\(\Delta_{2}\)) and in the valence (\(\Delta_{1}\)) band through the BCS-BEC crossover, for the case of unbalanced intra-band couplings (\(\lambda_{11}\neq\lambda_{22}\)). The results are shown in Fig.
2, in which the superconducting gaps are reported as functions of the energy band-gap \(E_{g}\), for different values of the total density \(a^{2}n_{tot}\) and for different pair-exchange couplings \(\lambda_{12}=\lambda_{21}\). In the case of an empty conduction band and a completely filled valence band, corresponding to \(a^{2}n_{tot}=2.00\), a quantum phase transition (QPT) to the normal state takes place at a specific quantum critical point (QCP), which occurs when \(E_{g}=E_{g}^{*}\). When the carrier concentration in the conduction band is non-zero, the phase transition becomes a crossover and superconductivity extends over all values of the band gap \(E_{g}\). However, the system presents different behaviors depending on whether the value of the band gap is smaller or larger than \(E_{g}^{*}\). For finite doping, the valence band contributes very weakly to the superconducting state of the system for \(E_{g}>E_{g}^{*}\). In this regime the bands are almost decoupled and the superconducting gaps do not depend on \(E_{g}\). However, in the case of Fig. 2(c), since the pair-exchange couplings are weak, the conduction band cannot sustain the superconductivity in the valence band and \(\Delta_{1}\) is suppressed. Thus, continuously tuning \(E_{g}\) to higher values will result in \(\Delta_{1}\ll\Delta_{2}\), so that there is only one significant superconducting gap and one significant condensate. In the other case instead (Fig. 2(d)), the pair-exchange couplings are stronger and \(\Delta_{1}\) is not much suppressed with respect to its initial value, since in this case the superconductivity in the valence band is sustained by the condensate of the conduction band. Another interesting feature of this system is that \(\Delta_{1}\) is enhanced for lower values of the total density as long as \(E_{g}<E_{g}^{*}\). When \(E_{g}>E_{g}^{*}\) instead, the opposite situation occurs. The value of \(E_{g}^{*}\) at which this behavior takes place depends on the level of filling of the conduction band, shifting to the left when higher total densities are considered, and on the pair-exchange couplings, which shift \(E_{g}^{*}\) to the right when larger interaction strengths are considered. The reason behind the behavior of the superconducting gaps can be found by looking at the densities of the particles forming Cooper pairs, which are electrons in the conduction band and holes in the valence band. While the total density is fixed, the density in each band can vary. In this way, the density of particles in the conduction band \(n_{2}\) is no longer controlled only by doping as for a single-band system; there are instead additional particles excited from the valence band.

Figure 2: Superconducting gaps \(\Delta_{2}/t\) opening in the conduction band (a)-(b) and in the valence band \(\Delta_{1}/t\) (c)-(d) as functions of the band-gap energy \(E_{g}/t\) for an energy cutoff of the attractive interactions \(\omega_{0}/t=20\). The intra-band couplings are \(\lambda_{11}=0.23\) and \(\lambda_{22}=0.75\). The pair-exchange couplings are (\(\lambda_{12}=\lambda_{21}\)): (a),(c) (0.001), (b),(d) (0.1). The superconducting gaps are reported for different values of the total density \(a^{2}n_{tot}\).
Nevertheless, for larger values of \(E_{g}\) the gain in the interaction energy due to superconductivity is much smaller than the kinetic energy cost for transferring electrons from the valence band to the conduction band, so that very few electrons (compared to the total density of electrons in the valence band) are excited into the conduction band. This behavior is shown in Fig. 3. As one can see, for \(a^{2}n_{tot}=2.00\) the hole density in the valence band and the electron density in the conduction band coincide and are monotonically decreasing, both of them vanishing at the QCP \(E_{g}=E_{g}^{*}\). This is a sign that superconductivity is due to holes in the valence band and to electrons in the conduction band. In the other cases the hole density in the valence band is almost zero for \(E_{g}>E_{g}^{*}\), while the electron density in the conduction band approaches the asymptotic value given by the total density minus the density of the filled valence band, \(a^{2}n_{2}=a^{2}n_{tot}-2.00\). In Fig. 4 the chemical potential is reported as a function of \(E_{g}\), for different total densities \(a^{2}n_{tot}\) and for different pair-exchange couplings. For higher values of the total density and of the pair-exchange couplings the chemical potential shifts toward higher energies, due to the larger number of electrons in the conduction band. In particular, when \(E_{g}\) is increased, in the low density regime the chemical potential starts deep inside the valence band and then enters the gap between the two bands, meaning that the condensate in the valence band spans a wide region of the BCS-BEC crossover, while the conduction band is always located on the BEC side of the crossover regime or in the BEC regime, depending on whether the chemical potential lies inside the conduction band or not. When \(E_{g}>E_{g}^{*}\) the chemical potential acquires a flat dependence and is not modified by \(E_{g}\), in a similar way to what happens to the superconducting gaps and the densities. In Fig. 5 the condensate fraction is shown as a function of \(E_{g}\), for different \(a^{2}n_{tot}\) and for different pair-exchange couplings. The usual choice of the boundaries between the different pairing regimes has been adopted: for \(\alpha<0.2\) the superconducting state is in the weak-coupling BCS regime; for \(0.2<\alpha<0.8\) the system is in the crossover regime; for \(\alpha>0.8\) the system is in the strong-coupling BEC regime. Consistently with the information obtained from the chemical potential, in the low density regime the condensate in the valence band explores the entire BCS-BEC crossover by varying \(E_{g}\). For the considered pair-exchange interactions the valence band condensate is in the BCS regime for small \(E_{g}\) (Fig. 5(c)), while for larger pair-exchange interactions it is in the crossover regime (Fig. 5(d)).

Figure 3: Electron density \(a^{2}n_{2}^{e}\) (a)-(b) in the conduction band and hole density \(a^{2}n_{1}^{h}\) (c)-(d) in the valence band as functions of the band-gap \(E_{g}/t\) for different values of the total density \(a^{2}n_{tot}\), normalized to the area of the unit cell. \(\omega_{0}/t=20\). The intra-band couplings are \(\lambda_{11}=0.23\) and \(\lambda_{22}=0.75\). The pair-exchange couplings are (\(\lambda_{12}=\lambda_{21}\)): (a),(c) (0.001), (b),(d) (0.1).
When the energy gap or the total density increases, the valence band condensate enters the BEC regime, with the hole condensate fraction \(\alpha_{1}^{h}\) approaching unity, indicating that the remaining few holes are all in the condensate. The situation in the conduction band is different: due to the strong intra-band coupling, the condensate is always located on the BEC side of the crossover regime or in the BEC regime. In the case \(a^{2}n_{tot}=2.00\) both condensate fractions suddenly drop to zero when \(E_{g}=E_{g}^{*}\), due to the quantum phase transition. In Fig. 6 the intra-pair coherence length is reported as a function of \(E_{g}\), for different \(a^{2}n_{tot}\) and for different pair-exchange couplings. Since for low densities and small pair-exchange couplings the valence band condensate is in the BCS regime (Fig. 6(a)) when \(E_{g}\) is small, \(\xi_{pair_{1}}\) initially assumes larger values with respect to the average inter-particle distance \(l_{1}\). For larger \(E_{g}\) the system enters the BEC regime and \(\xi_{pair_{1}}\) becomes much smaller than the average inter-particle distance. The valence band condensate goes from the crossover to the BEC regime in a small range of band gap values. This behavior is observed also for larger values of the total density. The conduction band instead, due to the strong intra-band coupling, retains a small value of the intra-pair coherence length with respect to the average inter-particle distance \(l_{2}\) for all the considered values of the system density. In this way we found Cooper pairs of different sizes coexisting in the system for low density and low pair-exchange coupling values, in the regime of small \(E_{g}\). For the zero doping case the intra-pair coherence length is defined only for \(E_{g}<E_{g}^{*}\), since beyond this value the system is not superconducting and an intra-pair coherence length cannot be defined. The fact that the intra-pair coherence length approaches zero at the QCP in the BEC regime is different from Ref. [34], where giant Cooper pairs are found in the vicinity of the QCP on the BCS side. In our case instead, what we have found is equivalent to the finite-density to zero-density QCP of tightly bound molecules. Namely, near the present QCP on the BEC side the pair size is so small that pairs behave as point-like bosons and the system can be described by its bosonic counterpart [40]. In Fig. 7 the order parameter coherence length is reported as a function of \(E_{g}\), for different \(a^{2}n_{tot}\) and for different pair-exchange couplings. In the case \(a^{2}n_{tot}=2.00\) the soft or critical coherence length \(\xi_{s}\) diverges when the band gap reaches the critical value \(E_{g}=E_{g}^{*}\), since the system undergoes a quantum phase transition to the insulating state. In the other cases, \(a^{2}n_{tot}\neq 2.00\), the soft coherence length \(\xi_{s}\) does not diverge, since no quantum phase transition occurs in the system for any \(E_{g}\). In particular, in the cases of \(a^{2}n_{tot}=2.07\) and \(a^{2}n_{tot}=2.26\) the soft coherence length \(\xi_{s}\) shows a maximum at the respective \(E_{g}=E_{g}^{*}\), retaining a memory of the quantum phase transition of the valence band condensate, which takes place when the pair-exchange interactions are absent. The increase of \(\lambda_{12}=\lambda_{21}\) suppresses the maximum, as shown in Figs. 7(a) and (b), since the band-condensates become more coupled.
Figure 5: Condensate fractions in the conduction band \(\alpha_{2}^{e}\) (a)-(b) and in the valence band \(\alpha_{1}^{h}\) (c)-(d) as functions of the band-gap \(E_{g}/t\) for \(\omega_{0}/t=20\). The intra-band couplings are \(\lambda_{11}=0.23\) and \(\lambda_{22}=0.75\). The pair-exchange couplings are (\(\lambda_{12}=\lambda_{21}\)): (a),(c) (0.001), (b),(d) (0.1). The condensate fractions are reported for different total densities \(a^{2}n_{tot}\). Thin grey dashed lines correspond to \(\alpha=0.2,0.8\) from bottom to top.

Figure 6: Intra-pair coherence length \(\xi_{pair_{2}}/l_{2}\) for the Cooper pairs of the conduction band (a)-(b) and intra-pair coherence length \(\xi_{pair_{1}}/l_{1}\) for the Cooper pairs of the valence band (c)-(d) as functions of the band-gap \(E_{g}/t\) for \(\omega_{0}/t=20\). The intra-band couplings are \(\lambda_{11}=0.23\) and \(\lambda_{22}=0.75\). The pair-exchange couplings are (\(\lambda_{12}=\lambda_{21}\)): (a),(c) (0.001), (b),(d) (0.1). The intra-pair coherence lengths \(\xi_{pair_{i}}/l_{i}\) are reported for different \(a^{2}n_{tot}\).

In the case of \(a^{2}n_{tot}=2.35\) instead, since the valence band is never superconducting for any \(E_{g}\) when the band-condensates are decoupled, there is no quantum phase transition and no peak. The rigid coherence length \(\xi_{r}\) instead remains finite for all \(E_{g}\) and for all \(a^{2}n_{tot}\). Nevertheless, we find a memory of the quantum phase transition that takes place when the conduction band is empty and the valence band is filled (\(a^{2}n_{tot}=2.00\)). In this case, in fact, also the conduction band returns to the normal state at \(E_{g}=E_{g}^{*}\). Indeed, for zero pair-exchange couplings, the rigid coherence length \(\xi_{r}\) reduces to the coherence length of the conduction band \(\xi_{2}\). Even though for finite pair-exchange coupling the coherence length is non-diverging, it encodes the memory of the quantum phase transition of the conduction band. Also the maximum value of the rigid coherence length \(\xi_{r}\) is suppressed by the increase of \(\lambda_{12}=\lambda_{21}\) in this case, as shown in Figs. 7(c) and (d). We now consider finite temperature effects on the critical energy band gap for the case of no doping. The superconducting gaps as functions of temperature for different band gaps are reported in Fig. 8. The superconducting gaps present a non-monotonic behavior, which is very different from the temperature dependence of the gaps in conventional superconductors. The strong enhancement of \(\Delta_{2}\) at finite temperature is due to the thermal excitation of electrons from the valence band to the conduction band. This behavior becomes more pronounced for larger \(E_{g}\), especially in the case of Fig. 8(c), in which the system is initially in the normal state for temperatures close to zero and then becomes superconducting at larger temperatures. This superconducting-normal state reentrant transition that we have found in our two-band system is based on a different mechanism with respect to the reentrant transitions observed in superconductors containing magnetic elements [41] or in granular superconducting systems [42; 43; 44; 45]: in the former it is attributed to the competition of magnetic ordering and superconductivity, and in the latter to tunneling-barrier effects, while in our valence-conduction band system the thermal excitation of electrons from the valence into the conduction band plays the crucial role. In Fig.
9 we report the phase diagram \(T\) vs \(E_{g}\) for our system. In Fig. 9 the branch of the phase transition from the superconducting to the normal state corresponding to the reentrant behavior results from the second solution, at lower temperatures, of the linearized self-consistent equations for the superconducting gaps. From the left panel of Fig. 9 it is clear that the reentrant transition is more pronounced when the intra-band couplings are unbalanced (\(\lambda_{22}\simeq 3\lambda_{11}\) in the figure), while the reentrance is reduced when the intra-band couplings have similar values. This effect occurs, in a less evident manner, also when the pair-exchange couplings are increased. Therefore, the most relevant parameter controlling the reentrance phenomenon is the intra-band coupling.

Figure 7: Soft \(\xi_{s}\) (a)-(b) and rigid \(\xi_{r}\) (c)-(d) order parameter coherence lengths, normalized to the lattice constant \(a\), as functions of the band-gap \(E_{g}/t\) between the two bands at temperature \(T/t=0.00065\) and for \(\omega_{0}/t=20\). The intra-band couplings are \(\lambda_{11}=0.23\) and \(\lambda_{22}=0.75\). The pair-exchange couplings are (\(\lambda_{12}=\lambda_{21}\)): (a),(c) (0.001), (b),(d) (0.03). The coherence lengths \(\xi_{s,r}\) are reported for different values of the total density \(a^{2}n_{tot}\). In the case \(a^{2}n_{tot}=2.00\) (orange dashed line) \(\xi_{r}\) has been rescaled by a factor of 7 (c) and 4.5 (d) to make the plot more visible.

Figure 8: Superconducting gaps \(\Delta_{2}/t\) opening in the conduction band and in the valence band \(\Delta_{1}/t\) as functions of temperature \(T\), normalized with respect to the critical temperature \(T_{c}\), for \(a^{2}n_{tot}=2.00\). The pair-exchange couplings are (\(\lambda_{12}=\lambda_{21}\)): (a), (c), (e) (0.03), (b), (d), (f) (0.1).

## IV Conclusions

We have studied the superconducting properties of a two-band system of electrons, interacting through a separable attractive potential with a large energy cutoff and multiple pairing channels, at a mean-field level. The superconducting state properties are studied by varying the energy gap between the bands. We have considered different levels of filling for the conduction band, while the valence band is always completely filled. When the band-gap is modified, the density of electrons in the two bands changes, allowing for the occurrence of a density-induced BCS-BEC crossover. When the pair-exchange couplings are small, the condensate in the valence band remains superconducting but with a strongly suppressed superconducting gap \(\Delta_{1}\) for \(E_{g}>E_{g}^{*}\). Therefore, in the regime of small pair-exchange coupling, beyond \(E_{g}^{*}\), there is only one significant superconducting gap and one significant condensate. Interestingly, in this case the soft coherence length presents a peak as a memory of the quantum phase transition that the valence band condensate undergoes in the absence of pair exchanges. This peak is more pronounced if the pair-exchange couplings are sufficiently weak and disappears for higher values of the pair-exchange couplings. For higher values of \(\lambda_{ij}\), superconductivity in the valence band is sustained by the condensate in the conduction band. Furthermore, in this regime we have found that superconductivity is enhanced in the valence band for increasing doping as long as \(E_{g}<E_{g}^{*}\), while for \(E_{g}>E_{g}^{*}\) superconductivity is enhanced for lower doping.
We have also found that superconductivity may occur even when no free carriers exist in the conduction band in the normal state at \(T=0\), as soon as the gain in superconducting energy exceeds the cost of producing carriers across the band gap \(E_{g}\). If the binding energy is larger than the energy band-gap, the system becomes unstable towards the formation of Cooper pairs and superconductivity emerges. However, there exists a critical value of the energy band gap, \(E_{g}^{*}\), at which the process of creating Cooper pairs is no longer energetically favorable; at this point a quantum phase transition occurs. This quantum phase transition is confirmed by the soft coherence length, which diverges at the critical band gap \(E_{g}=E_{g}^{*}\). Thus, the ground state is superconducting if \(E_{g}<E_{g}^{*}\) and insulating if \(E_{g}>E_{g}^{*}\). At finite temperature, the value of \(E_{g}^{*}\) is larger than its zero temperature value, because electrons are thermally excited from the valence band. This situation is responsible for the non-monotonic behavior of the superconducting gap opening in the conduction band, which is enhanced at low temperatures because of the electrons that jump from the valence band into the conduction band due to thermal excitation. When there is a finite doping in the system, the sharp phase transition becomes a smooth crossover and superconductivity extends over all \(E_{g}\). In this case, for \(E_{g}>E_{g}^{*}\) the valence band contributes very weakly to the superconducting state, since the hole density becomes almost zero in this regime. To conclude, we have found that the system explores different regimes of the BCS-BEC crossover by tuning the energy band-gap and the total density. The valence-band condensate spans the entire BCS-BEC crossover for low enough density by varying the band-gap \(E_{g}\). For larger values of the total density, the condensate of the valence band is very dilute and lies in the BEC regime for any \(E_{g}\). The condensate of the conduction band instead resides on the BEC side of the crossover or completely inside the BEC regime, due to the strength of the intra-band coupling of electrons in the conduction band. This picture of the BCS-BEC crossover has been established by analyzing the consistent behavior of the chemical potential, the condensate fractions and the coherence lengths. Finally, in the case of zero doping and at finite temperature, an interesting new type of reentrant superconducting to normal state transition has been numerically discovered for unbalanced intra-band couplings, showing that in this configuration superconductivity is assisted instead of suppressed by increasing temperature. This happens because the electrons in the valence band are able to jump into the conduction band, due to thermal excitation, even for values of the band gap larger than the zero temperature critical one, making the superconducting state available for a wider range of \(E_{g}\) at higher temperature.

## V Acknowledgments

We are grateful to Tiago Saraiva (HSE-Moscow) and Hiroyuki Tajima (University of Tokyo) for interesting discussions and a critical reading of the manuscript. G. M. acknowledges INFN for financial support of his Ph.D. grant. This work has been partially supported by PNRR MUR project PE000023-NQSTI.

Figure 9: Phase diagrams in the temperature versus energy band gap plane, for the zero doping case.
In the left panel the red dashed line is for \(\lambda_{11}=0.23\), \(\lambda_{22}=0.4\), the green dashed line is for \(\lambda_{11}=0.23\), \(\lambda_{22}=0.75\), and the blue dashed line is for \(\lambda_{11}=0.65\), \(\lambda_{22}=0.75\). The pair-exchange couplings are the same for all curves, \(\lambda_{12}=\lambda_{21}=0.1\). In the right panel the pair-exchange couplings from left to right are \(\lambda_{12}=\lambda_{21}=0.03,0.1,0.2\), while the intra-band couplings are \(\lambda_{11}=0.23\) and \(\lambda_{22}=0.75\).
2306.17554
Ab initio insights on the ultrafast strong-field dynamics of anatase TiO$_2$
Electron dynamics of anatase TiO$_2$ under the influence of ultrashort and intense laser fields is studied using real-time time-dependent density functional theory (TDDFT). Our findings demonstrate the effectiveness of TDDFT calculations in modeling the electron dynamics of solids during ultrashort laser excitation, providing valuable insights for designing and optimizing nonlinear photonic devices. We analyze the perturbative and non-perturbative responses of TiO$_2$ to 30 fs laser pulses at 400 and 800 nm wavelengths, elucidating the underlying mechanisms. At 400 nm, ionization via single-photon absorption dominates, even at very low intensities. At 800 nm, we observe ionization through two-photon absorption within the intensity range of $1\times10^{10}$ to $9\times10^{12}$ W/cm$^2$, with a transition from multiphoton to tunneling ionization occurring at $9\times10^{12}$ W/cm$^2$. We observe a sudden increase in energy and the number of excited electrons beyond $1\times10^{13}$ W/cm$^2$, leading to their saturation and subsequent laser-induced damage. We estimate the damage threshold of TiO$_2$ for 800 nm to be 0.1 J/cm$^2$. In the perturbative regime, induced currents exhibit a phase shift proportional to the peak intensity of the laser pulse. This phase shift is attributed to the intensity-dependent changes in the number of free carriers, indicative of the optical Kerr effect. Leveraging the linear dependence of the phase shift on peak intensity, we estimate the nonlinear refractive index ($n_2$) of TiO$_2$ to be $3.54\times10^{-11}$ cm$^2$/W.
Sruthil Lal S. B, Lokamani, Kushal Ramakrishna, Attila Cangi, D Murali, Matthias Posselt, Assa Aravindh Sasikala Devi, Alok Sharan
2023-06-30T11:13:00Z
http://arxiv.org/abs/2306.17554v1
# Ab initio insights on the ultrafast strong-field dynamics of anatase TiO\({}_{2}\)

###### Abstract

Electron dynamics of anatase TiO\({}_{2}\) under the influence of ultrashort and intense laser fields is studied using real-time time-dependent density functional theory (TDDFT). Our findings demonstrate the effectiveness of TDDFT calculations in modeling the electron dynamics of solids during ultrashort laser excitation, providing valuable insights for designing and optimizing nonlinear photonic devices. We analyze the perturbative and non-perturbative responses of TiO\({}_{2}\) to 30 fs laser pulses at 400 and 800 nm wavelengths, elucidating the underlying mechanisms. At 400 nm, ionization via single-photon absorption dominates, even at very low intensities. At 800 nm, we observe ionization through two-photon absorption within the intensity range of \(1\times 10^{10}\) to \(9\times 10^{12}\) W/cm\({}^{2}\), with a transition from multiphoton to tunneling ionization occurring at \(9\times 10^{12}\) W/cm\({}^{2}\). We observe a sudden increase in energy and the number of excited electrons beyond \(1\times 10^{13}\) W/cm\({}^{2}\), leading to their saturation and subsequent laser-induced damage. We estimate the damage threshold of TiO\({}_{2}\) for 800 nm to be 0.1 J/cm\({}^{2}\). In the perturbative regime, induced currents exhibit a phase shift proportional to the peak intensity of the laser pulse. This phase shift is attributed to the intensity-dependent changes in the number of free carriers, indicative of the optical Kerr effect. Leveraging the linear dependence of the phase shift on peak intensity, we estimate the nonlinear refractive index (\(n_{2}\)) of TiO\({}_{2}\) to be \(3.54\times 10^{-11}\) cm\({}^{2}\)/W.
## I Introduction

Time-dependent density-functional theory (TDDFT) [1] describes the quantum dynamics of electrons under the influence of a time-dependent external potential [2; 3; 4; 5; 6; 7]. TDDFT calculations are used to study ultrashort laser-matter interactions, including high-harmonic generation (HHG) [8; 9; 10], nonlinear current injection [11; 12], the formation of Floquet-Bloch states [13; 14], and laser ablation [15; 16]. TDDFT computations have also been utilized to distinguish between the purely electronic and the phononic contributions to laser-induced non-equilibrium dynamics in metals [17]. Furthermore, TDDFT has been applied to study the influence of the laser pulse width on the formation of nitrogen-vacancy centers in the diamond lattice [18]. The strong-field response in solids has become an area of renewed interest due to recent experimental evidence that dielectrics can survive electric fields approaching their critical fields when exposed to laser pulses shorter than the electronic relaxation time scales [19; 20; 21]. Initiating, driving, and probing the nonlinear electron dynamics in crystalline materials is now possible with optical sub-cycle resolution, opening the door for optical field-effect devices operating within single optical cycles and for petahertz signal processing [20; 21; 22; 23; 24; 25; 26; 27; 28]. For instance, a reversible energy exchange at sub-30-attosecond timescales was observed in fused silica by Sommer et al. [27]. Under the influence of strong electric fields, it has been shown that the AC conductivity of fused silica increases by 18 orders of magnitude within one femtosecond [23], and that this increase is completely reversible. TDDFT calculations have shown that electron tunneling is the fundamental mechanism of carrier injection in silica under few-cycle extreme ultraviolet (XUV) illumination [26]. Materials undergo dynamic metallization [29; 30; 31; 22] when irradiated with optical pulses of amplitude as large as 1 V/Å. This observation was also supported by TDDFT calculations [11]. TDDFT calculations of ultrashort laser-induced electron dynamics for nonlinear photonic applications have so far focused on Si [15; 26], SiO\({}_{2}\) [31; 4; 15], linear carbon chains [32], diamond [33; 34; 2; 35], phosphorene [36], and MoS\({}_{2}\) [37]. Titanium dioxide (TiO\({}_{2}\)), commonly used as a saturable absorber in passively Q-switched fiber lasers [38; 39], has great potential for enabling nonlinear photonics. The nonlinear optical response of anatase TiO\({}_{2}\) has a typical recovery period of approximately 1.5 ps [40]. The nonlinear index (\(n_{2}\)) of bulk and thin-film TiO\({}_{2}\) ranges from 0.8 to 3\(\times 10^{-14}\) cm\({}^{2}\)/W [41; 42; 43], which is greater than the nonlinear index of silica fiber (2.48\(\times 10^{-16}\) cm\({}^{2}\)/W [44]).
Moreover, the two-photon absorption of TiO\({}_{2}\) at 800 nm is minimal, making it ideal for waveguides operating near 800 nm [45]. TiO\({}_{2}\) can be formed at low temperatures (\(<\)400 \({}^{\circ}\)C) and offers advantages over silicon nitride with its higher refractive index (2.4 vs. 2.0) and more than three times stronger Kerr non-linearity [46; 47; 41]. These properties enable back-end integration with silicon microphotonic devices. Existing estimates of \(n_{2}\) of TiO\({}_{2}\) come either from femtosecond z-scan measurements or from fitting nonlinear pulse propagation simulations (based on the nonlinear Schrödinger equation) to experimental data [48]. A systematic analysis of ultrafast nonlinear optical interactions in TiO\({}_{2}\) from a microscopic perspective has yet to be carried out. This study uses first-principles simulations to examine the microscopic electron dynamics of crystalline anatase TiO\({}_{2}\) driven by ultrashort and intense laser fields. We employ TDDFT calculations as implemented in the software package OCTOPUS [49]. We explore the response of anatase TiO\({}_{2}\) to 800 nm and 400 nm laser pulses with intensities spanning from the perturbative to the strong-field (non-perturbative) regime. Different regimes of nonlinear interaction with the external electric field are characterized, and the various underlying mechanisms are analyzed. The evolution of the photoinduced current and of the energy transfer during the interaction is studied. We determine the nonlinear refractive index and the optical damage threshold of anatase TiO\({}_{2}\), and our results are in excellent agreement with previously reported experimental data. The paper is organized as follows. Section II describes the computational methods employed for determining the photoinduced current and the energy dynamics of TiO\({}_{2}\). The results and analysis of our study are discussed in Section III, where we also compare them with the existing experimental data. We conclude the paper with a summary in Sec. IV.

## II Computational Methods

### Time-dependent Density Functional Theory

The electron dynamics in a unit cell of a periodic crystal driven by a time-dependent electric field \(\mathbf{E}(\mathbf{r},t)\) is described in terms of the time-dependent Kohn-Sham (KS) equations \[i\frac{\partial}{\partial t}u_{n,k}(\mathbf{r},t)=\left\{\frac{1}{2}\left[\mathbf{p}+\mathbf{A}_{\mathrm{s}}(\mathbf{r},t)\right]^{2}+v_{\mathrm{s}}(\mathbf{r},t)\right\}u_{n,k}(\mathbf{r},t), \tag{1}\] where \(u_{n,k}(\mathbf{r},t)\) denotes the KS orbitals with the band index \(n\) and the electron wave vector \(k\), and \(v_{\mathrm{s}}[n](\mathbf{r},t)=v_{\mathrm{ion}}(\mathbf{r},t)+v_{\mathrm{H}}[n](\mathbf{r},t)+v_{\mathrm{xc}}[n](\mathbf{r},t)\) is the KS potential, with \(v_{\mathrm{ion}}\) denoting the external ionic potential, \(v_{\mathrm{H}}\) the Hartree potential, and \(v_{\mathrm{xc}}\) the exchange-correlation (XC) potential. Furthermore, \(\mathbf{p}\) is the momentum operator and \(\mathbf{A}_{\mathrm{s}}(\mathbf{r},t)=\mathbf{A}(\mathbf{r},t)+\mathbf{A}_{\mathrm{xc}}(\mathbf{r},t)\) is the vector potential, composed of the applied vector potential \(\mathbf{A}(\mathbf{r},t)\) and an XC contribution \(\mathbf{A}_{\mathrm{xc}}(\mathbf{r},t)\) [50]. The applied vector potential represents an applied electromagnetic field, such as a laser pulse, and is related to the applied electric field by \(\mathbf{E}(\mathbf{r},t)=-(1/c)[\partial\mathbf{A}(\mathbf{r},t)/\partial t]\).
Note that the laser pulse can be treated as a spatially uniform field \(\mathbf{E}(t)\) under the dipole approximation. Solving the time-dependent KS equations with the exact XC potential and XC vector potential yields the exact time-dependent electron density \(n(\mathbf{r},t)=\sum_{n,\mathbf{k}}^{occ}u_{n,k}^{*}(\mathbf{r},t)\,u_{n,k}(\mathbf{r},t)\). However, in practice approximations are used: e.g., a particular approximation is used to express the XC potential [51], the adiabatic approximation is applied, and \(\mathbf{A}_{\mathrm{xc}}\) is often neglected. We follow the general practice by applying these approximations as detailed below. Note that we adopt Hartree atomic units, i.e., \(\hbar=e=m=1\). Another useful quantity is the microscopic current density, which is determined in the outlined framework as \[\mathbf{j}(\mathbf{r},t)=\sum_{n\mathbf{k}}^{occ}\frac{1}{2}\left[u_{n,k}^{*}(\mathbf{r},t)\left(\mathbf{p}+\mathbf{A}(t)\right)u_{n,k}(\mathbf{r},t)+\mathrm{c.c.}\right], \tag{2}\] where the summation runs over the occupied bands. The macroscopic current density \(J(t)\) along the laser polarization direction \(\mathbf{E_{0}}\) is obtained by averaging \(\mathbf{j}(\mathbf{r},t)\) over the unit cell with volume \(\Omega\), \[J(t)=\frac{1}{\Omega}\int_{\Omega}d^{3}\mathbf{r}\,\mathbf{j}(\mathbf{r},t)\cdot\mathbf{E_{0}}/|\mathbf{E_{0}}|. \tag{3}\] The polarization density corresponding to \(J(t)\) is \(P(t)=\int_{0}^{t}J(t^{\prime})dt^{\prime}\). The time-resolved energy density \(W(t)\) transferred between the field and the material is evaluated by \[W(t)=\int_{-\infty}^{t}dt^{\prime}\ \mathbf{E}(t^{\prime})\cdot\mathbf{J}(t^{\prime}). \tag{4}\] Its resultant value at the end of the laser pulse, \(W(t\rightarrow\infty)\), determines the total amount of energy dissipated during the light-matter interaction. The number of electrons excited from the valence band to the conduction band per unit cell, \(N_{exc}\), is calculated using [52] \[N_{exc}(t)=\sum_{n,n^{\prime},\mathbf{k}}\left(\delta_{nn^{\prime}}-\mid\langle u_{n,k}(0)|u_{n^{\prime},k}(t)\rangle\mid^{2}\right). \tag{5}\] Here \(u_{n,k}(0)\) is the KS orbital of the initial state, \(u_{n^{\prime},k}(t)\) is the time-dependent KS orbital, and \(\delta\) is the Kronecker delta. We use the real-space, real-time code Octopus [49] to carry out the TDDFT calculations. The laser-induced dynamics of the valence electrons are calculated in a unit cell of anatase TiO\({}_{2}\). Anatase TiO\({}_{2}\) crystallizes in a tetragonal unit cell with lattice spacing \(a=3.97\) Å and \(c/a=2.52\). We treat the interaction \(v_{\mathrm{ion}}(\mathbf{r},t)\) between the valence electrons and the ionic cores by the Kleinman-Bylander pseudopotential [53]. The generalized gradient approximation (GGA) based on the Perdew-Burke-Ernzerhof functional (PBE) [54] is employed for the XC potential. The KS orbitals are represented on a discretized real-space grid with \(\Delta x=\Delta y=0.12\) Å and \(\Delta z=0.20\) Å, which is equivalent to a plane-wave cut-off of 900 eV. The time-dependent KS equations are solved on a uniform grid with \(\approx 29000\) grid points. The Brillouin zone is uniformly sampled by a \(12\times 12\times 4\) Monkhorst-Pack grid [55]. The discretization consists of 363 symmetry-reduced k-points for \(x\)-polarized light. With this setup, the system's total energy converges to within 1 meV. First, the ground state of TiO\({}_{2}\) is calculated, which is used as the initial state for the time-dependent calculations.
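Once \(J(t)\) is available from such a run, the derived quantities \(P(t)\) and \(W(t)\) of Eqs. (3)-(4) reduce to simple time quadratures. The following minimal post-processing sketch is illustrative only; the model field and current below merely stand in for actual TDDFT output.

```python
import numpy as np

# Time axis in atomic units (30 fs ~ 1240 a.u.), step 0.02 a.u. as in the text.
t = np.arange(0.0, 1240.0, 0.02)

# Placeholder E(t) and J(t); in practice these are read from the TDDFT run.
omega = 0.057                                   # ~1.55 eV (800 nm) in hartree
env = np.exp(-(t - 620.0) ** 2 / (2 * 150.0 ** 2))
E = 1e-3 * env * np.cos(omega * t)              # model field
J = 1e-5 * env * np.sin(omega * t)              # model current, ~pi/2 out of phase

def cumtrapz(y, x):
    """Cumulative trapezoidal integral, starting at zero."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))

P = cumtrapz(J, t)                              # polarization density P(t)
W = cumtrapz(E * J, t)                          # transferred energy density, Eq. (4)
print(f"residual transferred energy W(t->inf) ~ {W[-1]:.3e} a.u.")
```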
We then time-propagate the KS orbitals by solving Eq. (1) in the time domain. The time evolution is calculated with the approximated enforced time-reversal symmetry (AETRS) [56] as the time-evolution propagator, with a time step \(\Delta t=0.02\ a.u.\) The total simulation duration is 30 fs (1240 atomic units with a step size of 0.02 a.u., i.e., \(\approx\)62000 time steps). Note that, during the time evolution, ions are kept at their ground-state equilibrium positions. Furthermore, the adiabatic approximation [57] is used, which means that the time dependence of the XC potential is approximated by evaluating a ground-state XC functional at the time-dependent density. We calculate the response of TiO\({}_{2}\) to a linearly polarized laser pulse, which is represented by the spatially-uniform electric field through the corresponding vector potential \[\mathbf{A}(t)=\frac{\mathbf{E_{0}}}{\omega}\exp\left[\frac{-(t-t_{0})^{2}}{2\tau_{0}^{2}}\right]\cos(\omega t)\,, \tag{6}\] where \(\omega\) is the central frequency of the laser pulse and \(\mathbf{E_{0}}\) is the amplitude of the time-dependent electric field \(\mathbf{E}(t)\), which is related to the laser peak intensity by \(I_{0}=c|\mathbf{E_{0}}|^{2}/8\pi\).

## III Results

The following section presents the electron dynamics of crystalline anatase TiO\({}_{2}\) excited by 800 nm and 400 nm laser pulses represented by Eq. (6). The duration of the pulse is set to T = 30 fs (\(\approx\)12 fs FWHM of the envelope), while the peak intensity of the pulse is varied from \(10^{7}\) to \(10^{16}\) W/cm\({}^{2}\). The laser field is polarized along the \(x\)-axis.

### Energy Transfer Dynamics

The energy transferred from the applied electric field to anatase TiO\({}_{2}\) is evaluated by Eq. (4). Fig. 1 shows the resultant energy dynamics for incident laser pulses at 800 nm (\(\hbar\omega=1.55\) eV) with different peak intensities.

Figure 1: Time-dependent energy exchanged between anatase TiO\({}_{2}\) and 30 fs pulses at 800 nm, given for different peak intensities: Panel (a) represents the non-resonant virtual energy transfer, where the transferred energy oscillates synchronously with \(\mathbf{E}^{2}(t)\) (bottom panel). This occurs for intensities below \(1\times 10^{11}\) W/cm\({}^{2}\). Panel (b) shows the energy exchange via resonant two-photon absorption for intensities ranging from \(2\times 10^{11}\) to \(7\times 10^{11}\) W/cm\({}^{2}\). For panels (a) and (b) the dynamics at \(1\times 10^{10}\) W/cm\({}^{2}\) is shown as the reference.

The central frequency of the pulse corresponds to an energy lower than the direct gap (2.25 eV) [58], leading to two general types of temporal energy transfer profiles. The first type is non-resonant excitation. The transferred energy, in this case, oscillates synchronously with \(\mathbf{E}^{2}(t)\), and the system almost returns to the ground state at the end of the pulse. This represents a virtual energy transfer from the laser pulse to the electrons. Such dynamics is observed in Fig. 1(a) for peak intensities from \(1\times 10^{10}\) to \(1\times 10^{11}\) W/cm\({}^{2}\). This behavior is typical when the frequency is below the bandgap and the intensity is very low. The second kind of response is resonant excitation, where, along with the virtual oscillations, the transferred energy gradually increases during the pulse and persists beyond the pulse width. Given that the photon energy is below the bandgap, this occurs when the field is strong enough to induce real excitation through multi-photon absorption.
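For concreteness, the pulse of Eq. (6) and the corresponding field can be generated numerically as in the sketch below. The pulse-center and width values are assumptions for illustration, and the field is obtained as \(\mathbf{E}(t)=-\partial\mathbf{A}/\partial t\), i.e., with the \(1/c\) factor of the velocity-gauge relation absorbed into \(\mathbf{A}\) for simplicity.

```python
import numpy as np

omega = 0.057                      # 1.55 eV (800 nm) in hartree
t0, tau0 = 620.0, 150.0            # pulse center and Gaussian width in a.u. (assumed)
I0 = 1e12                          # peak intensity in W/cm^2
E0 = np.sqrt(I0 / 3.509e16)        # amplitude in a.u.; 1 a.u. intensity = 3.509e16 W/cm^2

t = np.arange(0.0, 1240.0, 0.02)
# Gaussian-envelope vector potential of Eq. (6)
A = (E0 / omega) * np.exp(-(t - t0) ** 2 / (2.0 * tau0 ** 2)) * np.cos(omega * t)
# Electric field from the vector potential (finite differences)
E = -np.gradient(A, t)
```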
Fig. 1(b) illustrates the energy transfer \(W(t)\) for this scenario, observed for intensities ranging from \(2\times 10^{11}\) to \(7\times 10^{11}\) W/cm\({}^{2}\). The permanent energy transfer is related to the creation of electron-hole pairs, corresponding to the population transfer from valence bands to conduction bands. Figure 2 illustrates the residual excitation energy in anatase TiO\({}_{2}\) after interacting with 800 and 400 nm laser pulses at different peak intensities. The energy absorbed at a wavelength of 400 nm is directly proportional to the intensity of the light and can be accurately described by the single-photon absorption expression \(\sigma^{(1)}I\), where \(\sigma^{(1)}\) is a constant coefficient. This relationship holds true for light intensities lower than \(1\times 10^{11}\) W/cm\({}^{2}\). This linear absorption behavior at 400 nm is expected, since a single photon (3.10 eV) bridges the direct gap of anatase TiO\({}_{2}\). Conversely, single-photon absorption below the direct bandgap is unlikely at 800 nm, and hence there is no permanent energy transfer for intensities below \(1\times 10^{10}\) W/cm\({}^{2}\). As the intensity increases from \(1\times 10^{10}\) W/cm\({}^{2}\) up to \(1\times 10^{12}\) W/cm\({}^{2}\), the deposited energy increases and closely follows a quadratic dependence \(\sigma^{(2)}I^{2}\) on intensity (Fig. 2). At approximately \(1\times 10^{13}\) W/cm\({}^{2}\), the excitation-energy curves for 400 nm and 800 nm merge into a single curve. Below the intersection point, the excitation energy displays a perturbative behavior that can be effectively modeled by \(I^{N}\), where \(I\) represents the laser intensity and \(N\) corresponds to the number of photons required to exceed the bandgap energy. At intensities above the intersection region, the excitation energy is independent of laser frequency, and the curve's slope decreases compared to the region below the intersection [4]. This suggests a saturation-like behavior in the material's response. The similarity of the number density of excited electrons for both 800 nm and 400 nm beyond \(\sim 10^{13}\) W/cm\({}^{2}\) also indicates saturation effects [59]. For intensities higher than \(1\times 10^{14}\) W/cm\({}^{2}\), the energy transfer exhibits an abrupt increase, indicating the onset of laser-induced dielectric breakdown of the material, as outlined in Sec. III.4. Next, we analyze the energy of excited electrons at 800 nm and 400 nm beyond the pulse duration. The residual energy per excited electron (\(E_{exc}^{res}\)) is obtained by dividing the energy (Fig. 1) by the number of excited electrons [59] at their saturation values. The results are shown in Fig. 3. At 400 nm, \(E_{exc}^{res}\) is approximately 3.10 eV for intensities up to \(\approx 1\times 10^{12}\) W/cm\({}^{2}\), indicating single-photon absorption. At 800 nm, no excited electrons are observed until the intensity reaches \(1\times 10^{10}\) W/cm\({}^{2}\). Above this intensity, \(E_{exc}^{res}\) approaches twice the photon energy (3.10 eV) for intensities ranging from \(1\times 10^{10}\) W/cm\({}^{2}\) to \(1\times 10^{12}\) W/cm\({}^{2}\), indicating ionization by two-photon absorption. \(E_{exc}^{res}\) gradually increases above the 3.10 eV reference line for intensities larger than \(\approx 1\times 10^{12}\) W/cm\({}^{2}\) in Fig. 3, potentially due to higher-order multiphoton absorption and secondary excitation of excited electrons [60; 61].
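The perturbative scaling \(W\propto I^{N}\) invoked above can be extracted by a log-log fit of the absorbed energy versus peak intensity. A short sketch with mock two-photon data follows; the arrays are placeholders, not values from this work.

```python
import numpy as np

I = np.array([1e10, 3e10, 1e11, 3e11, 1e12])   # peak intensities (W/cm^2)
W = 2.4e-30 * I ** 2                            # mock data obeying sigma2 * I^2

# Slope of log W vs log I gives the photon order N (here exactly 2).
N, log_sigma = np.polyfit(np.log(I), np.log(W), 1)
print(f"fitted photon order N = {N:.2f}")
```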
The Keldysh parameter, denoted by \(\gamma\), serves as an approximate measure to determine the type of strong-field ionization [60].

Figure 2: Dependence of the energy absorbed in the anatase TiO\({}_{2}\) crystal on the peak intensity of 800 nm and 400 nm laser pulses. Energy exchange at 400 nm is predominantly through single-photon absorption for all intensities up to \(\approx 5\times 10^{12}\) W/cm\({}^{2}\). For the 800 nm pulse, no energy is exchanged until the peak intensity reaches \(\geq 1\times 10^{10}\) W/cm\({}^{2}\), while two-photon absorption becomes dominant for intensities ranging from \(1\times 10^{10}\) W/cm\({}^{2}\) up to \(1\times 10^{12}\) W/cm\({}^{2}\). Typical intensity ranges of 800 nm pulses over which multi-photon absorption (\(\gamma>1\)) or tunneling ionization (\(\gamma<1\)) becomes the dominant process are highlighted.

The Keldysh parameter for the interaction of a laser pulse of frequency \(\omega\) and field amplitude \(E_{0}\) with a material of energy gap \(\Delta\) is given by \[\gamma=\frac{\omega\sqrt{m\Delta}}{eE_{0}}, \tag{7}\] where the field amplitude is related to the peak intensity \(I\) of the laser pulse by \(E_{0}[V/cm]=27.44\sqrt{I[W/cm^{2}]}\), and \(e\) and \(m\) are the charge and mass of the electron, respectively. The condition \(\gamma>1\) represents multi-photon ionization being the primary mechanism of ionization, whereas \(\gamma<1\) indicates that tunneling ionization dominates. As the intensity of the laser pulse increases, a transition from multi-photon absorption to tunneling ionization can be observed. The Keldysh parameter at 800 nm is calculated at different peak intensities. Based on the value of \(\gamma\), the intensities over which multiphoton or tunneling ionization dominate are highlighted in Fig. 2. When \(I\approx 9\times 10^{12}\) W/cm\({}^{2}\), the Keldysh parameter assumes a value of 1. This indicates that, for 800 nm, ionization proceeds predominantly via multiphoton absorption below an intensity of \(9\times 10^{12}\) W/cm\({}^{2}\), while tunneling ionization dominates above it.

### Saturation of photo-induced current at 400 nm

Figure 4 shows the induced current for a laser pulse at 400 nm and a pulse duration of 30 fs, with peak intensities ranging from \(I_{0}=1\times 10^{10}\) W/cm\({}^{2}\) to \(I_{0}=1\times 10^{14}\) W/cm\({}^{2}\). We take the current profile at \(I_{0}=1\times 10^{10}\) W/cm\({}^{2}\) as the reference (weak) current to discuss the dynamics. In Figs. 4 (a-e), the reference current is multiplied by a suitable factor so that the difference between the currents at weak and strong field strengths indicates the nonlinear interaction. When the response is linear, the currents for weak and strong intensities coincide and show similar profiles. In Fig. 4(a), the temporal evolution of the current at \(I_{0}=1\times 10^{11}\) W/cm\({}^{2}\) follows the driving laser field, and it coincides with the reference current, indicating a linear response. The response is dielectric-like: the current is \(\pi/2\) phase shifted with respect to the electric field \(\mathbf{E}(t)\). For \(I_{0}>1\times 10^{11}\) W/cm\({}^{2}\) (Fig. 4(b-e)), the induced current is initially very close to the reference current. However, as the electric field of the pulse increases, the induced current gradually becomes weaker than expected from the linear response. This nonlinear suppression of the induced current occurs due to the bleaching of valence band electrons by absorption at 400 nm [5; 7].
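As a numerical check of the Keldysh analysis above, Eq. (7) can be evaluated in atomic units (\(m=e=1\)). With the photon energy and direct gap quoted in the text, and the standard atomic-unit intensity conversion (an assumption of this sketch, not stated in the paper), \(\gamma\) indeed crosses 1 near \(9\times 10^{12}\) W/cm\({}^{2}\).

```python
import numpy as np

def keldysh_gamma(I_Wcm2, photon_eV=1.55, gap_eV=2.25):
    """Keldysh parameter of Eq. (7), evaluated in atomic units (m = e = 1)."""
    omega = photon_eV / 27.211       # photon energy in hartree
    delta = gap_eV / 27.211          # band gap in hartree
    E0 = np.sqrt(I_Wcm2 / 3.509e16)  # field amplitude in a.u.
    return omega * np.sqrt(delta) / E0

# gamma ~ 3 (multiphoton), ~ 1 (boundary), ~ 0.3 (tunneling)
for I in (1e12, 9e12, 1e14):
    print(f"I = {I:.0e} W/cm^2 -> gamma = {keldysh_gamma(I):.2f}")
```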
The majority of valence electrons are already excited, and the conduction bands are mostly filled, resulting in the suppression of further electron excitation. Additionally, because the frequency of the applied laser pulse is higher than the bandgap value, a significant current remains after the incident pulse has ended.

Figure 3: The energy of the excited electron (\(E_{exc}^{res}\)) at 800 nm and 400 nm beyond the pulse duration. At 400 nm, \(E_{exc}^{res}\) is 3.10 eV for intensities up to \(\approx 5\times 10^{12}\) W/cm\({}^{2}\), indicating single-photon absorption. At 800 nm, absorption is unlikely for \(I_{0}\leq 1\times 10^{10}\) W/cm\({}^{2}\). For intensities ranging from \(1\times 10^{10}\) W/cm\({}^{2}\) to \(1\times 10^{12}\) W/cm\({}^{2}\) the energy per electron lies at the two-photon absorption energy (3.10 eV). \(E_{exc}^{res}\) gradually increases and becomes frequency independent for intensities larger than \(\approx 1\times 10^{12}\) W/cm\({}^{2}\), potentially due to higher-order multiphoton absorption and secondary excitation of excited electrons.

Figure 4: Current profiles for 400 nm laser pulses of total duration 30 fs showing the saturation of the current as the peak intensity is increased from \(I_{0}=1\times 10^{10}\) W/cm\({}^{2}\) to \(I_{0}=1\times 10^{14}\) W/cm\({}^{2}\). This is a nonlinear optical effect occurring because of ground-state bleaching due to linear absorption at 400 nm.

### The Nonlinear Refractive Index Change

The phase shift of the light-induced current at 800 nm is depicted for various intensities in Fig. 5, with the current at \(1\times 10^{8}\) W/cm\({}^{2}\) taken as the reference. For a pulse of given peak intensity, the induced current in the initial part of the pulse is in phase with the reference current. However, as the electric field of the pulse increases, the induced current starts accumulating a phase shift. The accumulated phase shift, calculated from the temporal shift at the zero-crossing after the peak of the pulse (\(\Delta\phi_{NL}^{0}\)) [62], increases in Fig. 5 as the peak intensity is increased. The phase shift can be related to the optical Kerr effect, in which the refractive index of the material varies in proportion to the intensity envelope of the driving field [27; 63]. The increase in phase shift can be described as a linear rise with intensity, \(\Delta\phi_{NL}^{0}=m\times I_{0}\), with \(m=1.06\times 10^{-13}\) cm\({}^{2}\)/W. From the relation \(m=kln_{2}\), where \(k=2\pi/\lambda\) and \(l=3.79\) Å is the propagation length, the nonlinear refractive index \(n_{2}=3.54\times 10^{-11}\) cm\({}^{2}\)/W can be extracted for 800 nm, 30 fs pulses.

Figure 5: The intensity scaling of the phase shift of the light-induced current at 800 nm, shown for different intensities. The phase shift is expressed by taking the current at \(1\times 10^{8}\) W/cm\({}^{2}\) as the reference, and is determined from the temporal shift of the induced current calculated at the zero-crossing after the peak of the pulse (\(\Delta\phi^{0}_{NL}\)), as illustrated in the supplemental Fig. [62]. In the inset, the induced current in the region close to the zero-crossing is zoomed in, highlighting the temporal shift. The phase shift can be related to the optical Kerr effect, according to which the increase in phase shift can be described as a linear rise with intensity, \(\Delta\phi^{0}_{NL}=m\times I_{0}\). From the value of \(m\) obtained from the figure, the nonlinear refractive index \(n_{2}=3.54\times 10^{-11}\) cm\({}^{2}\)/W is extracted.
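The arithmetic behind the extracted \(n_{2}\) is a one-liner, shown below with the quoted slope and propagation length; the small difference from the quoted \(3.54\times 10^{-11}\) cm\({}^{2}\)/W reflects rounding of \(m\).

```python
import math

m = 1.06e-13        # cm^2/W, fitted slope of the phase shift vs peak intensity
lam = 800e-7        # wavelength in cm (800 nm)
l = 3.79e-8         # propagation length in cm (3.79 angstrom)

k = 2.0 * math.pi / lam
n2 = m / (k * l)    # from m = k * l * n2
print(f"n2 = {n2:.2e} cm^2/W")   # ~3.6e-11 cm^2/W
```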
### Onset of dielectric breakdown

In Fig. 6, we present the time evolution of the current, energy, and excited electron density for three different peak intensities, \(I_{0}=10^{10},\ 10^{13}\) and \(10^{14}\) W/cm\({}^{2}\). The laser frequency is \(\omega=1.55\) eV (800 nm), and the pulse duration is \(T=30\) fs. The time profiles of the electric field and the induced current are depicted in Fig. 6 A(I-III). The electric field profile is normalized with respect to the peak of the induced current at a given peak intensity to enable a comparison of the relative phase. Fig. 6 B(I-III) presents the number of excited electrons calculated using Eq. (5), while Fig. 6 C(I-III) depicts the excitation energy defined in Eq. (4) as a function of time.

Figure 6: Different regimes of the interaction of TiO\({}_{2}\) with 30 fs laser pulses at 800 nm with peak intensities \(10^{10}\) W/cm\({}^{2}\) (top), \(10^{13}\) W/cm\({}^{2}\) (middle) and \(10^{14}\) W/cm\({}^{2}\) (bottom). Fig. A(I-III) displays the induced current density and electric field (scaled with respect to the current amplitude to show phase relations). Fig. B(I-III) shows the number density of excited electrons per cubic centimeter and Fig. C(I-III) represents the excitation energy. Dashed vertical lines in A(I-III) are given as a guide to the eye to show the phase variations.

The induced current at an intensity of \(1\times 10^{10}\) W/cm\({}^{2}\) (Fig. 6 A(I)) follows the pulse's electric field with a phase shift of \(\pi/2\), indicating a linear dielectric response. The excited electron density (Fig. 6 B(I)) and excitation energy (Fig. 6 C(I)) at this intensity oscillate synchronously with the electric field, and the ground-state conditions are restored after the interaction. The situation changes significantly at intensities of \(10^{13}\) W/cm\({}^{2}\) and \(10^{14}\) W/cm\({}^{2}\). The induced current during the interaction is distorted (Fig. 6 A(II) and A(III)), and the phase difference between the applied electric field and the induced current deviates from \(\pi/2\). For \(I=1\times 10^{14}\) W/cm\({}^{2}\), the current and the electric field become nearly out of phase, indicating a strongly nonlinear response of the electrons to the incident field [4]. Starting from about 10 fs, the number of excited electrons and the excitation energy increase rapidly at \(10^{13}\) W/cm\({}^{2}\) (Fig. 6 B(II) and C(II)) and \(10^{14}\) W/cm\({}^{2}\) (Fig. 6 B(III) and C(III)). By 20 fs, these quantities reach saturation values. Even after the laser pulse ends, the oscillation of the induced current persists, which is a clear indication of the onset of optical breakdown [52]. This behavior is consistent with the abrupt increase in energy discussed in Sec. III.1 due to resonant energy transfer at the breakdown. However, such oscillations will eventually decay due to dissipative processes such as electron-phonon coupling and impurity and disorder scattering on longer time scales (\(\gtrsim 100\) fs) [11]. Electrons excited into the conduction band exhibit a metallic response, resulting in a collective plasmon mode. The plasma frequency corresponding to an electron density \(n_{e}\) can be estimated by \[\omega_{p}=\left(\frac{n_{e}e^{2}}{m\epsilon}\right)^{1/2}, \tag{8}\] where \(\epsilon\) is the dielectric constant of anatase TiO\({}_{2}\) (\(\epsilon=5.82\)) [64], and \(m\) and \(e\) are the mass and charge of the electron, respectively.
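Equation (8) can be checked in SI units, where an explicit vacuum permittivity \(\epsilon_{0}\) appears; for the excited-electron density reported below, the sketch reproduces the quoted plasma frequency of \(\approx 1.8\) eV.

```python
import numpy as np

e = 1.602176634e-19        # electron charge (C)
m_e = 9.1093837015e-31     # electron mass (kg)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
hbar = 1.054571817e-34     # reduced Planck constant (J s)

def plasma_energy_eV(n_cm3, eps=5.82):
    """hbar * omega_p in eV for carrier density n (cm^-3), Eq. (8) in SI form."""
    n = n_cm3 * 1e6                                  # cm^-3 -> m^-3
    omega_p = np.sqrt(n * e ** 2 / (m_e * eps * eps0))
    return hbar * omega_p / e

print(f"{plasma_energy_eV(1.4e22):.2f} eV")          # ~1.82 eV
```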
At an intensity of \(1\times 10^{13}\) W/cm\({}^{2}\), the final number of excited electrons (Fig. 6 B(II)) is \(1.4\times 10^{22}\) cm\({}^{-3}\). This corresponds to a plasma frequency of \(\omega_{p}=1.82\) eV, slightly higher than the frequency of the applied laser pulse (\(\omega_{l}=1.55\) eV). As the intensity of the applied field increases, the density of electrons excited into the conduction band via the two-photon and tunneling mechanisms also gradually increases. When the electron density reaches the threshold where the plasma and laser frequencies are in resonance, a significant energy transfer occurs from the laser to the electrons. The low-amplitude coherent oscillations of the induced current observed on the trailing edge of the laser pulse (Fig. 6 A(II) and A(III)) result from the partial coherence between the involved non-stationary states left by the laser field. This is characteristic of plasmonic metal systems [65]. This ultrafast and dissipative strong-field transition to plasmonic metal-like behavior is known as dynamic metallization [22; 66; 67]. Based on the dynamics presented in Fig. 2 and in Fig. 6, \(I_{0}=1\times 10^{13}\) W/cm\({}^{2}\) can be identified as the intensity at which the laser-induced damage starts. For 30 fs pulses (11.7 fs FWHM), this intensity corresponds to a damage threshold of 0.1 J/cm\({}^{2}\). The dynamics outlined in the preceding section are reflected in the change in electron density induced by the laser pulse, shown in Fig. 7. The snapshots displayed for various peak intensities indicate the difference in electron density between the perturbed and unperturbed systems at the instant when the electric field of the pulse reaches zero right after its peak value. The positive (increase from the ground state) and negative (reduction from the ground state) variations in the density are denoted in Figs. 7 (c) and (d) by red and blue, respectively. When the laser is weak [Figs. 7 (a) and (b)], the variation of the electron density around the ionic cores is uniform, corresponding to a linear and adiabatic response. At higher laser intensity [Figs. 7 (c) and (d)], the charge distribution extends into the interstitial region, indicating the laser-induced population of delocalized conduction band levels [33].

### Comparison with experiments

We now compare the values estimated in the current work for the nonlinear refractive index (\(n_{2}\)) and the laser-induced damage threshold (LIDT) of TiO\({}_{2}\) with those found in the literature. The measured values of \(n_{2}\) of TiO\({}_{2}\) reported in the literature are summarised in Table 1. The value of \(n_{2}\) calculated in the current work for 30 fs pulses at 800 nm is about three orders of magnitude greater than that measured using an identical wavelength and pulse width [68]. The variability of the experimental data for \(n_{2}\) presented in Table 1 shows that the duration and frequency of the laser pulse have a significant impact on the observed value of \(n_{2}\). Moreover, the measured values of \(n_{2}\) vary due to a variety of factors, including nonlinear refraction dispersion, different sizes and volume fractions of the synthesized materials, the effect of structure confinement in the case of nanostructured compounds, etc.

Figure 7: Snapshots of the electron density difference with respect to the unperturbed state, evaluated for different intensities. The snapshots displayed for the various peak intensities are taken at the same instant of time, when the electric field of the pulse reaches zero right after its peak value. The red and blue colors indicate the gain and the loss of density, respectively, with respect to the ground state.
The simulations described here are for the bulk phase of TiO\({}_{2}\), whereas the majority of the reported \(n_{2}\) values are for thin films of TiO\({}_{2}\). Additionally, collisional relaxation, which is not taken into account in the current work, becomes significant for laser pulses longer than \(\approx 100\) fs.

Table 1: Summary of available experimental data for the nonlinear refractive index (\(n_{2}\)) of TiO\({}_{2}\) measured using ns and fs laser pulses at different wavelengths. The \(n_{2}\) calculated in this work from TDDFT simulation is also given for comparison.

| \(n_{2}\) (cm\({}^{2}\)/W) | \(\lambda\) (nm) | Pulse width | Ref. |
| --- | --- | --- | --- |
| \(\sim 10^{-14}\) | 532, 780 | 35 fs | [68] |
| \(6.32\times 10^{-13}\) | 800 | 50 fs | [69] |
| \(2.0\times 10^{-14}\) | 800 | 50 fs | [43] |
| \(1.0\times 10^{-15}\) | 800 | 60 fs | [70] |
| \(2.5\times 10^{-11}\) | 800 | 250 fs | [40] |
| \(6.2\times 10^{-11}\) | 800 | 250 fs | [71] |
| \(1.2\times 10^{-13}\) | 532 | 5 ns | [72] |
| \(1.5\times 10^{-13}\) | 532 | 7 ns | [73] |
| \(3.54\times 10^{-11}\) | 800 | 30 fs | This Work |

Table 2 presents the experimental literature for the laser-induced damage threshold (LIDT) of TiO\({}_{2}\). We calculated the damage threshold using the critical density criterion; it is similar to that measured in experiments with comparable parameters. The damage threshold depends on the frequency and duration of the laser pulse and on the dynamics toward a thermal distribution. Thermal effects can probably be neglected in our case because an ultra-short laser pulse (\(<50\) fs) was used. The bandgap of TiO\({}_{2}\) in the present study is underestimated due to the GGA functional [76]. Using more accurate functionals leads to a larger bandgap. This would lead to a higher damage threshold, in agreement with the trend of the experimental data [77].

Table 2: The measured values of the laser-induced damage threshold (LIDT) of TiO\({}_{2}\) available in the literature. The table also lists the LIDT calculated in the present work using TDDFT simulations.

| LIDT (J/cm\({}^{2}\)) | \(\lambda\) (nm) | Pulse width | Ref. |
| --- | --- | --- | --- |
| 0.5 | 800 | 50 fs | [74] |
| 0.6 | 800 | 220 fs | [74] |
| 1.43 | 532 | 10 ns | [75] |
| 2.09 | 1064 | 10 ns | [75] |
| 0.1 | 800 | 30 fs | This Work |

## IV Summary

We presented a systematic investigation of the perturbative and non-perturbative electron dynamics of TiO\({}_{2}\) driven by 30 fs laser pulses at 400 nm and 800 nm using ab initio time-dependent density functional theory. The mechanism of the nonlinear optical interaction of TiO\({}_{2}\) at different intensities is discussed. We can see the onset of laser-induced material damage and the accompanying plasmon dynamics from first principles. The trends of the nonlinear refractive index (\(n_{2}\)) and laser-induced damage threshold obtained from the simulations are consistent with the experimental data in the literature. The non-resonant, perturbative interactions at 800 nm and the accompanying nonlinear phase shift observed in TiO\({}_{2}\) well below the damage threshold hold promise for incorporating TiO\({}_{2}\) in optical switches.
The present study could guide the further exploration of laser parameters and structural and defect engineering of TiO\({}_{2}\) with tailored properties for specific applications, potentially leading to improved performance in nonlinear photonics devices. By pursuing these directions, researchers can advance the understanding and utilization of TiO\({}_{2}\) and similar materials for nonlinear photonics applications. ## V Acknowledgments This work was in part supported by the Center for Advanced Systems Understanding (CASUS) which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon state government out of the State budget approved by the Saxon State Parliament.
2309.09689
Ugly Ducklings or Swans: A Tiered Quadruplet Network with Patient-Specific Mining for Improved Skin Lesion Classification
An ugly duckling is an obviously different skin lesion from surrounding lesions of an individual, and the ugly duckling sign is a criterion used to aid in the diagnosis of cutaneous melanoma by differentiating between highly suspicious and benign lesions. However, the appearance of pigmented lesions can change drastically from one patient to another, resulting in difficulties in visual separation of ugly ducklings. Hence, we propose DMT-Quadruplet - a deep metric learning network to learn lesion features at two tiers - patient-level and lesion-level. We introduce a patient-specific quadruplet mining approach together with a tiered quadruplet network to drive the network to learn more contextual information both globally and locally between the two tiers. We further incorporate a dynamic margin within the patient-specific mining to allow more useful quadruplets to be mined within individuals. Comprehensive experiments show that our proposed method outperforms traditional classifiers, achieving 54% higher sensitivity than a baseline ResNet18 CNN and 37% higher than a naive triplet network in classifying ugly duckling lesions. Visualisation of the data manifold in the metric space further illustrates that DMT-Quadruplet is capable of classifying ugly duckling lesions successfully in both a patient-specific and a patient-agnostic manner.
Nathasha Naranpanawa, H. Peter Soyer, Adam Mothershaw, Gayan K. Kulatilleke, Zongyuan Ge, Brigid Betz-Stablein, Shekhar S. Chandra
2023-09-18T11:53:57Z
http://arxiv.org/abs/2309.09689v1
# Ugly Ducklings or Swans: A Tiered Quadruplet Network with Patient-Specific Mining for Improved Skin Lesion Classification

###### Abstract

An ugly duckling is an obviously different skin lesion from surrounding lesions of an individual, and the ugly duckling sign is a criterion used to aid in the diagnosis of cutaneous melanoma by differentiating between highly suspicious and benign lesions. However, the appearance of pigmented lesions can change drastically from one patient to another, resulting in difficulties in visual separation of ugly ducklings. Hence, we propose DMT-Quadruplet - a deep metric learning network to learn lesion features at two tiers - patient-level and lesion-level. We introduce a patient-specific quadruplet mining approach together with a tiered quadruplet network to drive the network to learn more contextual information both globally and locally between the two tiers. We further incorporate a dynamic margin within the patient-specific mining to allow more useful quadruplets to be mined within individuals. Comprehensive experiments show that our proposed method outperforms traditional classifiers, achieving 54% higher sensitivity than a baseline ResNet18 CNN and 37% higher than a naive triplet network in classifying ugly duckling lesions. Visualisation of the data manifold in the metric space further illustrates that DMT-Quadruplet is capable of classifying ugly duckling lesions successfully in both a patient-specific and a patient-agnostic manner.

melanoma, deep learning, metric learning, ugly duckling

## I Introduction

In clinical practice, the most common and established criterion for visually identifying a malignant melanoma is the ABCDE (Asymmetry, Border, Colour, Diameter, Evolution) criterion [1]. This criterion indicates that a skin lesion that lacks symmetry (Asymmetry), has a spreading or irregular edge (Border), a variegated colour (Colour), a diameter larger than 6 mm (Diameter), and is changing in size and colour over time (Evolution) might be a melanoma. Nonetheless, there exist cases where malignant melanomas do not conform to this criterion, requiring an alternate recognition strategy. Therefore, [2] introduced the concept of an 'Ugly Duckling' as an additional criterion for visual inspections of the skin for melanoma. This criterion was developed based on the fact that naevi on an individual tend to resemble one another, whereas malignant melanoma often deviates from this nevus pattern and stands out from its peers on a common body region. Hence, an 'Ugly Duckling' lesion is defined as a nevus that is visually different from its surrounding naevi on an individual, and it is considered suspicious for malignancy. When identifying an ugly duckling among other lesions, clinicians have access to contextual information, as they can observe all lesions on a patient's skin and make a judgement on the visual similarity of the lesions. This similarity judgement is an important cognitive process in humans. By comparing perceptual representations, humans are able to perform tasks such as recognition and categorization. This concept of similarity underlies most machine learning techniques. Thus, we can use the same notion of perception to apply a deep learning-based solution to recognize ugly ducklings for the detection of malignant melanoma. However, disparities among individuals and their respective sets of lesions can exist in terms of colour, shape, size and distribution. The characteristics of lesions of one individual might be completely different from those of another [3].
This inconsistency is further indicated by the considerable inter-observer variability in selecting ugly duckling lesions, as expert physicians also differ in clinical experience and visual perception [4, 5]. When it comes to implementing AI-based ugly duckling recognition methods, disregarding this pattern variability between individuals and training a model only at the lesion level might be disadvantageous, leading to incorrect predictions. For example, a naive classifier trained with individual lesion images of several different individuals will not identify patient-specific lesion similarities, as an ugly duckling on one patient might look completely normal on another patient's skin (Figure 1). However, the ontology of a skin lesion dataset is such that the sub-classes (normal and ugly duckling lesions) are not mutually exclusive among individuals, although intra-patient features might be different. Thus, the classification of ugly duckling lesions poses a unique and challenging task of taking into account both inter-patient (patient-level) and intra-patient (lesion-level) feature representations. We propose to solve this problem by using metric learning - specifically a triplet network with patient-specific sample mining and dynamic separating margins, which is then extended to a tiered quadruplet network that is capable of remaining patient-agnostic while learning patient-specific representations. Metric learning automatically infers contextual information from data to measure similarity, with which naevi displaying the ugly duckling sign can be separated from the common moles or non-malignant naevi of an individual. With this system, given a set of images of naevi from a patient, the ugly ducklings can be detected easily and further observed to decide malignancy. In summary, the main contributions of this work are as follows:

1. We propose a novel patient-specific metric-learning method for improved classification of ugly duckling lesions in largely imbalanced skin lesion datasets.
2. We propose a new patient-specific triplet mining strategy that is capable of capturing the patient-level differences accurately.
3. We show that extending this triplet sampling method into a quadruplet sampling method with patient-specific dynamic margins is capable of bypassing the limitations of naive sample mining by incorporating more contextual information from two tiers of separation while maintaining the semantic differences.
4. Using the learned metric for similarity, we show that the classification of the ugly duckling lesions of a patient is improved compared to traditional classification methods.
5. We build a novel deep-learning based pipeline that can be used to automatically find ugly ducklings on a patient's skin by providing all lesion images of that patient to the network.

The rest of the paper is organized as follows: Section II provides a literature review of classification and metric learning methods applied to suspicious naevi classification. Section III presents the proposed method, and Section IV details the experimental evaluations. This is followed by Section V, where the results are presented and their implications are discussed.

## II Related work

With the recent advances in deep learning, many domains of medical image analysis, including dermatology, have seen promising results.
Given the advantage of having public skin lesion datasets such as the International Skin Imaging Collaboration (ISIC) archive [6] available for analysis, there have been many works exploring deep learning methods for melanoma detection and skin lesion classification over the past few years. Some comprehensive reviews on the topic can be found in [7, 8, 9, 10, 11, 12] and [13]. However, we focus on the classification of ugly duckling lesions in our work, for which public annotations are currently unavailable.

### _Ugly duckling recognition_

Despite the advances in melanoma analysis, only a few previous works have focused on improving the identification of cutaneous ugly duckling lesions using deep learning methods. The use of wide-field images from total-body photography is popular among the few Computer-Aided Detection (CAD) works that have focused on detecting the ugly duckling sign [14, 15, 16, 17, 18]. Wide-field images are an advantage in ugly-duckling-related work because, by definition, an ugly duckling is a contextual observation made in comparison with neighbouring lesions on a body site, and wide-field images provide a way to include this contextual information. The common workflow in these proposed methods is to first extract the lesions out of the wide-field images and train the detectors with them. Labelling of the lesions might be performed before or after extraction.

Fig. 1: Visual lesion characteristics vary among patients. The figure shows sets of lesions from 3 different patients. Each lesion set is from the same body site of the particular patient, and ugly ducklings are marked with a 'UD' label.

With the use of wide-field images, even straightforward methods such as logistic regression models show promising results [14]. Modelling ugly duckling detection as outlier detection on wide-field images is also popular, with results demonstrating that patient-level analysis of wide-field images is advantageous. As proposed by the work of [15] and [18], autoencoders trained with the extracted lesions can be used successfully for ugly duckling outlier detection. According to the study by [16], deep convolutional neural networks with transfer learning also show promising results in identifying inter-lesion dependencies for the classification of ugly ducklings. Further indicating that well-optimized deep learning methods can be efficiently used for accurate assessment of suspicious pigmented lesions, [19] analysed individual lesion images instead of wide-field images to identify ugly ducklings. They learn a patient-specific contextual embedding by modelling the dependencies among lesions using a Transformer encoder, which is then used to perform patient-level and lesion-level predictions concurrently. Similarly, [20] shows that feature extraction of suspicious lesions with variational autoencoders, followed by random forest and artificial neural network classifiers, can obtain high classification accuracy. All of these works demonstrate that patient-level analysis, or the incorporation of patient-specific contextual information, is advantageous for identifying ugly duckling lesions accurately. Most of the above studies developed a ranking system to separate ugly ducklings from normal lesions [14, 15, 16, 18]. A feature vector or an embedding for each lesion was extracted from the trained models, and the distances between these embeddings were used to determine a separating threshold or a ranking score for ugly ducklings.
However, some studies pooled all of the extracted lesions from the wide-field images together to train the models, resulting in patient-level contextual information being lost. Some thresholds were set based on clearly defined melanomas only [18]. These choices might have resulted in a threshold of separation that is more lesion-level than patient-level, leading to the misidentification of ugly ducklings whose features were not clearly distinguishable during inference with the models. Furthermore, based on the definition of an ugly duckling, the datasets used for these studies would need specific ugly duckling annotations. While some studies labelled the ugly ducklings in their datasets with the help of board-certified dermatologists, a few others simply used existing labels of malignant melanoma in place of ugly ducklings. However, malignant melanoma annotations cannot be used interchangeably with ugly duckling labels, given that not all melanomas present ugly duckling characteristics [21]. Thus, studies using malignant melanoma labels might have resulted in misclassification of actual ugly duckling lesions. Therefore, expert supervision is necessary either to annotate datasets correctly with ugly duckling labels or to evaluate the ranked lesions to verify high agreement. This might be costly, as both time and expert supervision are required. In addition, there are currently no public datasets that are specifically annotated with ugly duckling labels. These might be the reasons why few deep learning based methods have been explored for ugly duckling recognition.

### _Metric learning for skin lesion classification_

In the current literature, no metric learning approaches have been employed for the identification of ugly duckling lesions specifically. However, a few works on melanoma detection and skin lesion segmentation have explored the use of metric learning. As demonstrated by past work, the traditional triplet loss is capable of achieving higher intra-class compactness and stronger inter-class separability in skin lesion and skin disease classification [22, 23]. A few studies have also improved skin lesion segmentation by using triplet networks to learn pixel embeddings in the metric space [24, 25]. In addition, [26] proposed a trained triplet network to gather similar images from publicly available skin lesion datasets to support skin lesion diagnosis with content-based image retrieval. Apart from that, most of the works using metric learning for skin lesion classification are focused on solving the class-imbalance problem in the respective skin lesion datasets used [27]. In cases of heavy imbalance, traditional classifiers might maintain a high overall accuracy because of the bias introduced by the large number of samples from the majority class. Thus, complex balancing techniques have to be introduced within traditional classifiers to improve prediction accuracy on the minority class [28, 29]. Contrastive learning, especially with triplet networks and the traditional triplet loss itself, is capable of addressing this class imbalance by generating a balanced contribution from each class in the mined triplets. However, samples within the same class might present large visual differences, causing the feature distribution of that class to spread over a large space. By employing a class-center involved triplet loss, this distribution can be made more compact in the learned feature space, as shown by previous work [30, 31, 32].
Using a pretrained triplet network capable of generating the embeddings for the samples, the class centers can be calculated, which are then used in place of the positive and negative samples within the mined triplet. This approach ensures that the representations of samples from the same class are driven closer towards the class center. [31] extends the center-oriented triplet loss with an adaptive margin value, where the margin is automatically adjusted according to the cluster separation instead of being fixed during training. Despite these advances, no deep metric learning models have enforced a patient-specific separation for skin lesion classification focusing specifically on ugly duckling identification. Therefore, to the best of our knowledge, this is the first work employing a tiered and patient-specific triplet network for ugly duckling lesion classification.

## III Methods

### _Overview of the proposed architecture_

Publicly available skin lesion datasets suffer from high class imbalance, as melanoma incidences are rare [33, 34, 35]. In order to accurately classify both normal and rare ugly duckling lesions, while accounting for both local and global feature representations, we propose a two-stage classification system. The overview of the proposed framework is presented in Figure 2. The first stage involves training a quadruplet network, and the second stage utilizes the trained quadruplet network to generate a latent representation of samples in metric space and train a classifier on them.

#### III-A1 Classical triplet loss and triplet mining

To better describe our method, we first introduce the functionality of a classical triplet network. A triplet network is trained on multiple triplets sampled from a dataset, where each triplet contains an anchor sample (\(a\)), a positive sample (\(p\)) from the same class as the anchor, and a negative sample (\(n\)) from the opposite class. The traditional triplet loss then aims to minimize the \(a-p\) distance, while pushing the \(a-n\) distance to exceed the \(a-p\) distance by at least a margin \(\alpha\):

\[L_{triplet}=\frac{1}{N}\sum_{i=1}^{N}[d(a^{i},p^{i})-d(a^{i},n^{i})+\alpha]_{+} \tag{1}\]

where \([x]_{+}=\max(x,0)\) and \(d\) is the Euclidean distance between the lower-dimensional embeddings of two samples. For each training iteration, a batch of N samples is used to select random triplets. The triplet loss calculated per triplet instance is then averaged within that batch. The selection of triplets, which is referred to as 'triplet mining', can be performed either offline or online. In the offline triplet mining strategy, we form the triplets at the beginning of each epoch and feed batches of those triplets throughout the epoch. However, this is inefficient, as the number of triplets that need to be computed is high and not all of them contribute to the learning of the network. This can be alleviated by the online mining strategy, where useful triplets are mined on the fly for a mini-batch of samples. However, not all online triplets are 'valid', since a valid triplet must contain two samples of the same class (anchor and positive) and one sample of the other class (negative). To find the valid and useful triplets, strategies such as semi-hard mining and hard mining can be used. In hard mining, the negative sample is closer to the anchor than the positive, so the loss is positive and greater than \(\alpha\). In semi-hard mining, the negative sample is further from the anchor than the positive, but by less than the margin, so the loss is positive but smaller than \(\alpha\).
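As a concrete illustration, the PyTorch sketch below computes Equation 1 with online semi-hard mining inside one mini-batch; the tensor names, the explicit loops, and the zero fallback are simplifications for readability, not the paper's exact implementation.

```python
import torch

def triplet_loss_semihard(emb, labels, alpha=1.0):
    """Equation 1 with online semi-hard mining inside one mini-batch.

    emb:    (B, D) embeddings from the CNN backbone
    labels: (B,)   binary lesion labels (0 = normal, 1 = UD)
    """
    dist = torch.cdist(emb, emb)                   # pairwise Euclidean distances
    losses = []
    for a in range(len(labels)):
        pos = (labels == labels[a]).nonzero().flatten()
        neg = (labels != labels[a]).nonzero().flatten()
        pos = pos[pos != a]
        if len(pos) == 0 or len(neg) == 0:
            continue                               # no valid triplet for this anchor
        for p in pos:
            d_ap = dist[a, p]
            # semi-hard: negatives further than the positive, but within the margin
            mask = (dist[a, neg] > d_ap) & (dist[a, neg] < d_ap + alpha)
            cand = neg[mask]
            if len(cand) == 0:
                continue
            n = cand[torch.argmin(dist[a, cand])]  # hardest of the semi-hard negatives
            losses.append(torch.relu(d_ap - dist[a, n] + alpha))
    return torch.stack(losses).mean() if losses else emb.sum() * 0.0
```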
#### III-A2 Triplet Loss with patient-specific mining approach

In the first stage of our proposed method, we first implement a patient-specific triplet loss to train a feature extractor capable of capturing discriminative inter-patient features in the samples. The triplet network consists of a CNN backbone whose output is a 128-dimensional feature vector. The distances for the triplet loss are calculated between the corresponding embeddings within a mined triplet to update the CNN weights.

Fig. 2: The proposed DMT-Quadruplet architecture. In the first stage, a quadruplet network is trained with online patient-specific quadruplet mining and a dynamic margin. The trained quadruplet network is then used as a backbone feature extractor to train a simple CNN classifier in the second stage. During the testing phase, only the trained classifier network from the second stage is used for generating a lower dimensional embedding for each test image and performing binary classification.

In a naive triplet mining approach, the patient-level separation is disregarded, and the triplets are mined based only on their class labels. However, we consider the fact that there are two levels of contextual information in the particular problem of classifying skin lesions - lesion-level (intra-patient) and patient-level (inter-patient) differences. Therefore, we follow a patient-specific mining approach, as illustrated in Figure 3. The pseudo code in Algorithm 1 outlines the patient-specific triplet mining and training procedure of the triplet network. We start by using a custom batch-sampler to create mini-batches of samples. For this, we randomly select \(X\) individuals from the training dataset and randomly sample \(k\) images from each individual across both classes of lesions. This ensures that each mini-batch sees sufficient information about each individual's lesions. Then, during each epoch \(t\) of training, we consider only the samples belonging to a particular individual (\(x\)) to select the triplets for an iteration. To create the triplets, we first mine all pairs of anchor (\(a^{x}\)) and positive (\(p^{x}\)) samples. Then, for each anchor-positive pair, we create all possible triplet instances by considering each negative sample (\(n^{x}\)) of that individual in the mini-batch. For all such triplet instances created for an anchor-positive pair, a 128-dimensional embedding is then calculated using the most recently updated neural network backbone of the triplet network. For each embedding triplet, the triplet loss is computed using Equation 1, where the distance measure is the Euclidean distance between the embeddings. Based on the loss value of each triplet within the mini-batch, we then perform random-hard triplet mining to select a hard negative sample corresponding to each anchor-positive pair. This procedure is repeated for all individuals within that mini-batch to create a batch of useful triplets, effectively incorporating more patient-specific local information into the triplets. Finally, the backbone CNN parameters are updated according to the triplet loss calculated on the mined triplets. This patient-specific mining procedure ensures that the CNN will focus on learning better patient-level feature representations.
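The patient-level batch sampler can be sketched as follows; the values of X and k and the sample tuple layout are illustrative assumptions.

```python
import random
from collections import defaultdict

def patient_batches(samples, X=4, k=8):
    """Yield mini-batches of X patients x k images each, so that every
    batch contains enough lesions per individual for within-patient mining.

    samples: list of (image_path, lesion_label, patient_id) tuples;
    the tuple layout is an assumption for illustration.
    """
    by_patient = defaultdict(list)
    for s in samples:
        by_patient[s[2]].append(s)
    patients = [p for p, items in by_patient.items() if len(items) >= k]
    random.shuffle(patients)
    for i in range(0, len(patients) - X + 1, X):
        batch = []
        for p in patients[i:i + X]:
            batch.extend(random.sample(by_patient[p], k))  # k lesions of patient p
        yield batch
```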
#### III-B3 Tiered quadruplet loss

We further extend our patient-specific triplet loss into a Tiered Quadruplet (T-Quad) loss to incorporate more lesion-level global context into the training of the network. With this, our aim is to identify the similarities between the lesions of individuals, as well as the dissimilarities between individuals in the data. We model this as a two-tier problem, where Tier 1 is composed of the different individuals in the dataset, while Tier 2 is the collection of lesions of each individual (Figure 3).

Fig. 3: Patient-specific mining approach. The blue boxes indicate the mining of a triplet within a patient, where anchor and positive are from the same class, while negative is from the opposite class. For the Tiered Quadruplet loss, we mine a fourth sample (secondary-negative), as indicated by the green box. The secondary-negative can be of either class, but from a _different_ patient. For the Tiered Quadruplet with a Dynamic Margin, we calculate an online margin of separation for two distinct clusters of embeddings of an individual using KMeans.

The Tiered Quadruplet network follows the same sample mining procedure as in Algorithm 1, but we now mine a fourth sample for each \((a^{x},p^{x})\) pair. We refer to this additional sample as a secondary-negative (\(sn\)), which is mined from a different patient (\(y\)) in the mini-batch. This secondary-negative can be of the same class or the opposite class as the anchor (\(a^{x}\)). Similar to above, a hard secondary-negative sample is chosen randomly for each \((a^{x},p^{x})\) pair. Thus, the Tiered Quadruplet loss takes in quadruplets of samples in the form \((a^{x},p^{x},n^{x},sn^{y})\) for each batch, and the loss is computed as:

\[\begin{split}& L_{patient\;level^{i}}=d(a^{i},p^{i})-d(a^{i},n^{i})+ \alpha\\ & L_{lesion\;level^{i}}=d(a^{i},p^{i})-d(a^{i},sn^{i})+\beta\\ & L_{t-quad}=\frac{1}{N}\sum_{i=1}^{N}[L_{lesion\;level^{i}}+L_ {patient\;level^{i}}]\end{split} \tag{2}\]

where \(\beta\) (\(>\alpha\)) is a coarse margin of patient separation. Thus, with the tiered loss, the network learns a more global representation of lesions, as the Tier 2 classes (normal and UD lesions) are mutual across the Tier 1 classes (individuals).

#### III-B4 Dynamic margin involved tiered quadruplet loss

In naive triplet mining strategies, the margin of separation \(\alpha\) is set to a default value across all mini-batches of training. However, we argue that this margin should differ from one individual to another, based on the previously mentioned observation that phenotypic features vary between individuals. Therefore, based on our assumption that the metric learning benefits from patient-specific information to learn a better separation of lesion characteristics, we further extend our Tiered Quadruplet loss to include a patient-specific dynamic margin instead of a global fixed margin. The pseudo code in Algorithm 2 outlines the tiered quadruplet mining and the calculation of the patient-specific dynamic margin for training the quadruplet network. For each mini-batch of embeddings during training, we calculate a dynamic margin of separation between two drastically different clusters of embeddings of a patient (Figure 3). For each patient in the mini-batch, the embeddings are clustered, unsupervised, into two groups using the K-Means algorithm, and the distance between the two cluster centroids is used as the dynamic margin for mining \((a^{x},p^{x},n^{x})\) triplets from that particular patient for the considered mini-batch.
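A minimal sketch of this per-patient dynamic margin, assuming scikit-learn's KMeans and a (n_lesions, 128) embedding array; the function name is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def dynamic_margin(patient_embeddings: np.ndarray) -> float:
    """Cluster one patient's embeddings into two groups and return the
    distance between the two centroids, used as the margin alpha_x."""
    km = KMeans(n_clusters=2, n_init=10).fit(patient_embeddings)
    c0, c1 = km.cluster_centers_
    return float(np.linalg.norm(c0 - c1))

# Usage inside a mini-batch: one margin per patient.
emb = np.random.rand(16, 128)      # 16 lesions of one patient, 128-d embeddings
alpha_x = dynamic_margin(emb)
```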
Each patient-specific dynamic margin \(\alpha^{x}\) is used to compute the Dynamic Margin Tiered Quadruplet (DMT-Quad) loss at each iteration as:

\[\begin{split}& L_{patient\;level^{i}}=d(a^{i},p^{i})-d(a^{i},n^{i})+ \alpha^{\mathbf{x}}\\ & L_{lesion\;level^{i}}=d(a^{i},p^{i})-d(a^{i},sn^{i})+\beta\\ & L_{dmt-quad}=\frac{1}{N}\sum_{i=1}^{N}[L_{lesion\;level^{i}}+L _{patient\;level^{i}}]\end{split} \tag{3}\]

For mining the \(sn^{y}\) from a different patient, a fixed default margin \(\beta\) is used, because calculating dynamic margins between patients would make the quadruplet mining process excessively computationally heavy.

#### III-B5 Classification Network

In the second stage of our proposed method, we train a CNN classifier to identify normal and ugly duckling lesions. As illustrated in Figure 2, the trained quadruplet network acts as the feature extractor of the CNN classifier, where a fully-connected layer is added at the end to produce a 2-dimensional output that can be used for binary classification. All layers of the quadruplet network are frozen, and only the parameters of the last layer are updated in the classifier.
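A sketch of this second-stage classifier, assuming a PyTorch backbone that outputs the 128-dimensional embedding; freezing everything except the new final layer follows the description above.

```python
import torch.nn as nn

def build_classifier(quadruplet_backbone: nn.Module) -> nn.Module:
    """Stage-2 classifier: the trained quadruplet backbone (which outputs
    128-d embeddings) is frozen, and only a new 2-way head is trained."""
    for param in quadruplet_backbone.parameters():
        param.requires_grad = False          # freeze all backbone layers
    head = nn.Linear(128, 2)                 # the only trainable parameters
    return nn.Sequential(quadruplet_backbone, head)
```

In the experiments (Section IV), this head is trained with a cross-entropy loss and the Adam optimizer.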
## IV Experimental Evaluation

### _Data preprocessing_

The deidentified dataset used in this study was provided by the Dermatology Research Centre located within the Frazer Institute, The University of Queensland. It contains dermoscopy images of skin lesions from 59 participants that were originally collected under the Brisbane Naevus Morphology Study (BNMS) [36]. The data was collected from participants at multiple visits over a period of 3 years, from October 2009 to October 2012. For each participant, all naevi (\(\geq 2mm\) and \(\geq 5mm\)) at 16 body sites (excluding scalp, buttocks, mucosal surfaces, and genitalia of both sexes) had dermoscopic images taken with the FotoFinder(r) (Bad Birnbach, Germany) imaging system. For the purpose of this study, all lesions were annotated as normal or ugly duckling by a board-certified dermatologist (HPS) with 30+ years of experience, using an in-house annotation software. The software presents all lesions recorded per visit for a selected individual on the screen, allowing the annotator to observe and compare the lesions with each other rather than observe a single lesion at a time. Using this software, ugly duckling labels were assigned to lesions of each visit based on visually contrasting characteristics. To reduce the effect of large data imbalances, we discarded the records from participants who had no ugly ducklings recorded in any visit. For participants with ugly ducklings in some visits only, we also discarded the visits which had no ugly ducklings recorded. If a lesion of a participant had been labelled as UD at any visit, all lesions recorded from that visit were retained. If a lesion was labelled UD at one visit, the labelling was kept consistent across all retained visits to avoid confusing the feature learning of the neural network with inconsistent labels. For normal lesions, all versions recorded across the retained visits were kept.

```
Input: training data D, number of epochs e, patient-level margin beta,
       network hyperparameters
Initialize: t, X, k
while t < e do
    initialize the list of quadruplets Q
    randomly select k samples from each of X individuals for iteration t
    for x in X do
        // find the dynamic margin from all embeddings of individual x
        generate embeddings for all samples of x
        apply KMeans to all embeddings, clustering them into 2 classes
        take the distance between the two cluster centroids as alpha_x
        mine all anchor-positive pairs (A^x, P^x)
        for (a^x, p^x) in (A^x, P^x) do
            // within-patient term
            mine all negative samples N^x from individual x
            create all triplets (a^x, p^x, n^x) for n^x in N^x
            for each triplet (a^x, p^x, n^x)_i do
                generate embeddings (w_a^x, w_p^x, w_n^x)_i with the current
                model parameters
                calculate the triplet loss tl_i as in Equation 3, using the
                dynamic margin alpha_x
            end for
            from the subset of triplets where tl > 0, randomly select one
            triplet (a^x, p^x, n^x)
            // cross-patient term
            mine all secondary-negative samples SN^y where y != x
            create all triplets (a^x, p^x, sn^y) for sn^y in SN^y
            for each triplet (a^x, p^x, sn^y)_j do
                generate embeddings (w_a^x, w_p^x, w_sn^y)_j with the current
                model parameters
                calculate the triplet loss tl_j as in Equation 3, using the
                patient-level margin beta
            end for
            from the subset of triplets where tl > 0, randomly select one
            triplet (a^x, p^x, sn^y)
            add the selected quadruplet (a^x, p^x, n^x, sn^y) to Q
        end for
    end for
    calculate the tiered quadruplet loss tl_t for the iteration with all
    quadruplets in Q, as in Equation 3, with beta and the corresponding alpha_x
    backpropagate to update the CNN parameters
end while
```
**Algorithm 2** Pseudo-code for the DMT-Quadruplet loss

The final dataset had a total of 10,493 images from 37 participants, with 10,174 normal lesions and 319 UDs. Thus, the ratio between UD and normal lesions is approximately 1:32. Of the final dataset, 21 (\(\sim\)57%) participants were male, and all participants were of Caucasian heritage. This dataset derived from the BNMS data will be referred to as the SkinUD dataset hereafter. The SkinUD dataset was split into training, validation, and test sets by patients in order to avoid data leakage. The images were originally 768\(\times\)576 pixels in size. During training, images are first center-cropped to 512\(\times\)512 pixels and then resized to 96\(\times\)96 pixels. Further, random horizontal and vertical flipping are applied as data augmentations. In addition, to alleviate the effect of the major class imbalance, we manually oversample UD lesions in the training set by duplicating each image 10-fold.
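The preprocessing pipeline above maps directly onto torchvision transforms; the following is a sketch under the stated crop/resize/flip settings, not the authors' released code.

```python
from torchvision import transforms

# Training-time preprocessing for 768x576 dermoscopy images:
# center-crop to 512x512, resize to 96x96, random flips as augmentation.
train_transform = transforms.Compose([
    transforms.CenterCrop(512),
    transforms.Resize(96),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
])

# Minority-class oversampling: duplicate each UD sample 10-fold.
def oversample_ud(samples, factor=10):
    """samples: list of (image_path, label, patient_id); label 1 = UD.
    The tuple layout is an assumption for illustration."""
    return samples + [s for s in samples if s[1] == 1] * (factor - 1)
```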
In addition to the SkinUD dataset, we train and evaluate the performance of our proposed method on the ISIC2020 dataset [1]. The ISIC2020 dataset contains 33,126 images of 2,056 individual patients, with 32,542 benign and 584 malignant melanoma images. Although malignant melanoma and ugly duckling labels are not interchangeable, as mentioned previously, we consider the melanoma labels as ugly ducklings for the ISIC2020 dataset for the purpose of evaluation. In addition, as malignant melanomas are rare, ISIC2020 also has a large class imbalance (ratio 55.7:1). This allows us to test whether our proposed DMT-Quadruplet network is still capable of classifying the minority class accurately.

### _Experiment setup_

The quadruplet network that serves as the main backbone of our proposed method is built on a CNN feature extractor. By default, this is a ResNet18 model whose last layer has been replaced by a fully-connected layer that provides a 128-dimensional embedding as the output. For the quadruplet network, the Adam optimizer is adopted during training with a learning rate of \(0.0001\). The default margin values for \(\alpha\) and \(\beta\) are \(1.0\) and \(1.5\), respectively. During classification, the original last layer of the backbone CNN is added back in to provide a 2-dimensional output. The classifier is also trained with an Adam optimizer, with a Cross Entropy loss and a learning rate of \(0.0001\). We compare our classification results with ResNet18 as a baseline traditional classifier. We also test a few different backbone architectures, including ResNet34, VGG16, and EfficientNetB0. All experiments were carried out with PyTorch on an NVIDIA Tesla V100 32GB GPU. Both the quadruplet and classifier networks were trained with a batch size of 32, until the best accuracy was obtained with the least disparity between the training and validation sets. To perform a comprehensive performance comparison of the final classification task, we use seven metrics: sensitivity, specificity, total accuracy, ROC AUC, and macro-weighted precision, recall, and F1 score.

## V Results and Discussion

### _Effectiveness of the patient-specific Tiered Quadruplet approach_

This section evaluates the effectiveness of the proposed approach on the SkinUD dataset. We mainly focus on comparing the sensitivity measure, as that is the metric that corresponds to the classification accuracy on UD lesions. Table I shows that the Patient-specific Tiered Quadruplet with Dynamic Margin (DMT-Quadruplet) has the highest sensitivity for the SkinUD dataset, at 71.2% accuracy. This is 54% better than the baseline ResNet18 classifier, and 37% better than the naive triplet classifier. Further, as illustrated in Figure 4, all triplet-based classifiers progressively improve upon the sensitivity measure, all reporting higher accuracy on classifying UD lesions than the baseline. This causes a trade-off between specificity and sensitivity as the bias caused by the majority class is reduced, resulting in specificity and overall accuracy declining slightly. However, the specificity and overall accuracy of the DMT-Quadruplet are lower than those of the baseline ResNet18 by only 4.8% and 3.9%, respectively. Moreover, the DMT-Quadruplet reports the highest recall and ROC AUC as well. In addition, it can be seen that both the Tiered Quadruplet and the DMT-Quadruplet outperform the patient-specific triplet in sensitivity. This further highlights the importance of our Tiered Quadruplet loss, as it is capable of incorporating more lesion-level information with the help of the secondary-negative, which provides the network with an additional global view of lesion features. Overall, coupled with the patient-specific mining approach, computing an online dynamic margin of separation boosts the performance of the DMT-Quadruplet network.
This could be because a fixed margin would have inaccurately discarded some of the useful triplets from individuals during the online mining process. Having a patient-specific online dynamic margin helps the mining process pick more accurate and useful triplets from each individual, feeding better representations to the final loss at each iteration. Further, by setting a large value for the patient-level separating margin \(\beta\) of the DMT-Quadruplet, more weight is placed on learning global features than on the finer local features during training. This allows the network to remain patient-agnostic at the lesion level while separating lesions at the patient level accurately.

The effectiveness of the DMT-Quadruplet approach is further demonstrated when we visualize the separation of lesions at both the patient level and the lesion level. We applied the t-SNE algorithm to the 128-dimensional embeddings generated for samples in the test set to obtain 2-d embeddings and illustrate the cluster separations, as shown in Figure 5. The manifold of the Naive triplet does not show a good separation for UD lesions (Figure 5(a)), with UDs spread over the manifold, but the DMT-Quadruplet shows a better separation by clustering UD lesions closer together (Figure 5(b)).

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline **Backbone CNN** & **Specificity** & **Sensitivity** & **Recall** & **Precision** & **F1-score** & **AUC** & **Accuracy** \\ \hline ResNet18 & 91.71.41 & 72.58.59 & **81.42.12** & 62.218.08 & 66.360.03 & 90.240.00 & 90.381.2 \\ ResNet34 & 91.02.55 & 63.57 & 78.31.84 & 60.915.01 & 64.322.06 & 86.711.0 & 89.392.4 \\ VGG16 & 87.70.4 & 69.62.76 & 78.211.1 & 58.882.2 & 61.540.04 & 89.21.1 & 87.705.5 \\ EfficientNetB0 & **95.91.2** & 56.64 & 77.22.36 & **67.45.19** & **78.26.04** & **91.01.04** & **94.00.08** \\ DenseNet169 & 90.78.30 & 64.89.55 & 77.28.34 & 60.720.0 & 63.822.3 & 87.720.0 & 89.392.6 \\ \hline \end{tabular} \end{table} TABLE II: Classification performance on the test set with different backbone CNNs of the DMT-Quadruplet

Fig. 4: Performance of the triplet-based classifiers compared to the baseline. The sensitivity measure (classification accuracy on UDs) significantly increases with improvements to the triplet network, and the final DMT-Quadruplet network performs best with a reduced bias.

Additionally, as shown in Figure 6, the two networks behave differently when differentiating between all lesions of the same patient, where the DMT-Quadruplet shows a clearer clustering.
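The visualization step is a standard application of t-SNE; a sketch assuming scikit-learn and a placeholder (n, 128) test-set embedding array:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

emb = np.random.rand(500, 128)            # 128-d test-set embeddings (placeholder)
labels = np.random.randint(0, 2, 500)     # 0 = normal, 1 = UD (placeholder)

emb2d = TSNE(n_components=2).fit_transform(emb)
plt.scatter(emb2d[labels == 0, 0], emb2d[labels == 0, 1], s=5, label="normal")
plt.scatter(emb2d[labels == 1, 0], emb2d[labels == 1, 1], s=20, label="UD")
plt.legend()
plt.show()
```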
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline & **Method** & **Specificity** & **Sensitivity** & **Recall** & **Precision** & **F1-score** & **AUC** & **Accuracy** \\ \hline & ResNet18 & 96.41.3 & 46.26.26 & 71.352.8 & 66.34.30 & 68.12.62 & 84.04.00 & 94.01.4 \\ & ResNet34 & 96.34.06 & 47.44.4 & 71.92.3 & 66.12.7 & 68.42.4 & 88.781.1 & 94.04.08 \\ & VGG16 & 94.90.9 & 54.64.00 & 74.74.5 & 63.90.2 & 67.41.0 & **90.74.14** & **92.74.05** \\ & EfficientNetB0 & **97.64.02** & 40.40.5 & 69.00.2 & **68.98.08** & **68.94.03** & 89.78.1 & **95.08.00** \\ \hline \multirow{2}{*}{**Traditional metric losses**} & Naive Siamese & 93.43.61 & 60.12.5 & 76.34.11 & 62.540.30 & 63.36.14 & 86.361.4 & 91.740.5 \\ & Naive Triplet & 93.91.6 & 51.92.6 & 76.43.5 & 63.31.0 & 66.940.6 & 88.58.0 & 92.02.14.1 \\ \hline \multirow{3}{*}{**Patient-specific metric losses**} & Patient-specific Triplet (PS-Triplet) & 94.53.1 & 58.864.1 & 73.92.4 & 64.53.0 & 68.824.7 & 87.160.9 & 92.322.5 \\ & Patient-specific Tiered Quadruplet (T-Quadruplet) & 93.51.5 & 63.047.2 & 78.34.2 & 63.34.09 & 67.240.9 & 88.04.0 & 91.741.2 \\ & Patient-specific Tiered Quadruplet + Dynamic Margin (DMT-Quad) & 91.74.1 & **71.245.9** & **81.442.2** & 62.240.8 & 66.340.9 & 90.240.0 & 90.381.3 \\ \hline \end{tabular} \end{table} TABLE I: Classification performance of triplet-based classifiers on the test set

Thus, even though the mining of triplets for the DMT-Quadruplet was patient-specific, it is capable of behaving in a patient-agnostic manner while separating lesions within an individual equally well.

### _Analyses on different feature extraction backbones for DMT-Quadruplet_

Table II presents the classification performance of the DMT-Quadruplet network with different backbone CNNs. The results indicate that EfficientNetB0 performs better at classifying the majority class, with a higher specificity and overall accuracy. However, it has a lower sensitivity, indicating poorer classification of the minority (UD) class. ResNet18 reports the highest sensitivity, while maintaining high specificity, AUC and overall accuracy. Thus, ResNet18 can be considered the most suitable feature extractor for the DMT-Quadruplet.

### _Analyses on different classifiers for classification of DMT-Quadruplet Embeddings_

For the embeddings generated by the trained DMT-Quadruplet, we tested different classifiers for stage 2 of the proposed architecture. As presented in Table III, the results indicate that a Random Forest classifier with a depth of 4 performs best in classifying UDs (sensitivity 78.4%), with the highest recall (83.7%). However, our original CNN classifier remains the best performing for all other metrics.

### _Classification on ISIC2020 dataset_

As patient-specific data is available in the ISIC2020 dataset, we trained our DMT-Quadruplet on it separately. Evaluation on the test set indicates that the trained model can achieve more than 70% accuracy on both the benign and melanoma classes. We present the comparison of the performance of our method with existing work in Table IV. For this comparison, we only considered works where binary classification was performed on the ISIC2020 dataset. Further, for a fair comparison, we omitted methods using a combination of ISIC datasets, methods with ensemble models, and methods not reporting sensitivity and specificity.
While [40] reported the highest sensitivity and specificity among the considered works, it should be noted that they address the issue of class imbalance in ISIC2020 by generating synthetic skin lesion images using an improved Generative Adversarial Network (GAN). This results in a more balanced dataset and better classification accuracy. However, our method only uses duplication of the minority class to address the class imbalance, yet achieves \(\geq\)70% accuracy for both classes with the DMT-Quadruplet. Further, the results of [38] and [39] show that there is a high trade-off between sensitivity and specificity. While both these works obtained a high specificity, their sensitivity was considerably low, indicating poorer performance in classifying the melanoma lesions.

Fig. 5: The 2D separation manifold in metric space for all samples in the test set using (a) the Naive triplet and (b) the DMT-Quadruplet to generate embeddings. The 2D embeddings were generated by applying the t-SNE algorithm to the 128-dimensional output of the triplet networks.

Fig. 6: The 2D separation manifold with image tiles in metric space for the same individual in the test set using (a) the Naive triplet and (b) the DMT-Quadruplet to generate embeddings. For ease of interpretation, not all image tiles for normal lesions have been plotted. However, all UD lesion tiles are plotted.

\begin{table} \begin{tabular}{|c|l|c|c|c|c|} \hline **Reference** & **Method** & **Model** & **Sen** & **Spe** & **Acc** \\ \hline [37] & SMOTE-Tomek Links sampling for data imbalance + Transfer learning & ResNet50 & 99.7 & 55.67 & 93.96 \\ [38] & Long attention + Multi-scale learning & MSLANet & 59.6 & 97.4 & 95.6 \\ [39] & Semi-supervised color mapping + multi-scale attention mechanism & SSGNet & 64.1 & 94.67 & 89.68 \\ [40] & WGAN for data imbalance + mobile deep learning & Melanlysis & 92.5 & 89.5 & 94 \\ & Our proposed method (patient-specific tiered quadruplet with dynamic margin) & DMT-Quadruplet & 71.4 & 73.3 & 72.7 \\ \hline \end{tabular} \end{table} TABLE IV: Performance comparison on the ISIC2020 dataset for binary classification

However, our proposed DMT-Quadruplet method achieves similar accuracy for both classes. Thus, if we applied more advanced data augmentation techniques to address the class imbalance in ISIC2020, the DMT-Quadruplet would further improve in performance. Altogether, the results indicate that our proposed DMT-Quadruplet model is generalisable across skin lesion datasets with patient-specific metadata. Although previous works on metric learning have implemented similar hierarchical triplet networks with a layered ontology, the datasets used in training these models had mutually exclusive sub-classes (e.g., types of shoes and types of trousers in the Fashion60 dataset) [41]. However, our problem is unique in the sense that it has a mutual ontology, where the sub-classes (lesion classes within an individual) are the same as the classes of the overall dataset. For the whole dataset, there are only two sub-classes of lesions (normal and UD), although they differ slightly from one individual to another. Thus, our experimental evaluation indicates that our DMT-Quadruplet loss captures inclusive intra-class and inter-class relationships effectively and accurately.

### _Limitations_

It must be noted that the manual assessment and annotation of ugly duckling lesions are subject to inter-observer variability.
Based on expertise and skill, dermatologists may vary among themselves in what they identify as an ugly duckling lesion among normal lesions. For our study, we considered annotations by only one board-certified dermatologist. For less biased ground-truth annotations, future work could consider a majority vote on ugly ducklings from several dermatologists. Further, our SkinUD dataset comprises an entirely Caucasian population. Thus, our work may lack generalisability outside Caucasian individuals, as populations with skin of colour may present ugly duckling features differently. In people with skin of colour, a highly pigmented lesion might not always show signs of an ugly duckling. Thus, our work could benefit from incorporating data from skin of colour. However, there are currently no public skin lesion datasets that include non-Caucasian individuals, nor any with annotations of ugly duckling lesions. In addition, due to the lack of data and manual annotation resources, our SkinUD dataset is considerably small, with only 37 participants. A larger sample size with annotations would help improve any future work.

## VI Conclusion

In this work, we present a novel patient-specific metric learning method for improved classification of ugly duckling lesions. For this, we implement a quadruplet sampling strategy that enables the network to learn features from two tiers of information, where tier 1 is lesion features between individuals, and tier 2 is lesion features within an individual. Considering that our problem is unique, with mutual sub-classes between tier 1 individuals, we introduce a tiered quadruplet network to incorporate more global context for network training. We further improve our tiered quadruplet network by including a patient-specific dynamic margin of separation to drive the network to mine tailored and useful triplets from individuals. The effectiveness of our proposed approach is demonstrated by the extensive experimental results. Our Tiered Quadruplet network with a Dynamic margin (DMT-Quadruplet) is shown to effectively capture global lesion-level features, while learning finer, patient-level differences as well. In addition, our optimized approach surpasses traditional classification methods for identifying lesions of the minority class, while also being capable of handling datasets with major class imbalances. As future work, a class-center based triplet loss can be incorporated into the tiered quadruplet network to validate whether the class imbalance can be further alleviated. In conclusion, with our two-stage pipeline combining a triplet-based feature extractor and a classifier, we can effectively separate lesions of an individual and accurately classify ugly duckling lesions. In clinical application, our method will be particularly useful for patients who have many naevi (\(>\)100), as assessing each lesion individually is time consuming and unrealistic for a clinician. Applying this alongside an ABCDE algorithm might provide an image triage so that the dermatologist/clinician only needs to assess the most suspicious lesions, saving time and effort. As such, the proposed method can successfully assist clinicians in early melanoma detection, as ugly duckling lesions are an indicator of a potential malignant melanoma developing.

## VII Conflicts of interest

HPS is a shareholder of MoleMap NZ Limited and e-derm consult GmbH and undertakes regular teledermatological reporting for both companies.
HPS is a Medical Consultant for Canfield Scientific Inc, Blaze Bioscience Inc, and a Medical Advisor for First Derm. ## VIII Author Contributions **Nathasha Naranpanawa:** Conceptualization, Data Curation, Methodology, Software, Validation, Formal analysis, Writing - Original Draft, Writing - Review & Editing, Visualization **H. Peter Soyer:** Conceptualization, Data Curation, Resources, Writing - Review & Editing **Adam Mothershaw:** Software, Resources, Writing - Review & Editing **Gayan K. Kulatilleke:** Writing - Review & Editing **Zongyuan Ge:** Writing - Review & Editing **Brigid Betz-Stablein:** Conceptualization, Data Curation, Writing - Review & Editing, Supervision **Shekhar S. Chandra:** Conceptualization, Methodology, Writing - Review & Editing, Supervision ## IX Funding The images used in this study are derived from the BNMS - Brisbane Naevus Morphology Study. This BNM Study was approved by the Metro South Human Research Ethics Committee (approval #HREC/09/QPAH/ 162, 26 August 2009) and The University of Queensland (approval #2009001590, 14 October 2009) and conducted in accordance with the Declaration of Helsinki. Participants of the BNM Study provided written consent after receiving a Participant Information and Consent Form. The BNM Study was funded by the Australian National Health and Medical Research Council (NHMRC) (project grants APP1062935, APP1083612) and the Centre of Research Excellence for the Study of Naevi (grant no. APP1099021). The above mentioned funding sources of the BNM Study had no involvement in the work presented in this paper.
2304.00137
Measurement of the cosmic p+He energy spectrum from 50 GeV to 0.5 PeV with the DAMPE space mission
Recent observations of the light component of the cosmic-ray spectrum have revealed unexpected features that motivate further and more precise measurements up to the highest energies. The Dark Matter Particle Explorer is a satellite-based cosmic-ray experiment that has been operational since December 2015, continuously collecting data on high-energy cosmic particles with very good statistics, energy resolution, and particle identification capabilities. In this work, the latest measurements of the energy spectrum of proton+helium in the energy range from 46 GeV to 464 TeV are presented. Among the most distinctive features of the spectrum, a spectral hardening at 600 GeV has been observed, along with a softening at 29 TeV measured with a 6.6{\sigma} significance. Moreover, the detector features and the analysis approach allowed for the extension of the spectral measurement up to the sub-PeV region. Even if with small statistical significance due to the low number of events, data suggest a new spectral hardening at about 150 TeV.
DAMPE Collaboration, F. Alemanno, C. Altomare, Q. An, P. Azzarello, F. C. T. Barbato, P. Bernardini, X. J. Bi, I. Cagnoli, M. S. Cai, E. Casilli, E. Catanzani, J. Chang, D. Y. Chen, J. L. Chen, Z. F. Chen, P. Coppin, M. Y. Cui, T. S. Cui, Y. X. Cui, H. T. Dai, A. De Benedittis, I. De Mitri, F. de Palma, M. Deliyergiyev, A. Di Giovanni, M. Di Santo, Q. Ding, T. K. Dong, Z. X. Dong, G. Donvito, D. Droz, J. L. Duan, K. K. Duan, R. R. Fan, Y. Z. Fan, F. Fang, K. Fang, C. Q. Feng, L. Feng, M. Fernandez Alonso, J. M. Frieden, P. Fusco, M. Gao, F. Gargano, K. Gong, Y. Z. Gong, D. Y. Guo, J. H. Guo, S. X. Han, Y. M. Hu, G. S. Huang, X. Y. Huang, Y. Y. Huang, M. Ionica, L. Y. Jiang, Y. Z. Jiang, W. Jiang, J. Kong, A. Kotenko, D. Kyratzis, S. J. Lei, W. H. Li, W. L. Li, X. Li, X. Q. Li, Y. M. Liang, C. M. Liu, H. Liu, J. Liu, S. B. Liu, Y. Liu, F. Loparco, C. N. Luo, M. Ma, P. X. Ma, T. Ma, X. Y. Ma, G. Marsella, M. N. Mazziotta, D. Mo, M. Muñoz Salinas, X. Y. Niu, X. Pan, A. Parenti, W. X. Peng, X. Y. Peng, C. Perrina, E. Putti-Garcia, R. Qiao, J. N. Rao, A. Ruina, Z. Shangguan, W. H. Shen, Z. Q. Shen, Z. T. Shen, L. Silveri, J. X. Song, M. Stolpovskiy, H. Su, M. Su, H. R. Sun, Z. Y. Sun, A. Surdo, X. J. Teng, A. Tykhonov, J. Z. Wang, L. G. Wang, S. Wang, S. X. Wang, X. L. Wang, Y. Wang, Y. F. Wang, Y. Z. Wang, Z. M. Wang, D. M. Wei, J. J. Wei, Y. F. Wei, D. Wu, J. Wu, L. B. Wu, S. S. Wu, X. Wu, Z. Q. Xia, H. T. Xu, J. Xu, Z. H. Xu, Z. L. Xu, E. H. Xu, Z. Z. Xu, G. F. Xue, H. B. Yang, P. Yang, Y. Q. Yang, H. J. Yao, Y. H. Yu, G. W. Yuan, Q. Yuan, C. Yue, J. J. Zang, S. X. Zhang, W. Z. Zhang, Yan Zhang, Yi Zhang, Y. J. Zhang, Y. L. Zhang, Y. P. Zhang, Y. Q. Zhang, Z. Zhang, Z. Y. Zhang, C. Zhao, H. Y. Zhao, X. F. Zhao, C. Y. Zhou, Y. Zhu
2023-03-31T21:22:52Z
http://arxiv.org/abs/2304.00137v5
# Measurement of the cosmic p+He energy spectrum from 46 GeV to 316 TeV with the DAMPE space mission

###### Abstract

Recent observations of the light component of the cosmic-ray spectrum have revealed unexpected features that motivate further and more precise measurements up to the highest energies. The Dark Matter Particle Explorer (DAMPE) is a satellite-based cosmic-ray experiment that has been operational since December 2015, continuously collecting data on high-energy cosmic particles with very good statistics, energy resolution, and particle identification capabilities. In this work, the latest measurements of the energy spectrum of proton+helium in the energy range from 46 GeV to 316 TeV are presented. Among the most distinctive features of the spectrum, a spectral hardening at \(\sim\)600 GeV has been observed, along with a softening at \(\sim\)29 TeV measured with a 6.6\(\sigma\) significance. Moreover, by measuring the energy spectrum up to 316 TeV, a strong link is established between direct space-based and indirect ground-based cosmic-ray measurements.

The Dark Matter Particle Explorer (DAMPE) is a space-based particle and gamma-ray detector that has been operational since December 2015. It is designed to observe cosmic radiation up to \(\sim\)10 TeV for photons and e\({}^{-}\) + e\({}^{+}\), and hundreds of TeV for protons and ions, while searching for indirect signatures of dark matter. The instrument consists of four sub-detectors: a plastic scintillator detector (PSD), designed to discriminate electrons from gamma rays and measure the absolute charge of impinging particles. The PSD comprises 82 bars, divided into 2 orthogonal layers, which are composed of 2 planes of staggered bars each. Below the PSD, a silicon-tungsten tracker-converter (STK) is used to measure the charged particle direction, giving additional information on the charge and converting photons into electron-positron pairs (with the help of tungsten layers). A bismuth germanium oxide (BGO) imaging calorimeter measures the energy of the particle and separates hadronic from electromagnetic showers. The BGO calorimeter is made of 14 layers, with 22 BGO bars each, for a total depth of more than 31 radiation lengths and \(\sim\)1.6 nuclear interaction lengths. Finally, the neutron detector (NUD), composed of boron-loaded plastic scintillators, collects neutrons from hadronic showers, further refining the event identification. DAMPE has a deep calorimeter, large acceptance, and good energy resolution (\(\sim\)1.5% for electrons and \(\sim\)30% for protons), making it an optimal instrument for measuring cosmic rays up to a few hundreds of TeV [19]. In this study, the very high energy spectrum for p+He up to 316 TeV is presented, using 6 years of flight data collected by DAMPE. By selecting a combined proton and helium sample, the event selection criteria can be relaxed (with respect to the case of p alone or He alone) while keeping low contamination, thus obtaining more statistics. Consequently, the higher statistics allow for an extension in energy up to 316 TeV, providing for the first time a bridge between space-based and ground-based results with relatively small uncertainties.

_Monte Carlo simulations_ - Monte Carlo (MC) simulations are needed to understand the response of the detector to different particles. In this analysis, the GEANT4 version 4.10.5 toolkit [20] is used along with the FTFP_BERT physics list\({}^{1}\) for protons between 10 GeV and 100 TeV and helium nuclei between 10 GeV and 500 TeV.
The physics list EPOS-LHC [21] is used for the energy interval 100 TeV - 1 PeV for protons and 500 TeV - 1 PeV for helium, by linking them to GEANT4 with the Cosmic Ray Monte Carlo (CRMC) package\({}^{2}\) [22]. Before launching DAMPE into space, several beam tests were performed at CERN, using ion beams of 40 GeV/n, 75 GeV/n and 400 GeV/n [23; 24; 25]. The data taken in the beam tests were compared with the simulations, showing a good agreement. The simulated events are initially generated with an isotropic spectrum, following an E\({}^{-1}\) dependence, and then re-weighted during the analysis according to an E\({}^{-2.6}\) power law, following both theoretical expectations and experimental observations. As detailed later on, the exact shape of the energy spectrum used to weigh MC events negligibly affects our analysis results. Additional MC data are produced with alternative hadronic interaction models. Specifically, helium nuclei are simulated with FLUKA 2011.2x [26], which uses the DPMJET3 model [27; 28; 29], while GEANT4-QGSP_BERT is used for protons. The spectrum is computed anew using these MC samples, with the difference between the two spectra providing an estimate for the systematic uncertainty from the hadronic interaction model.

Footnote 1: [https://geant4.web.cern.ch/node/302](https://geant4.web.cern.ch/node/302)

Footnote 2: [https://web.ikp.kit.edu/rulrich/crmc.html](https://web.ikp.kit.edu/rulrich/crmc.html)

_Event Selection_ - In this study, 72 months of flight data taken between January 2016 and December 2021 are used. The events potentially affected by the South Atlantic Anomaly (SAA) region are excluded from the analysis. From this dataset, an event preselection is applied first, followed by a selection of p or He particles. This procedure is applied both to MC and flight data. After subtracting the instrumental dead time, which is 3.0725 ms per event (\(\sim\)18% of the operation time), the on-orbit calibration time (\(\sim\)1.6%), a giant solar flare period between September 9 and September 13, 2017\({}^{3}\), and the SAA passage time (\(\sim\)5%) [30], a total live time of \(\sim\)14.5\(\times\)10\({}^{7}\) s remains, corresponding to \(\sim\)76% of the total operation time.

Footnote 3: [https://solarflare.njit.edu/datasources.html](https://solarflare.njit.edu/datasources.html)

_(i) Preselection_ - The preselection is based mainly on the measurements performed by the BGO calorimeter, according to the following criteria:

* The energy deposited by a Minimum Ionizing Particle (MIP) in a BGO bar is expected to be \(\simeq\) 23 MeV. The activation of the high energy trigger (HET) is required, with the condition of an energy deposition larger than \(\sim\) 10 MIPs in the first 3 BGO layers and larger than \(\sim\) 2 MIPs in the fourth layer [31]. Events that are able to initiate a shower at the top of the calorimeter will satisfy this condition.
* Events with deposited energy in the calorimeter higher than 20 GeV are selected, to avoid the effect of the geomagnetic rigidity cutoff [32].
* The energy deposited in any single layer of the BGO calorimeter has to be lower than 35% of the total energy, in order to reject most of the events entering from the sides of the calorimeter.
* Additionally, a good lateral containment of the shower inside the calorimeter is required, by asking for the shower axis to be contained in a central region covering 93% of the calorimeter width. Furthermore, events whose maximum energy deposition occurs at the lateral edges of the calorimeter are rejected. These cuts are summarized in the sketch after this list.
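For illustration, the preselection can be expressed as a vectorized mask over per-event arrays; all array names are assumptions, and the HET and containment criteria are reduced to their headline conditions.

```python
import numpy as np

def preselect(events):
    """events: dict of per-event numpy arrays (names are illustrative):
       'het'       - bool, high energy trigger fired
       'e_total'   - total BGO deposited energy in GeV
       'e_layer'   - (n_events, 14) energy per BGO layer in GeV
       'contained' - bool, shower axis inside the 93% central region
    """
    max_layer_frac = events["e_layer"].max(axis=1) / events["e_total"]
    return (events["het"]
            & (events["e_total"] > 20.0)   # above geomagnetic-cutoff region
            & (max_layer_frac < 0.35)      # reject side-entering events
            & events["contained"])         # lateral containment
```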
_(ii) Track selection_ - The track of the incoming particle is reconstructed by the STK [33]. In order to select the highest quality events, the STK information is combined with measurements from other sub-detectors. The first requirement is that the reconstructed track in the STK should match the shower axis in the BGO calorimeter. An additional requirement is that the STK track and the signal in the PSD are consistent. To achieve this, a PSD fiducial volume (covering \(\sim\)97% of the PSD active area, in the central region) is defined, with the condition of having the STK track projection within that specific volume.

_(iii) Charge selection_ - The different nuclei are selected according to the energy deposited in the PSD. A correction is applied to the signal of the PSD bars, accounting for light attenuation, detector alignment and incident angle [34; 35]. After this correction, the signal can be considered proportional to \(\mathrm{Z}^{2}\), in accordance with the Bethe-Bloch equation (with Z being the charge of the incident particle). The PSD global deposited energy for a particular event is obtained by combining the independent energy loss readings from each of the 2 PSD layers. The deposited energy depends not only on the charge of the incident particle but also on its primary energy. For this reason, the charge selection is performed in different bins of energy deposited in the BGO calorimeter. For each bin, the PSD global energy distribution of the events is fitted using a Landau convoluted with a Gaussian function (LanGaus). The Landau function describes fluctuations in the energy loss of ionizing particles, and the Gaussian is used to account for detector effects. From these fits, the most probable value (MPV) and the width (sigma) of the resulting function are obtained. Both the MPV and sigma values depend on the total energy deposited in the calorimeter. This dependence is modeled by fitting the MPV and sigma obtained from the fits with a fourth-order polynomial function of the logarithm of the energy, which is used to retrieve a charge selection condition for different values of deposited energy. The functions obtained for flight data and MC data were found to have a slight disagreement, probably because of an overestimation of the back-scattering effect in MC simulations. In order to account for this mismatch, a smearing correction is applied to the charge distributions of the MC results: the proton and helium peaks are corrected in order to match the MPV and sigma of flight data. Figure 1 shows an example of the PSD charge distributions for three different bins of deposited energy and their comparison with MC data, after the smearing correction. The vertical dashed lines show the charge selection conditions, with a maximum value of \(\mathrm{MPV_{He}}\) + 6\(\sigma_{\mathrm{He}}\) and a minimum value of \(\mathrm{MPV_{p}}\) - 2\(\sigma_{\mathrm{p}}\), where the sigma value is given by \(\sigma=\sqrt{Width_{\mathrm{Landau}}^{2}+\sigma_{\mathrm{Gaus}}^{2}}\). These limits are optimized and chosen to maximize the statistics while maintaining a low-background level (\(\lesssim\) 0.4% up to 10 TeV, see the background section).
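A sketch of such a LanGaus fit in Python, using scipy's Moyal distribution as a stand-in for the Landau (scipy provides no exact Landau) and a numerical convolution; the parameter names and the placeholder data are illustrative, not the analysis code.

```python
import numpy as np
from scipy.stats import moyal
from scipy.optimize import curve_fit

def langaus(x, mpv, eta, sigma, amp):
    """Landau (Moyal approximation) convolved with a Gaussian on a
    regular grid x; the Gaussian kernel must be shorter than the grid."""
    step = x[1] - x[0]
    t = np.arange(-5 * sigma, 5 * sigma + step, step)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    landau = moyal.pdf(x, loc=mpv, scale=eta)   # mpv ~ approximate peak location
    return amp * np.convolve(landau, kernel, mode="same")

# Usage: fit a PSD charge histogram (counts vs. bin centers).
centers = np.linspace(0.5, 10.0, 200)
counts = langaus(centers, 2.0, 0.3, 0.4, 1000.0)     # placeholder "data"
popt, _ = curve_fit(langaus, centers, counts, p0=[2.0, 0.3, 0.4, 1000.0])
```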
_Effective acceptance_ - After applying the selection cuts described in the previous section, the efficiencies are computed using MC simulations. Their comparison with flight data and the subsequent validation are reported in the appendix (see below). Afterwards, the effective acceptance (\(A^{\mathrm{i}}\)) can be evaluated. Figure 2 shows the acceptance of the DAMPE detector as a function of the primary energy, which can be described by the following expression:

\[A^{\mathrm{i}}(E_{\mathrm{T}}^{\mathrm{i}})=G_{\mathrm{gen}}\times\frac{N(E_{ \mathrm{T}}^{\mathrm{i}},\mathrm{sel})}{N(E_{\mathrm{T}}^{\mathrm{i}})}, \tag{1}\]

where \(G_{\mathrm{gen}}\) is the geometrical factor used to generate MC data, \(N(E_{\mathrm{T}}^{\mathrm{i}})\) is the number of MC generated events in the i-th bin of primary energy (\(E_{\mathrm{T}}\)), and \(N(E_{\mathrm{T}}^{\mathrm{i}},\mathrm{sel})\) the number of those MC events surviving the selection cuts. This result was found to be independent of the spectral shape and of the p/He mixture assumed in the simulation (see the discussion of the systematic uncertainties).

_Background Estimation_ - Protons constitute a background for helium, and vice-versa. By combining these nuclei in a single spectrum, the remaining background is very low, mainly comprising electrons, positrons and lithium nuclei. Electrons and positrons are separated from protons in the BGO calorimeter using shower morphology discrimination. A detailed description of the separation of electrons and positrons from protons can be found in [36]. For the present analysis, the contamination of electrons in the p+He spectrum is \(\sim\)0.5% at 40 GeV of energy deposited in the BGO calorimeter, and it decreases with increasing energy. The lithium background is estimated using a template fit of the energy released in the PSD, based on MC simulations of proton, helium and lithium. The contamination from lithium is lower than 0.3% up to 10 TeV, and it is \(\sim\)1.6% for energies higher than 10 TeV. The background from electrons-positrons and lithium is shown in Figure A4 of the appendix.

_Energy measurement & unfolding procedure_ - The energy of a hadronic shower cannot be completely contained in the calorimeter. In particular, for p and He, around 35% to 45% of the total energy is collected in the detector. Consequently, an unfolding procedure is necessary to obtain the energy spectrum of the incident particles. In this case, a Bayesian approach is adopted [37], in which the detector response is estimated from MC simulations of both proton and helium nuclei, after applying the selection cuts described in the _Event Selection_ section. The actual number of events in the i-th bin of true energy, \(N(E_{\rm T}^{\rm i})\), can be obtained from the following expression:

\[N(E_{\rm T}^{\rm i})=\sum_{j=1}^{n}P\left(E_{\rm T}^{\rm i}|E_{\rm O}^{\rm j} \right)N(E_{\rm O}^{\rm j}), \tag{2}\]

where \(N(E_{\rm O}^{\rm j})\) is the number of observed events in the j-th bin of energy deposited in the calorimeter (\(E_{\rm O}^{\rm j}\)) and \(P\left(E_{\rm T}^{\rm i}|E_{\rm O}^{\rm j}\right)\) the response matrix derived from MC simulations (see Figure 3). The color scale represents the conditional probability that p+He candidates with incident energy \(E_{\rm T}^{\rm i}\) are observed with deposited energy \(E_{\rm O}^{\rm j}\) in the calorimeter.
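A minimal numpy sketch of an iterative Bayesian (D'Agostini-style) unfolding of the kind behind Equation 2; the matrix \(P(E_{\rm T}|E_{\rm O})\) is obtained from the MC response via Bayes' theorem with the current prior, while the regularization details of the actual analysis are omitted and all inputs are placeholders.

```python
import numpy as np

def bayes_unfold(n_obs, response, n_iter=4):
    """Iterative Bayesian unfolding sketch.

    n_obs:    (m,) observed counts per deposited-energy bin
    response: (m, k) array with response[j, i] = P(E_O^j | E_T^i);
              column sums below 1 encode the selection inefficiency
    Returns estimated counts per true-energy bin, shape (k,).
    """
    k = response.shape[1]
    prior = np.full(k, n_obs.sum() / k)        # flat starting prior
    eff = response.sum(axis=0)                 # efficiency per true-energy bin
    for _ in range(n_iter):
        joint = response * prior               # (m, k): P(E_O^j|E_T^i) * prior_i
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-30)
        # post[j, i] is P(E_T^i | E_O^j), the matrix appearing in Eq. 2
        prior = (post * n_obs[:, None]).sum(axis=0) / eff
    return prior
```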
The energy of an event is determined from the BGO calorimeter measurements, which need to be corrected in order to obtain the true energy deposited in the calorimeter. For events with deposited energy \(\gtrsim\) 4 TeV in a single BGO bar, some readout channels might get saturated. For this reason, a method developed using MC simulations is used to correct saturated events [38]. Another correction is applied to account for Birks' quenching in the BGO calorimeter. Quenching is more significant for heavy nuclei, which produce more secondary particles with high charge and low velocity [39]. The BGO quenching is taken into account by including its effect in the MC simulations for ionization energy densities above 10 MeV/mm [40]. The effect is most important for incident energies around \(\sim\)80 GeV, where it would result in a \(\sim\)2% lower energy reconstruction.

_Results_ - The flux for each energy bin (\(\Phi_{\rm i}\)) can be written as follows:

\[\Phi_{\rm i}=\frac{\Delta N_{\rm i}}{\Delta T\times A_{\rm i}\times\Delta E_{ \rm i}}, \tag{3}\]

with \(\Delta N_{\rm i}\) the number of events in the i-th energy bin after the unfolding, \(\Delta T\) the total live time, \(A_{\rm i}\) the acceptance in the i-th bin, and \(\Delta E_{\rm i}\) the width of the i-th energy interval.

Figure 1: Distributions of PSD global energy, defined as the mean value of the energy released in the two PSD layers, for events with deposited energy in the BGO calorimeter in the ranges: 100–158 GeV (left), 0.6–1.0 TeV (center), and 10.0–15.8 TeV (right). Flight data are shown with black points, together with MC data of proton+helium, in red. The blue vertical dashed lines represent the charge selection ranges for p+He.

Figure 2: Effective acceptance of the p+He analysis obtained by using p and He MC samples, after applying all the selection cuts (see text).

Figure 3: Response matrix derived from MC simulations of p and He after applying the selection cuts. The colors represent the probability that an event in a bin of incident energy migrates to different bins of energy deposited in the BGO calorimeter.

Figure 4 shows the p+He flux in the energy range 46 GeV - 316 TeV, multiplied by a power of the energy and compared with other direct (Fig. 4, left) and indirect (Fig. 4, right) p+He measurements. The 1\(\sigma\) statistical uncertainties on DAMPE data are represented by error bars, while the continuous bands indicate the systematic uncertainties associated with the analysis procedure (inner band) and the total systematic uncertainties (outer band), including the one on the hadronic interaction model. The results are also reported in Table A1 of the appendix. The statistical uncertainties are associated with the Poissonian fluctuations of the number of detected events and the MC sample. However, due to the unfolding process, this uncertainty cannot be directly translated into the incident energy bins. To achieve this, a batch of toy-MC samples is generated according to a Poisson distribution for each deposited energy bin. The fluxes are then obtained through the regular unfolding procedure, and their root mean square is taken as the 1\(\sigma\) statistical error for each energy bin [11].
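A sketch of this toy-MC error propagation, reusing the `bayes_unfold` function from the previous sketch; the flux conversion follows Equation 3 and all inputs are placeholders.

```python
import numpy as np

def toy_mc_stat_errors(n_obs, response, live_time, acceptance, bin_widths,
                       n_toys=500, seed=0):
    """Propagate Poisson fluctuations of the observed counts through the
    unfolding into the flux of Equation 3 (assumes bayes_unfold above)."""
    rng = np.random.default_rng(seed)
    fluxes = []
    for _ in range(n_toys):
        toy = rng.poisson(n_obs)              # fluctuate each deposited-energy bin
        n_true = bayes_unfold(toy, response)
        fluxes.append(n_true / (live_time * acceptance * bin_widths))
    fluxes = np.array(fluxes)
    return fluxes.std(axis=0)                 # 1-sigma statistical error per bin
```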
The systematic uncertainty band is the result of several contributions. The main contribution (up to \(\sim\)15%, for energies larger than 100 TeV) comes from the hadronic interaction model used for the MC simulation. The GEANT4-FTFP_BERT model is found to be in better agreement with flight and test beam data [23; 24; 45], and is therefore chosen for the computation of the p+He spectrum. To quantify the uncertainty resulting from this choice, the p+He spectrum is computed also using the FLUKA DPMJET-3 model for helium and GEANT4-QGSP_BERT for protons. The difference between the two spectra is used to estimate the uncertainty on the hadronic model. Additional contributions to the systematic uncertainties are given by the event selection procedure. In this case, the selection efficiencies of MC and flight data are compared (more details are given in the appendix). Their difference is found to be \(\sim\)4% for the HET efficiency, \(\sim\)2% for the track selection efficiency, and a maximum of \(\sim\)3% for the charge selection efficiency at energies higher than 2 TeV, as shown in Figures A1, A2 and A3 of the appendix. The quadratic sum of the aforementioned differences in efficiencies (between MC and data) is taken as the systematic uncertainty of the acceptance, amounting to \(\sim\)5.4% for energies higher than 2 TeV. Another source of uncertainty is the proton and helium mixture assumed in the simulation. In the energy range considered in this work, the abundances of proton and helium are of the same order of magnitude. For this reason, the spectrum was initially computed assuming the same amount of proton and helium (50% proton and 50% helium). Afterwards, the spectrum was obtained again by weighting the proton and helium MC samples using fit functions to the DAMPE proton and helium spectra, respectively [11; 17]. The difference between the spectra obtained with these two approaches is taken as a source of systematic uncertainty and amounts to \(\sim\)5% at low energy and \(\sim\)2% for energies higher than 10 TeV. The spectrum with 50% proton and 50% helium is taken as the final result, to avoid introducing any bias while still assuming the correct order of magnitude. For more information on the systematic uncertainties and their energy dependence, please refer to Figure A5 of the appendix. The proton+helium spectrum has been fitted with a smoothly-broken power-law (SBPL) function, following an approach similar to the one used in [11; 17; 36; 46] (see details in the appendix). The result shows the presence of a spectral hardening around \(\sim\)600 GeV followed by a softening at \(28.8\pm 4.5\) TeV, measured with a significance of 6.6\(\sigma\). The hardening feature is in line with results obtained by other experiments [2-10; 12-18] and with previous results from DAMPE on the proton [11] and helium [17] spectra. Moreover, DAMPE revealed a softening feature in both the proton and helium spectra [11; 17], further confirmed by the present analysis (the fit is shown in Figure A6 of the appendix). The spectral fit parameters obtained with the three analyses are reported in Tab. 1, where \(E_{\rm b}\) is the energy at which the slope of the spectrum changes, \(\gamma\) represents the spectral index before \(E_{\rm b}\), and \(\Delta\gamma\) the difference between the two indices, before and after \(E_{\rm b}\). The \(E_{\rm b}\) values suggest rigidity-dependent features, even though a mass dependence cannot be ruled out. The DAMPE p+He spectrum is in agreement with other direct-detection experiments, within the systematic uncertainties. The p+He spectrum was also compared with the sum of the DAMPE proton and helium spectra, showing good compatibility. The comparison with indirect measurements shows an overall consistency, although this picture will be clarified by further observations made by future space-borne experiments (e.g. HERD [48; 49]) along with a spectral extension at higher energies.
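For illustration, one common SBPL parameterization and its fit are sketched below; the paper's exact functional form is given in its appendix and may differ, and with the sign convention used here \(\Delta\gamma<0\) corresponds to a softening at \(E_{\rm b}\).

```python
import numpy as np
from scipy.optimize import curve_fit

def sbpl(E, phi0, gamma, E_b, dgamma, s=5.0):
    """Smoothly-broken power law: spectral index -gamma well below E_b,
    -gamma + dgamma well above, with smoothness s (fixed). E in TeV."""
    return phi0 * E**(-gamma) * (1.0 + (E / E_b)**s)**(dgamma / s)

# Usage: fit flux points around the softening (placeholder "data").
E = np.logspace(0, 2.5, 30)                       # 1 - 316 TeV
flux = sbpl(E, 1.0, 2.51, 28.8, -0.43)
popt, _ = curve_fit(sbpl, E, flux, p0=[1.0, 2.5, 20.0, -0.3])
```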
The p+He spectrum shows a novel hint of a possible direct detection of a second hardening around 150 TeV, also suggested by preliminary results from the HAWC collaboration [50] and foreseen in [51].

\begin{table} \begin{tabular}{c c c c} \hline & Proton & Helium & Proton+Helium \\ \hline \(E_{\rm b}\) (TeV) & \(13.6^{+4.1}_{-4.8}\) & \(34.4^{+6.7+11.6}_{-0.8-0.0}\) & \(28.8^{+6.2+2.9}_{-4.4-0.0}\) \\ \(\gamma\) & \(2.60\pm 0.01\) & \(2.41^{+0.02+0.02}_{-0.02-0.00}\) & \(2.51^{+0.021+0.01}_{-0.024-0.00}\) \\ \(\Delta\gamma\) & \(-0.25\pm 0.07\) & \(-0.51^{+1.08+0.10}_{-0.20-0.00}\) & \(0.43^{+0.066+0.066}_{-0.057-0.00}\) \\ \hline \end{tabular} \end{table} Table 1: Results of the SBPL fit in the softening energy region for the DAMPE proton [11], helium [17] and p+He spectra (this work). For the helium and p+He results, the systematic uncertainties from the hadronic model are represented by the second uncertainty.

_Summary_ - The p+He spectrum was measured from 46 GeV to 316 TeV, using 72 months of data from the DAMPE satellite. The spectrum confirms the hardening and softening features, the latter measured with an unprecedented significance of 6.6\(\sigma\). The selection of proton+helium, instead of the individual proton and helium contributions, allows the collection of additional statistics, thus reaching higher energies with low background. Consequently, these results provide a link between direct and indirect cosmic-ray measurements, exhibiting a good general agreement among very different techniques, and pointing out deviations from a simple power-law behavior. _Acknowledgments_ - The DAMPE mission was funded by the strategic priority science and technology projects in space science of the Chinese Academy of Sciences (CAS). In China, the data analysis was supported by the National Key Research and Development Program of China (No. 2022YFF0503302) and the National Natural Science Foundation of China (Nos. 12220101003, 11921003, 11903084, 12003076 and 12022503), the CAS Project for Young Scientists in Basic Research (No. YSBR061), the Youth Innovation Promotion Association of CAS, the Young Elite Scientists Sponsorship Program by CAST (No. YESS20220197), and the Program for Innovative Talents and Entrepreneur in Jiangsu. In Europe, the activities and data analysis are supported by the Swiss National Science Foundation (SNSF), Switzerland, the National Institute for Nuclear Physics (INFN), Italy, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (No. 851103). ## Appendix: Supplemental Material ### Efficiency validations #### HET and track efficiency There are four implemented triggers for the DAMPE detector: the Unbiased Trigger (UNBT), the Minimum Ionizing Particle Trigger (MIPT), the Low Energy Trigger (LET) and the High Energy Trigger (HET) [31]. These triggers are subject to different pre-scaling factors depending on the latitude. The UNBT is the least restrictive and is used to estimate the HET efficiency, which can be calculated as follows: \[\epsilon_{\text{HET}}=\frac{N_{\text{HET+UNBT}}}{N_{\text{UNBT}}}, \tag{4}\] where \(N_{\text{HET+UNBT}}\) is the number of events that pass both the HET and UNBT triggers. Figure A1 shows the HET efficiency as a function of the deposited energy in the BGO for MC simulations and flight data.
The UNBT sample has a pre-scale factor of 1/512 (1/2048) when the satellite operates in (out of) the geographical latitude range [-20\({}^{\circ}\); 20\({}^{\circ}\)]. Therefore, at high energies, the statistical uncertainties of the flight data are expected to be relatively large.

Figure 4: p+He spectrum measured with the DAMPE detector (red circles), between 46 GeV and 316 TeV, compared with: direct measurements of p+He made by ATIC-02 [15], NUCLEON [14] and CREAM [13] (left), and indirect measurements from ARGO-YBJ+WFCT [41], HAWC [42], KASCADE [43] and EAS-TOP+MACRO [44] (right). Statistical uncertainties (1\(\sigma\)) are represented by error bars, while the continuous bands represent the systematic uncertainties on the analysis (inner band) and the total systematic uncertainties (outer band).

Tracks can be determined from hits in the STK and the BGO. However, the former is more precise and is therefore more commonly employed in standard analyses. The STK track efficiency is evaluated by selecting a sample of p and He based on the BGO tracks and the PSD charge, and considering only the events that pass the STK track selection. The STK track efficiency is given by: \[\epsilon_{\rm track}=\frac{N_{\rm STK+BGO+PSD}}{N_{\rm BGO+PSD}}, \tag{5}\] where \(N_{\rm BGO+PSD}\) is the number of events selected with the BGO track that match the PSD charge, and \(N_{\rm STK+BGO+PSD}\) is the number of events that pass the STK track selection cuts used in the present analysis. Figure A2 shows the track selection efficiency as a function of the deposited energy, both for MC simulations and flight data. #### Charge selection efficiency Charge selection efficiencies are calculated individually for each PSD layer, using the measurements coming from the first cluster point of the STK track. For example, the efficiency of the first PSD layer is determined by the ratio between the events selected using the charge of both PSD layers and the first cluster point of the STK track (\(N_{\rm PSD_{1}+PSD_{2}+STK_{1}}\)), and the events selected using only the second PSD layer and the first cluster of the STK track (\(N_{\rm PSD_{2}+STK_{1}}\)): \[\epsilon_{\rm PSD_{1}}=\frac{N_{\rm PSD_{1}+PSD_{2}+STK_{1}}}{N_{\rm PSD_{2}+ STK_{1}}}. \tag{6}\] An analogous method is used to compute the efficiency of the second layer. Figure A3 shows the charge selection efficiencies as a function of the deposited energy for each PSD layer. ### Background contamination Combining protons and helium in a single spectrum leaves a very low background, mainly consisting of electrons, positrons and lithium. The electron contamination is estimated by discriminating between the shower morphologies of protons and electrons. The method is thoroughly described in [36]. The lithium background is estimated using a template fit of the energy released in the PSD, based on MC simulations of protons, helium and lithium. Figure A4 shows the estimated electron and lithium backgrounds, which are smaller than 0.4% up to 10 TeV, and equal to \(\sim\)1.6% for energies larger than 10 TeV. ### Unfolded flux and systematic uncertainties Table A1 shows the p+He flux (see also Figure 4 of the main text). E, E\({}_{\rm low}\) and E\({}_{\rm high}\) are the median energy and bin edges of the corresponding flux \(\Phi\). \(\sigma_{\rm stat}\) is the statistical error, while \(\sigma_{\rm sys}^{\rm ana}\) and \(\sigma_{\rm sys}^{\rm had}\) are the systematic uncertainties on the analysis procedure and on the hadronic model, respectively (see the _Results_ section of the main text).
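The efficiency definitions in Eqs. (4)-(6) are all ratios of event counts within a reference sample. A minimal sketch of such a ratio estimator follows; the per-event flags are synthetic placeholders, and the simple binomial error is an illustrative choice (the actual analysis compares MC and flight-data efficiencies rather than quoting binomial errors).

```python
import numpy as np

def efficiency(pass_cut, in_reference):
    """Ratio estimator as in Eqs. (4)-(6): events passing the studied
    selection among those in the reference sample, with a binomial error."""
    n_ref = np.count_nonzero(in_reference)
    n_pass = np.count_nonzero(pass_cut & in_reference)
    eff = n_pass / n_ref
    return eff, np.sqrt(eff * (1.0 - eff) / n_ref)

# Synthetic example standing in for Eq. (4): every event below belongs to the
# (pre-scaled) UNBT reference sample; ~90% of them also fire the HET.
rng = np.random.default_rng(1)
unbt = np.ones(50_000, dtype=bool)
het = rng.random(unbt.size) < 0.90

eff, err = efficiency(het, unbt)
print(f"HET efficiency: {eff:.4f} +/- {err:.4f}")
```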
Figure A5 shows the statistical error and the relative uncertainty for each considered systematic effect, as described in the main text. ### Spectral fitting The proton+helium spectrum exhibits several features that are well described by a smoothly-broken power law: \[\Phi(E)=\Phi_{0}\left(\frac{E}{\rm TeV}\right)^{-\gamma}\left[1+\left(\frac{E} {E_{\rm B}}\right)^{s}\right]^{\Delta\gamma/s}, \tag{7}\] where \(\Phi_{0}\) is the flux normalization, \(\gamma\) the spectral index before the break energy (\(E_{\rm B}\)), \(\Delta\gamma\) the difference between the spectral indices before and after the break, and \(s\) the smoothness of the break. To properly account for the systematic uncertainties, a set of independent nuisance parameters is adopted and multiplied with the input model, following a procedure similar to the one used in [11; 17; 36; 46]. The \(\chi^{2}\) function is defined as follows: \[\chi^{2}=\sum_{i=1}^{n}\sum_{j=1}^{n}\left[\Phi\left(E_{\rm i}\right)S\left(E_{\rm i};{\rm w}\right)-\Phi_{\rm i}\right]C_{i,j}^{-1}\left[\Phi\left(E_{\rm j}\right)S\left(E_{\rm j};{\rm w}\right)-\Phi_{\rm j}\right]+\sum_{l=1}^{m}\left(\frac{1-w_{\rm l}}{\tilde{\sigma}_{\rm sys,l}}\right)^{2}, \tag{8}\] where \(E_{\rm i}\) is the median energy, \(\Phi_{\rm i}\) the flux in the i-th energy bin, \(\Phi(E_{\rm i})\) the model-predicted flux in the corresponding energy bin, \(S(E_{\rm i};{\rm w})\) a piece-wise function taking the value \(w_{\rm l}\) of the l-th nuisance parameter in the energy range it covers, \(m\) the number of nuisance parameters, \(C_{i,j}\) the covariance matrix of the fluxes obtained from the toy-MC simulations used to estimate the statistical uncertainties, and \(\tilde{\sigma}_{\rm sys,l}=\sigma_{\rm sys}^{\rm ana}/\Phi\) the relative systematic uncertainty of the data in such an energy range. The fit is performed in the energy range from 7 to 130 TeV, using two nuisance parameters. In order to account for the uncertainties on the fit parameters resulting from the selected hadronic model, the same fit is performed on the spectrum computed with MC samples simulated using the FLUKA DPMJET-3 and GEANT4-QGSP_BERT models. The difference between the two fits is taken as a second uncertainty on the parameters. The results of the fit are \(\Phi_{0}\) = (1.35 \(\pm\) 0.09) \(\times\) 10\({}^{-4}\) GeV\({}^{-1}\)m\({}^{-2}\)s\({}^{-1}\)sr\({}^{-1}\), \(\gamma\) = \(2.51^{+0.021+0.01}_{-0.024-0.00}\), \(\Delta\gamma\) = \(0.43^{+0.066+0.066}_{-0.057-0.00}\), E\({}_{\rm B}\) = \(28.8^{+6.2+2.9}_{-4.4-0.0}\) TeV, and \(\chi^{2}\)/dof = 0.9/2. The parameter \(s\) was fixed to 5, which is consistent with the DAMPE proton and helium fits of the softening [11; 17], considering their combined smoothness. To estimate the significance of the softening, the same energy region has been fitted with a single power-law function, yielding \(\chi^{2}\)/dof = 48.14/4 and resulting in a significance of 6.63\(\sigma\). The SBPL fit of the softening is shown in Figure A6.
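As an illustration, Eq. (7) and the significance estimate can be sketched in a few lines of Python using the central values quoted above. Two caveats: the sign convention for \(\Delta\gamma\) is taken at face value from Eq. (7), and the exact significance depends on the assumed difference in degrees of freedom between the two fits.

```python
import numpy as np
from scipy.stats import chi2, norm

def sbpl(E_GeV, phi0, gamma, E_b_GeV, dgamma, s=5.0):
    """Smoothly-broken power law of Eq. (7), with E in GeV and a TeV pivot."""
    E = np.asarray(E_GeV, dtype=float)
    return phi0 * (E / 1e3) ** (-gamma) * (1.0 + (E / E_b_GeV) ** s) ** (dgamma / s)

# Central values of the fit quoted in the text.
phi0, gamma, E_b, dgamma = 1.35e-4, 2.51, 28.8e3, 0.43
print(sbpl([7e3, 28.8e3, 1.3e5], phi0, gamma, E_b, dgamma))

# Significance of the softening from the chi-square improvement of the SBPL
# (chi2/dof = 0.9/2) over a single power law (chi2/dof = 48.14/4); the result
# is comparable to, though not exactly, the quoted 6.63 sigma.
p_value = chi2.sf(48.14 - 0.9, df=4 - 2)
print(f"~{norm.isf(p_value):.2f} sigma")
```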
2308.16480
Inter-finger Small Object Manipulation with DenseTact Optical Tactile Sensor
The ability to grasp and manipulate small objects in cluttered environments remains a significant challenge. This paper introduces a novel approach that utilizes a tactile sensor-equipped gripper with eight degrees of freedom to overcome these limitations. We employ DenseTact 2.0 for the gripper, enabling precise control and improved grasp success rates, particularly for small objects ranging from 5mm to 25mm. Our integrated strategy incorporates the robot arm, gripper, and sensor to effectively manipulate and orient small objects for subsequent classification. We contribute a specialized dataset designed for classifying these objects based on tactile sensor output and a new control algorithm for in-hand orientation tasks. Our system demonstrates an 88% grasp success rate and successfully classifies small objects in cluttered scenarios.
Won Kyung Do, Bianca Aumann, Camille Chungyoun, Monroe Kennedy III
2023-08-31T06:26:51Z
http://arxiv.org/abs/2308.16480v1
# Inter-finger Small Object Manipulation with DenseTact Optical Tactile Sensor ###### Abstract The ability to grasp and manipulate small objects in cluttered environments remains a significant challenge. This paper introduces a novel approach that utilizes a tactile sensor-equipped gripper with eight degrees of freedom to overcome these limitations. We employ DenseTact 2.0 for the gripper, enabling precise control and improved grasp success rates, particularly for small objects ranging from 5mm to 25mm. Our integrated strategy incorporates the robot arm, gripper, and sensor to effectively manipulate and orient small objects for subsequent classification. We contribute a specialized dataset designed for classifying these objects based on tactile sensor output and a new control algorithm for in-hand orientation tasks. Our system demonstrates an 88% grasp success rate and successfully classifies small objects in cluttered scenarios. ## I Introduction Grasping objects commonly found in daily environments is essential for human-robot collaboration tasks. Nevertheless, in-hand manipulation and grasping in cluttered settings continue to pose significant challenges in robotics. Recent research has increasingly focused on incorporating tactile feedback as a vital element in control systems to manage contact kinematics and manipulation tasks more effectively. Despite this, the specific issue of grasping small objects in cluttered environments remains largely unresolved. When a robot interacts with an object, the contact state changes, requiring the manipulation strategy to be revised on the fly. Such adaptability is natural in human manipulation but challenging for robots. Addressing it requires enabling robots to manipulate or identify small objects in cluttered scenarios. Tactile sensors are instrumental in overcoming these issues. When grasping objects in cluttered spaces, traditional external vision systems often prove insufficient. Visuotactile sensors, however, offer a remedy by providing high-resolution data in localized areas. Additionally, hemispherical tactile sensors like DenseTact offer enhanced sensing capabilities and greater adaptability in terms of deformation, which is advantageous for compliance control. In this study, we use tactile sensing and extra degrees of freedom on the gripper to tackle grasping, manipulating, and classifying small objects in cluttered environments. The problem is complicated by the transient dynamics of small objects, the difficulty of simulating them, and the inadequacy of traditional controllers after grasp. The primary contributions of this paper are: 1. Development of a novel gripper with DenseTact 2.0, featuring 8 degrees of freedom for rolling manipulation. 2. Establishment of an integrated strategy involving the robot arm, gripper, and sensor for the manipulation and orientation of small objects for classification. 3. Creation of a dataset for classifying small objects based on tactile sensor outputs. 4. Successful classification and manipulation of objects smaller than the sensor and gripper sizes. 5. Design of a new control algorithm for in-hand orientation tasks involving 'unknown' small objects. The paper is structured as follows: Section II reviews related works; Section III outlines the problems addressed; Section IV discusses the methodologies for gripper development, perception, object grasping, manipulation, and classification; Section V presents the results and demonstrations; and conclusions and future work are discussed in Section VI.
## II Related Works Grasping and manipulating small objects through tactile sensor input is a complex endeavor. A plethora of research initiatives has been aimed at various facets of this task. Specifically, in-hand manipulation has emerged as an active research domain in recent years. Works such as those cited in [1, 2] have proficiently tackled challenges associated

Fig. 1: **Overview of the grasping and classifying of small objects in cluttered environments.** The left image shows the process of grasp and control to classify the object, the top right shows the resulting images from the sensor, and the bottom right shows the classification result.
2309.09851
Order boundedness and essential norm of generalized weighted composition operators on Bergman spaces with doubling weights
In this paper, the order boundedness and essential norm of generalized weighted composition operators on Bergman spaces with doubling weights are characterized. In particular, we estimate the essential norm of these operators on weighted Bergman spaces by using the order reduction method.
Zuoling Liu
2023-09-18T15:10:48Z
http://arxiv.org/abs/2309.09851v1
Order boundedness and essential norm of generalized weighted composition operators on Bergman spaces with doubling weights ###### Abstract. In this paper, the order boundedness and essential norm of generalized weighted composition operators on Bergman spaces with doubling weights are characterized. In particular, we estimate the essential norm of these operators on weighted Bergman spaces by using the order reduction method. Key words and phrases: composition operator, Bergman space, order bounded, essential norm 2000 Mathematics Subject Classification: Primary 47B38, 47B33, Secondary 30H05, 30H20 ## 1. Introduction Let \(\mathbb{D}=\{z:\ |z|<1\}\) be the open unit disk in the complex plane \(\mathbb{C}\). Let \(\omega:\mathbb{D}\to[0,\infty)\) be an integrable and radial function, that is, \(\omega(z)=\omega(|z|)\) for all \(z\in\mathbb{D}\). Denote \(\hat{\omega}(z)=\int_{|z|}^{1}\omega(s)ds\) for all \(z\in\mathbb{D}\). A weight \(\omega\) belongs to the class \(\hat{\mathcal{D}}\) if \(\hat{\omega}(r)\leq C\hat{\omega}(\frac{1+r}{2})\), where \(C=C(\omega)\geq 1\) and \(0\leq r<1\). Furthermore, we write \(\omega\in\check{\mathcal{D}}\) if there exist constants \(\vartheta=\vartheta(\omega)>1\) and \(C=C(\omega)>1\) such that \(\hat{\omega}(r)\geq C\hat{\omega}(1-\frac{1-r}{\vartheta})\) for all \(0\leq r<1\). We denote \(\mathcal{D}=\hat{\mathcal{D}}\cap\check{\mathcal{D}}\) and \(\omega(E)=\int_{E}\omega dA\) for each measurable set \(E\subset\mathbb{D}\). Let \(H(\mathbb{D})\) be the space of analytic functions on \(\mathbb{D}\). For \(0<p<\infty\) and a radial weight \(\omega\), the Bergman space \(A_{\omega}^{p}\) associated to \(\omega\) is defined by \[A_{\omega}^{p}=\left\{f\in H(\mathbb{D}):\|f\|_{A_{\omega}^{p}}^{p}=\int_{ \mathbb{D}}|f(z)|^{p}\omega(z)dA(z)<\infty\right\},\] where \(dA(z)=\frac{1}{\pi}dxdy\) is the normalized Lebesgue area measure on \(\mathbb{D}\). As usual, let \(A_{\alpha}^{p}\) stand for the classical weighted Bergman space induced by the radial weight \(\omega(z)=(1-|z|^{2})^{\alpha}\), where \(-1<\alpha<\infty\). \(A_{\omega}^{p}\) is a Banach space for \(1\leq p<\infty\) under the norm \(\|\cdot\|_{A_{\omega}^{p}}\). See [23, 8] for the theory of weighted Bergman spaces. Let \(q>0\) and \(\mu\) be a finite positive Borel measure on \(\mathbb{D}\). We say that \(f\in L_{\mu}^{q}\) if the measurable function \(f\) satisfies \[\|f\|_{L_{\mu}^{q}}^{q}=\int_{\mathbb{D}}|f(w)|^{q}d\mu(w)<\infty.\] Suppose \(\varphi\) is an analytic map of \(\mathbb{D}\) into itself. Every analytic self-map \(\varphi\) induces a composition operator \(C_{\varphi}\) on \(H(\mathbb{D})\) by \[C_{\varphi}(f)(z)=f(\varphi(z)),\ f\in H(\mathbb{D}),\] for all \(z\in\mathbb{D}\). See [1] and [27] for the theory of composition operators. For \(n\in\mathbb{N}\), \(D^{n}f=f^{(n)}\) is the differential operator on \(H(\mathbb{D})\). Let \(n\in\mathbb{N}\cup\{0\}\) and \(u\in H(\mathbb{D})\). The generalized weighted composition operator, denoted by \(D^{n}_{\varphi,u}\), is defined by \[D^{n}_{\varphi,u}(f)(z)=u(z)f^{(n)}(\varphi(z)),\ f\in H(\mathbb{D}).\] The generalized weighted composition operator was introduced by Zhu in [36]. Clearly, if \(n=0\) and \(u\equiv 1\), the operator \(D^{n}_{\varphi,u}\) becomes the composition operator \(C_{\varphi}\). When \(n=0\), the operator \(D^{n}_{\varphi,u}\) is called the weighted composition operator, usually denoted by \(uC_{\varphi}\). When \(n=1\) and \(u(z)=\varphi^{\prime}(z)\), we have \(D^{n}_{\varphi,u}=DC_{\varphi}\).
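For orientation, the classical weights \(\omega_{\alpha}(z)=(1-|z|^{2})^{\alpha}\) with \(\alpha>-1\) belong to \(\mathcal{D}\): a direct computation gives \[\hat{\omega}_{\alpha}(r)=\int_{r}^{1}(1-s^{2})^{\alpha}ds\asymp(1-r)^{\alpha+1},\] so \(\hat{\omega}_{\alpha}(r)\asymp 2^{\alpha+1}\hat{\omega}_{\alpha}\big{(}\frac{1+r}{2}\big{)}\), which yields \(\omega_{\alpha}\in\hat{\mathcal{D}}\), while \(\hat{\omega}_{\alpha}\big{(}1-\frac{1-r}{\vartheta}\big{)}\asymp\vartheta^{-(\alpha+1)}\hat{\omega}_{\alpha}(r)\) shows that \(\omega_{\alpha}\in\check{\mathcal{D}}\) once \(\vartheta\) is chosen large enough.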
If \(n=1\) and \(u(z)\equiv 1\), then \(D^{n}_{\varphi,u}=C_{\varphi}D\). The operators \(DC_{\varphi}\) and \(C_{\varphi}D\) have been studied in [19, 10, 20, 21]. For some recent work on generalized weighted composition operators, we refer the interested readers to [34, 37, 38] and [18]. Let \(X\) be a quasi-Banach space and \(\mu\) be a positive measure on \(\mathbb{D}\). Assume \(0<q<\infty\) and let \(T:X\to L^{q}_{\mu}\) be an operator. We say that \(T\) is order bounded if \(T\) maps the closed unit ball \(B_{X}\) of \(X\) into an order interval of \(L^{q}_{\mu}\). In other words, there exists a non-negative element \(h\in L^{q}_{\mu}\) such that \(|Tf|\leq h\) almost everywhere with respect to \(\mu\) for all \(f\in B_{X}\). This concept has been studied in several references [28, 29, 31]. Order bounded composition operators on Hardy spaces were introduced by Hunziker and Jarchow in [11]. Motivated by [11], Hibschweiler [9] characterized order bounded composition operators acting on standard weighted Bergman spaces. Later, Ueki [31] considered the order boundedness of weighted composition operators on standard weighted Bergman spaces. Subsequently, Wolf [33] studied order bounded weighted composition operators acting on Bergman spaces with general weights. Recently, the order boundedness of weighted composition operators acting between Banach spaces such as Hardy spaces, weighted Bergman spaces, weighted Dirichlet spaces and derivative Hardy spaces was discussed (see [7, 12, 28, 29]). Motivated by [31, 33], we investigate the order boundedness of \(D^{n}_{\varphi,u}\) on Bergman spaces with doubling weights. Let \(X\) and \(Y\) be Banach spaces. The essential norm of a linear operator \(T:X\to Y\) is defined as \[\|T\|_{e,\ X\to Y}=\inf_{K}\|T-K\|_{X\to Y},\] where \(K\) runs over the compact operators and \(\|\cdot\|_{X\to Y}\) is the operator norm. It is obvious that \(\|T\|_{e,\ X\to Y}=0\) if and only if \(T\) is a compact operator. The study of the essential norm of composition operators on Hardy spaces and Bergman spaces dates back to Shapiro [26]. Cuckovic and Zhao extended Shapiro's results [26] to standard weighted Bergman spaces and Hardy spaces in [2, 3]. After their works, Demazeux [4] considered the essential norm of weighted composition operators on Hardy spaces in terms of the pullback measure for \(1\leq p,q\leq\infty\). In light of their work, the authors of [5] investigated the boundedness and essential norm of weighted composition operators on Bergman spaces induced by doubling weights. Based on their work and inspired by the idea from [37], Liu [13] studied the boundedness and compactness of generalized weighted composition operators \(D^{n}_{\varphi,u}\) between different Bergman spaces with doubling weights. See [5, 13, 14, 15] for more results on composition operators on Bergman spaces \(A^{p}_{\omega}\). The essential norm of composition operators on Bergman spaces with admissible Bekolle weights was studied in [30]. Recently, Esmaeili and Kellay [6] considered the essential norm of weighted composition operators on weighted Bergman spaces. Many authors have considered the essential norm of composition operators on different weighted Bergman spaces; see [17, 32] and references therein. Motivated by the ideas from [5, 6, 30], we estimate the essential norm of generalized weighted composition operators on Bergman spaces with doubling weights.
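Note that order boundedness implies boundedness: if \(|Tf|\leq h\) \(\mu\)-almost everywhere for all \(f\in B_{X}\), then \[\|Tf\|_{L^{q}_{\mu}}=\Big{(}\int_{\mathbb{D}}|Tf|^{q}d\mu\Big{)}^{1/q}\leq\Big{(}\int_{\mathbb{D}}h^{q}d\mu\Big{)}^{1/q}=\|h\|_{L^{q}_{\mu}}<\infty\] uniformly over the unit ball \(B_{X}\).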
In this paper, we denote constants by \(C\) which are positive and may differ from one occurrence to the other. The notation \(a\lesssim b\) means that there is a positive constant \(C\) such that \(a\leq Cb\). The symbol \(a\asymp b\) means that both \(a\lesssim b\) and \(b\lesssim a\) hold. ## 2. Preliminary results The pseudo-hyperbolic metric \(\rho\) on \(\mathbb{D}\) is defined as \[\rho(z,w)=|\varphi_{w}(z)|=\Big{|}\frac{w-z}{1-\bar{w}z}\Big{|},\] for \(z,w\in\mathbb{D}\). For \(r\in(0,1)\), the pseudo-hyperbolic disk is defined by \[\Delta(w,r)=\{z\in\mathbb{D},\;\rho(z,w)<r\}.\] For \(z\in\mathbb{D}\backslash\{0\}\), \[S(z)=\left\{\xi\in\mathbb{D}:\;|z|\leq|\xi|<1,\;|\arg\xi-\arg z|<\frac{1-|z|}{ 2}\right\}\] is called a Carleson square. We set \(S(0)=\mathbb{D}\). **Lemma 2.1**.: [13, Lemma 2.1] Let \(\omega\in\mathcal{D}\), \(0<p<\infty\) and \(n\in\mathbb{N}\bigcup\{0\}\). If \(f\in A^{p}_{\omega}\), then there exists a constant \(C=C(\omega)>0\) such that \[|f^{(n)}(z)|\leq C\frac{\|f\|_{A^{p}_{\omega}}}{(\omega(S(z)))^{1/p}(1-|z|)^{n}}\] for all \(z\in\mathbb{D}\). **Lemma 2.2**.: [13, Proposition 3.1] Let \(0<p\leq q<\infty\), \(\omega\in\mathcal{D}\) and \(n\in\mathbb{N}\bigcup\{0\}\). Let \(\mu\) be a positive Borel measure on \(\mathbb{D}\). Then there exists \(r=r(\omega)\in(0,1)\) such that the following statements hold. * \(D^{n}:A^{p}_{\omega}\to L^{q}_{\mu}\) is bounded if and only if \[\sup_{z\in\mathbb{D}}\frac{\mu(\Delta(z,r))}{(\omega(S(z)))^{q/p}(1-|z|)^{nq} }<\infty.\] (2.1) Moreover, \[\|D^{n}\|^{q}_{A^{p}_{\omega}\to L^{q}_{\mu}}\asymp\sup_{z\in\mathbb{D}}\frac{ \mu(\Delta(z,\;r))}{(1-|z|)^{nq}(\omega(S(z)))^{q/p}}.\] * \(D^{n}:A^{p}_{\omega}\to L^{q}_{\mu}\) is compact if and only if \[\lim_{|z|\to 1^{-}}\frac{\mu(\Delta(z,r))}{(\omega(S(z)))^{q/p}(1-|z|)^{nq}}=0.\] (2.2) In light of Lemma 2.2, we get the following lemma. **Lemma 2.3**.: Let \(0<p\leq q<\infty\), \(\omega\in\mathcal{D}\) and \(n\in\mathbb{N}\bigcup\{0\}\). Assume that \(\mu\) is a positive Borel measure on \(\mathbb{D}\), \(r=r(\omega)\in(0,1)\). Then there exists a large enough \(\delta=\delta(\omega,p)>0\) such that \[\|D^{n}\|_{A_{\omega}^{p}\to L_{\mu}^{q}}^{q}\asymp\sup_{a\in\mathbb{D}}\int_{ \mathbb{D}}\frac{(1-|a|)^{\delta q}}{|1-\overline{a}w|^{(\delta+n)q}(\omega(S( a)))^{q/p}}d\mu(w).\] Proof.: For \(a\in\mathbb{D}\) and \(r\in(0,1)\), we have \[\frac{\mu(\Delta(a,\ r))}{(1-|a|)^{nq}(\omega(S(a)))^{q/p}} =\int_{\Delta(a,r)}\frac{1}{(1-|a|)^{nq}(\omega(S(a)))^{q/p}}d\mu(w)\] \[\asymp\int_{\Delta(a,r)}\frac{(1-|a|)^{\delta q}}{|1-\overline{a }w|^{(\delta+n)q}(\omega(S(a)))^{q/p}}d\mu(w)\] \[\lesssim\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}}{|1-\overline{ a}w|^{(\delta+n)q}(\omega(S(a)))^{q/p}}d\mu(w).\] By Lemma 2.2, we find that \[\|D^{n}\|_{A_{\omega}^{p}\to L_{\mu}^{q}}^{q} \asymp\sup_{a\in\mathbb{D}}\frac{\mu(\Delta(a,\ r))}{(1-|a|)^{nq} (\omega(S(a)))^{q/p}}\] \[\lesssim\sup_{a\in\mathbb{D}}\int_{\mathbb{D}}\frac{(1-|a|)^{ \delta q}}{|1-\overline{a}w|^{(\delta+n)q}(\omega(S(a)))^{q/p}}d\mu(w).\] By [22, Lemma 3.1], we can choose some large enough \(\delta=\delta(\omega,p)>0\) and \[f_{a}(z)=\Big{(}\frac{1-|a|}{1-\overline{a}z}\Big{)}^{\delta}\omega(S(a))^{-1 /p},\ a,z\in\mathbb{D}, \tag{2.3}\] then \(\|f_{a}\|_{A_{\omega}^{p}}\lesssim 1\).
Thus, we get \[\int_{\mathbb{D}}|f_{a}^{(n)}(z)|^{q}d\mu(z)\asymp\int_{\mathbb{D}}\frac{|a|^{nq}(1 -|a|)^{\delta q}}{|1-\overline{a}z|^{(\delta+n)q}(\omega(S(a)))^{q/p}}d\mu(z) \lesssim\|D^{n}\|_{A_{\omega}^{p}\to L_{\mu}^{q}}^{q}.\] The proof is complete. We use the pullback measure as an important tool to study the generalized weighted composition operators between different Bergman spaces with doubling weights. Let \(\varphi\) be an analytic self-map of \(\mathbb{D}\) and \(0<q<\infty\). Assuming that \(u\in H(\mathbb{D})\), we define a finite positive Borel measure \(\mu_{\varphi,\ u}^{\nu}\) on \(\mathbb{D}\) as follows: \[\mu_{\varphi,\ u}^{\nu}(E)=\int_{\varphi^{-1}(E)}|u(z)|^{q}\nu(z)dA(z),\] where \(E\) is a Borel subset of the unit disk \(\mathbb{D}\). For \(D_{\varphi,u}^{n}:A_{\omega}^{p}\to A_{\nu}^{q}\), it can be clearly seen that \[\|D_{\varphi,u}^{n}f\|^{q}_{A_{\nu}^{q}}=\int_{\mathbb{D}}|f^{(n)}(z)|^{q}d \mu_{\varphi,\ u}^{\nu}(z),\ f\in A_{\omega}^{p}. \tag{2.4}\] **Lemma 2.4**.: Let \(\omega\in\mathcal{D}\), \(n\in\mathbb{N}\bigcup\{0\}\), \(0<p<\infty\) and \(0<r=r(\omega)<1\). If \(f\in A_{\omega}^{p}\), there exists a constant \(C=C(\omega)>0\) such that \[|f^{(n)}(z)|^{p}\leq\frac{C}{\omega(S(z))}\int_{\Delta(z,r)}\frac{|f(w)|^{p}}{ (1-|w|)^{np}}\widetilde{\omega}(w)dA(w)\] for \(z\in\mathbb{D}\). Here \(\widetilde{\omega}(z)=\frac{\hat{\omega}(z)}{1-|z|}\). Proof.: It is clear that \(1-|z|\asymp 1-|w|\) for \(w\in\Delta(z,r)\). Since \(\omega\in\mathcal{D}\), by [22, Lemma 2.1] and [24, (2.27)], there exist constants \(0<\alpha=\alpha(\omega)<\beta=\beta(\omega)<\infty\) and \(C=C(\omega)\geq 1\) such that \[\frac{1}{C}\left(\frac{1-r}{1-t}\right)^{\alpha}\leq\frac{\hat{ \omega}(r)}{\hat{\omega}(t)}\leq C\left(\frac{1-r}{1-t}\right)^{\beta}, \tag{2.5}\] where \(0\leq r\leq t<1\). By (2.5), we know that \(\hat{\omega}(z)\asymp\hat{\omega}(w)\) for \(w\in\Delta(z,r)\). By a direct calculation (the Carleson square \(S(z)\) has angular width \(1-|z|\) and radial section \((|z|,1)\)), we know that \(\hat{\omega}(z)(1-|z|)\asymp\omega(S(z))\) for \(\omega\in\mathcal{D}\). By [16, Lemma 2.1], we obtain \[|f^{(n)}(z)|^{p} \leq\frac{C}{(1-|z|)^{2+np}}\int_{\Delta(z,r)}|f(w)|^{p}dA(w)\] \[\asymp\frac{C}{\hat{\omega}(z)(1-|z|)(1-|z|)^{np}}\int_{\Delta(z,r)}|f(w)|^{p}\frac{\hat{\omega}(w)}{(1-|w|)}dA(w)\] \[\asymp\frac{C}{\omega(S(z))}\int_{\Delta(z,r)}\frac{|f(w)|^{p}}{ (1-|w|)^{np}}\widetilde{\omega}(w)dA(w).\] **Lemma 2.5**.: [13, Theorem 1.3] Let \(0<p\leq q<\infty\) and \(\omega,\nu\in\mathcal{D}\). Assume that \(\varphi\) is an analytic self-map of \(\mathbb{D}\), \(u\in A^{q}_{\nu}\) and \(n\in\mathbb{N}\cup\{0\}\). Then \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is bounded if and only if there exists a large enough \(\delta=\delta(\omega,p)>0\) such that \[\sup_{a\in\mathbb{D}}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}| u(\xi)|^{q}\nu(\xi)}{|1-\overline{a}\varphi(\xi)|^{(\delta+n)q}(\omega(S(a))) ^{q/p}}dA(\xi)<\infty. \tag{2.6}\] ## 3. Order boundedness of \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) for \(0<p,q<\infty\) Next, we will study the order boundedness of \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) for \(0<p,q<\infty\). **Theorem 3.1**.: Let \(0<p,\ q<\infty\) and \(\omega,\nu\in\mathcal{D}\). Suppose \(n\in\mathbb{N}\cup\{0\}\). Let \(\varphi\) be an analytic self-map of \(\mathbb{D}\) and \(u\in H(\mathbb{D})\). Then \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is order bounded if and only if \[\int_{\mathbb{D}}\frac{|u(z)|^{q}\nu(z)}{(1-|\varphi(z)|^{2})^{ nq}(\omega(S(\varphi(z))))^{q/p}}dA(z)<\infty.
\tag{3.1}\] Proof.: Assume that \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is order bounded. There exists a non-negative function \(h\in L^{q}_{\nu}\) such that \(|D^{n}_{\varphi,u}f(z)|\leq h(z)\) for all \(z\in\mathbb{D}\) and \(f\in A^{p}_{\omega}\) with \(\|f\|_{A^{p}_{\omega}}\leq 1\). To get (3.1), we set \[I_{1}=\int_{[z\in\mathbb{D},|\varphi(z)|>\frac{1}{2}]}\frac{|u(z) |^{q}\nu(z)}{(1-|\varphi(z)|^{2})^{nq}(\omega(S(\varphi(z))))^{q/p}}dA(z) \tag{3.2}\] and \[I_{2}=\int_{[z\in\mathbb{D},|\varphi(z)|\leq\frac{1}{2}]}\frac{| u(z)|^{q}\nu(z)}{(1-|\varphi(z)|^{2})^{nq}(\omega(S(\varphi(z))))^{q/p}}dA(z). \tag{3.3}\] Following (2.3), for \(z\in\mathbb{D}\) take \[f_{\varphi(z)}(w)=\frac{(1-|\varphi(z)|^{2})^{\delta}}{(1-\overline{\varphi(z)}w )^{\delta}(\omega(S(\varphi(z))))^{1/p}},\ w\in\mathbb{D}.\] For some large enough \(\delta=\delta(\omega,p)>0\), we know that \(f_{\varphi(z)}\in A^{p}_{\omega}\) and \(\|f_{\varphi(z)}\|_{A^{p}_{\omega}}\lesssim 1\). Then \[f^{(n)}_{\varphi(z)}(w)=C_{\delta,n}\frac{(1-|\varphi(z)|^{2})^{\delta}( \overline{\varphi(z)})^{n}}{(1-\overline{\varphi(z)}w)^{\delta+n}(\omega(S( \varphi(z))))^{1/p}}, \tag{3.4}\] where \(C_{\delta,n}=\delta(\delta+1)(\delta+2)...(\delta+n-1)\). By a direct computation, for \(z\in\mathbb{D}\), we have \[|D^{n}_{\varphi,u}f_{\varphi(z)}(w)|=\frac{C_{\delta,n}(1-|\varphi(z)|^{2})^{ \delta}|u(w)||\varphi(z)|^{n}}{|1-\overline{\varphi(z)}\varphi(w)|^{n+\delta} (\omega(S(\varphi(z))))^{1/p}}\leq h(w).\] So, by taking \(w=z\), we can get \[\frac{C_{\delta,n}|u(z)||\varphi(z)|^{n}}{(1-|\varphi(z)|^{2})^{n}(\omega(S( \varphi(z))))^{1/p}}=|D^{n}_{\varphi,u}f_{\varphi(z)}(z)|\leq h(z).\] For \(z\in\mathbb{D}\) such that \(|\varphi(z)|>\frac{1}{2}\), we get \(|\varphi(z)|^{n}>\frac{1}{2^{n}}\). Therefore, \[\begin{split} I_{1}&=\int_{[z\in\mathbb{D},|\varphi (z)|>\frac{1}{2}]}\frac{|u(z)|^{q}}{(1-|\varphi(z)|^{2})^{nq}(\omega(S( \varphi(z))))^{q/p}}\nu(z)dA(z)\\ &\leq\frac{2^{nq}}{C_{\delta,n}^{q}}\int_{\{z\in\mathbb{D},|\varphi (z)|>\frac{1}{2}\}}\Big{|}\frac{C_{\delta,n}|u(z)||\varphi(z)|^{n}}{(1-|\varphi (z)|^{2})^{n}(\omega(S(\varphi(z))))^{1/p}}\Big{|}^{q}\nu(z)dA(z)\\ &\lesssim\int_{\mathbb{D}}\Big{|}\frac{C_{\delta,n}|u(z)||\varphi (z)|^{n}}{(1-|\varphi(z)|^{2})^{n}(\omega(S(\varphi(z))))^{1/p}}\Big{|}^{q} \nu(z)dA(z)\\ &\leq\int_{\mathbb{D}}|h(z)|^{q}\nu(z)dA(z)<\infty.\end{split} \tag{3.5}\] For \(z\in\mathbb{D}\) such that \(|\varphi(z)|\leq\frac{1}{2}\), we can find a constant \(C>0\) such that \[\frac{1}{(1-|\varphi(z)|^{2})^{n}(\omega(S(\varphi(z))))^{1/p}}\leq C. \tag{3.6}\] On the other hand, since \(P_{n}(z)=\frac{z^{n}}{\|z^{n}\|_{A^{p}_{\omega}}}\) is in \(A^{p}_{\omega}\) and \(\|P_{n}\|_{A^{p}_{\omega}}\leq 1\), by the order boundedness of the operator \(D^{n}_{\varphi,u}\), for \(z\in\mathbb{D}\), we obtain \[\frac{n!}{\|z^{n}\|_{A^{p}_{\omega}}}|u(z)|=|D^{n}_{\varphi,u}P_{n}(z)|\leq h( z).
\tag{3.7}\] Since \(n\) is fixed, from (3.6) and (3.7), for \(z\in\mathbb{D}\), we get \[\begin{split} I_{2}&=\int_{[z\in\mathbb{D},|\varphi (z)|\leq\frac{1}{2}]}\frac{|u(z)|^{q}}{(1-|\varphi(z)|^{2})^{nq}(\omega(S( \varphi(z))))^{q/p}}\nu(z)dA(z)\\ &\leq C\int_{[z\in\mathbb{D},|\varphi(z)|\leq\frac{1}{2}]}|u(z)|^ {q}\nu(z)dA(z)\lesssim\int_{\mathbb{D}}|u(z)|^{q}\nu(z)dA(z)\\ &\lesssim\int_{\mathbb{D}}|h(z)|^{q}\nu(z)dA(z)<\infty.\end{split} \tag{3.8}\] By (3.5) and (3.8), we see that \[\int_{\mathbb{D}}\frac{|u(z)|^{q}}{(1-|\varphi(z)|^{2})^{nq}(\omega(S(\varphi(z)) ))^{q/p}}\nu(z)dA(z)=I_{1}+I_{2}<\infty.\] Thus, the condition (3.1) holds. Conversely, assume that condition (3.1) holds. Define \[h(z)=\frac{|u(z)|}{(1-|\varphi(z)|^{2})^{n}(\omega(S(\varphi(z))))^{1/p}}.\] Then \(h\) is a nonnegative function in \(L^{q}_{\nu}\). For any function \(f\in A^{p}_{\omega}\) with \(\|f\|_{A^{p}_{\omega}}\leq 1\), by Lemma 2.1, there is a constant \(C=C(\omega)>0\) such that \[|D^{n}_{\varphi,u}f(z)|=|u(z)f^{(n)}(\varphi(z))|\leq C\frac{|u(z)|}{(1-| \varphi(z)|^{2})^{n}(\omega(S(\varphi(z))))^{1/p}}=Ch(z)\] for any \(z\in\mathbb{D}\). Thus, \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is order bounded. The proof is complete. ## 4. Essential norm of \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) for \(1\leq p\leq q<\infty\) We begin this section with an approximation of the essential norm of the bounded operator \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) for \(1\leq p\leq q<\infty\). If \(f\in H(\mathbb{D})\), then \(f(z)=\sum_{k=0}^{\infty}a_{k}z^{k}\). For any \(m\geq 1\), let \(R_{m}f(z)=\sum_{k=m}^{\infty}a_{k}z^{k}\) and \(T_{m}=I-R_{m}\), where \(If=f\) is the identity operator. In order to prove one of the main results, we need the following lemmas. **Lemma 4.1**.: [35, Proposition 1] Suppose \(X\) is a Banach space of holomorphic functions in \(\mathbb{D}\) with the property that the polynomials are dense in \(X\). Then \(\|T_{m}f-f\|_{X}\to 0\) as \(m\to\infty\) for each \(f\in X\) if and only if \(\sup\{\|T_{m}\|:m\geq 1\}<\infty\). **Lemma 4.2**.: [35, Corollary 3] The Taylor series of every function in \(H^{p}\) converges in norm if and only if \(1<p<\infty\). **Lemma 4.3**.: Let \(1<p<\infty\) and let \(\omega\) be a radial weight. Then \(\|T_{m}f-f\|_{A^{p}_{\omega}}\to 0\) as \(m\to\infty\) for each \(f\in A^{p}_{\omega}\). Moreover, \(\sup\{\|R_{m}\|_{A^{p}_{\omega}\to A^{p}_{\omega}}:m\geq 1\}<\infty\) and \(\sup\{\|T_{m}\|_{A^{p}_{\omega}\to A^{p}_{\omega}}:m\geq 1\}<\infty\), where \(R_{m}=I-T_{m}\). Proof.: It follows from Lemmas 4.1 and 4.2 that \(T_{m}\) is uniformly bounded on \(H^{p}\) for \(1<p<\infty\). Thus, there exists a constant \(C>0\) such that \[\frac{1}{2\pi}\int_{0}^{2\pi}|T_{m}f(re^{i\theta})|^{p}d\theta\leq C\frac{1}{2 \pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta,\] for \(p>1\) and any \(m\geq 1\). Applying polar coordinates, we see that \[\|T_{m}f\|_{A^{p}_{\omega}}^{p}\leq C\int_{0}^{1}\omega(r)rdr\int_{0}^{2\pi}| f(re^{i\theta})|^{p}d\theta\leq C\|f\|_{A^{p}_{\omega}}^{p}.\] Therefore \(\|T_{m}\|_{A^{p}_{\omega}\to A^{p}_{\omega}}\leq C\) for any \(m\geq 1\). By Lemma 4.1, we obtain that \(\|T_{m}f-f\|_{A^{p}_{\omega}}\to 0\) as \(m\to\infty\). Since \(R_{m}=I-T_{m}\), we have \[\|R_{m}\|_{A^{p}_{\omega}\to A^{p}_{\omega}}=\|I-T_{m}\|_{A^{p}_{\omega}\to A^{ p}_{\omega}}\leq 1+\|T_{m}\|_{A^{p}_{\omega}\to A^{p}_{\omega}}\leq 1+C.\] **Lemma 4.4**.: Suppose that \(\omega\in\hat{\mathcal{D}}\) and \(1<p<\infty\).
Let \(\varepsilon>0\) and \(r\in(0,1)\). Then there exists \(m_{0}\in\mathbb{N}\) such that, for any \(m\geq m_{0}\), \[|R_{m}f(z)|\lesssim\varepsilon\|f\|_{A_{\omega}^{p}}, \tag{4.1}\] for every \(z\in D_{r}=\{z\in\mathbb{D},|z|\leq r\}\) and each \(f\in A_{\omega}^{p}\). Proof.: Let \(\omega_{n}=\int_{0}^{1}r^{n}\omega(r)dr\). By [25, p. 665], we see that \[B_{z}^{\omega}(\xi)=\sum_{n=0}^{\infty}\frac{(\xi\overline{z})^{n}}{2\omega_{ 2n+1}}\] is the reproducing kernel of \(A_{\omega}^{p}\) for \(p\geq 1\). Then, we have \[\begin{split}|R_{m}f(z)|=|\langle R_{m}f,B_{z}^{\omega}\rangle|= |\langle f,R_{m}B_{z}^{\omega}\rangle|&\leq\int_{\mathbb{D}}|f( w)\overline{R_{m}B_{z}^{\omega}(w)}|\omega(w)dA(w)\\ &\lesssim\|f\|_{A_{\omega}^{p}}\|R_{m}B_{z}^{\omega}\|_{A_{ \omega}^{q}},\end{split} \tag{4.2}\] where \(\frac{1}{p}+\frac{1}{q}=1\). For \(z\in D_{r}\), we show that \[\|R_{m}B_{z}^{\omega}\|_{A_{\omega}^{q}}=\left(\int_{\mathbb{D}}|R_{m}B_{z}^{ \omega}(w)|^{q}\omega(w)dA(w)\right)^{\frac{1}{q}}\lesssim\sum_{k=m}^{\infty} \frac{r^{k}}{2\omega_{2k+1}}.\] By [25, Lemma 6], we deduce that \[\lim_{m\to\infty}\sum_{k=m}^{\infty}\frac{r^{k}}{2\omega_{2k+1}}=0. \tag{4.3}\] Therefore, for any \(\varepsilon>0\), there exists \(m_{0}\in\mathbb{N}\) such that, for all \(m\geq m_{0}\), \[\|R_{m}B_{z}^{\omega}\|_{A_{\omega}^{q}}\leq\varepsilon.\] By (4.2), we get \(|R_{m}f(z)|\lesssim\varepsilon\|f\|_{A_{\omega}^{p}}\) for any \(f\in A_{\omega}^{p}\). For \(p=1\), let \(\mathcal{T}_{m}f(z)=\sum_{k=0}^{m-1}(1-\frac{k}{m})a_{k}z^{k}\) and \(\mathcal{R}_{m}=I-\mathcal{T}_{m}\). We get the following lemma. **Lemma 4.5**.: Let \(\omega\in\hat{\mathcal{D}}\). Then \(\|\mathcal{T}_{m}f-f\|_{A_{\omega}^{1}}\to 0\) as \(m\to\infty\) for each \(f\in A_{\omega}^{1}\). Moreover, \(\sup\{\|\mathcal{T}_{m}\|_{A_{\omega}^{1}\to A_{\omega}^{1}}:m\geq 1\}<\infty\) and \(\sup\{\|\mathcal{R}_{m}\|_{A_{\omega}^{1}\to A_{\omega}^{1}}:m\geq 1\}<\infty\), where \(\mathcal{R}_{m}=I-\mathcal{T}_{m}\). Proof.: By [4, p. 196], \(\|\mathcal{T}_{m}\|_{H^{1}\to H^{1}}\leq 1\). Arguing as in the proof of Lemma 4.3, we obtain that \(\|\mathcal{T}_{m}\|_{A_{\omega}^{1}\to A_{\omega}^{1}}\leq C\) for any \(m\geq 1\). It follows that \[\|\mathcal{R}_{m}\|_{A_{\omega}^{1}\to A_{\omega}^{1}}=\|I-\mathcal{T}_{m}\|_ {A_{\omega}^{1}\to A_{\omega}^{1}}\leq 1+\|\mathcal{T}_{m}\|_{A_{\omega}^{1} \to A_{\omega}^{1}}<1+C \tag{4.4}\] for any \(m\geq 1\), where \(C\) is a positive constant. **Lemma 4.6**.: Assume that \(\omega\in\hat{\mathcal{D}}\). Let \(\varepsilon>0\) and \(r\in(0,1)\). Then there exists \(m_{0}\in\mathbb{N}\) such that, for any \(m\geq m_{0}\), \[|\mathcal{R}_{m}f(w)|\lesssim\varepsilon\|f\|_{A_{\omega}^{1}}, \tag{4.5}\] for every \(w\in D_{r}=\{w\in\mathbb{D},|w|\leq r\}\) and each \(f\in A_{\omega}^{1}\). Proof.: By the proof of Lemma 4.4, we deduce that \[|\mathcal{R}_{m}f(w)|=|\langle\mathcal{R}_{m}f,B^{\omega}_{w}\rangle|=|\langle f, \mathcal{R}_{m}B^{\omega}_{w}\rangle|\lesssim\|f\|_{A^{1}_{\omega}}\|\mathcal{R }_{m}B^{\omega}_{w}\|_{H^{\infty}}.\] Taking \(|w|\leq r\), we can prove that \[\|\mathcal{R}_{m}B^{\omega}_{w}\|_{H^{\infty}}=\sup_{\xi\in\mathbb{D}}| \mathcal{R}_{m}B^{\omega}_{w}(\xi)|=\sup_{\xi\in\mathbb{D}}|(I-\mathcal{T}_{m })B^{\omega}_{w}(\xi)|\leq\frac{1}{m}\sum_{k=1}^{\infty}\frac{kr^{k-1}}{2 \omega_{2k+1}}+\sum_{k=m}^{\infty}\frac{r^{k}}{2\omega_{2k+1}}.\] By [25, Lemma 6], we see that \(\sum_{k=1}^{\infty}\frac{kr^{k-1}}{2\omega_{2k+1}}\) is convergent and (4.3) holds.
Therefore, for any \(\varepsilon>0\), there exists \(m_{0}\in\mathbb{N}\) such that, for all \(m\geq m_{0}\), \[\|\mathcal{R}_{m}B^{\omega}_{w}\|_{H^{\infty}}\leq\varepsilon.\] Thus \(|\mathcal{R}_{m}f(w)|\lesssim\varepsilon\|f\|_{A^{1}_{\omega}}\) for any \(f\in A^{1}_{\omega}\). The following lemma is very useful to prove the compactness of composition operators and its generalizations on some function spaces. **Lemma 4.7**.: [13, Lemma 2.2] Suppose \(0<p,\,q<\infty\), \(\omega,\nu\in\mathcal{D}\). Suppose \(u\in H(\mathbb{D})\) and \(n\in\mathbb{N}\bigcup\{0\}\). Let \(\varphi\) be an analytic self-map of \(\mathbb{D}\) such that \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is bounded. Then \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is compact if and only if whenever \(\{f_{k}\}\) is bounded in \(A^{p}_{\omega}\) and \(f_{k}\to 0\) uniformly on compact subsets of \(\mathbb{D}\) as \(k\to\infty\), \(\lim_{k\to\infty}\|D^{n}_{\varphi,u}(f_{k})\|_{A^{q}_{\nu}}=0\). **Theorem 4.1**.: Let \(1\leq p\leq q<\infty\) and \(\omega,\nu\in\mathcal{D}\). Suppose \(n\in\mathbb{N}\bigcup\{0\}\). Let \(\varphi\) be an analytic self-map of \(\mathbb{D}\) and \(u\in A^{q}_{\nu}\). If \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is bounded, then there exists a large enough \(\delta=\delta(\omega,p)>0\) such that \[\|D^{n}_{\varphi,u}\|^{q}_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}}\asymp\limsup_{|a |\to 1}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}|u(\xi)|^{q}\nu(\xi)}{|1- \overline{a}\varphi(\xi)|^{(\delta+n)q}(\omega(S(a)))^{q/p}}dA(\xi). \tag{4.6}\] Proof.: **Lower estimate**. Let \(f_{a}(z)=\left(\frac{1-|a|}{1-\overline{a}z}\right)^{\delta}\omega(S(a))^{-1/p}\) for some large enough \(\delta=\delta(\omega,p)>0\). Then \(\{f_{a}\}\) is a bounded sequence in \(A^{p}_{\omega}\) converging to zero uniformly on compact subsets of \(\mathbb{D}\) as \(|a|\to 1\). Fix a compact operator \(K:A^{p}_{\omega}\to A^{q}_{\nu}\). By Lemma 4.7, we know that \(\|Kf_{a}\|_{A^{q}_{\nu}}\to 0\) as \(|a|\to 1\). Therefore \[\|D^{n}_{\varphi,u}-K\|_{A^{p}_{\omega}\to A^{q}_{\nu}} \gtrsim\limsup_{|a|\to 1}\|(D^{n}_{\varphi,u}-K)f_{a}\|_{A^{q}_{\nu}}\] \[\geq\limsup_{|a|\to 1}\left(\|D^{n}_{\varphi,u}f_{a}\|_{A^{q}_{ \nu}}-\|Kf_{a}\|_{A^{q}_{\nu}}\right)\] \[=\limsup_{|a|\to 1}\|D^{n}_{\varphi,u}f_{a}\|_{A^{q}_{\nu}}.\] Moreover, we have \[\|D^{n}_{\varphi,u}\|_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}} =\inf_{K}\|D^{n}_{\varphi,u}-K\|_{A^{p}_{\omega}\to A^{q}_{\nu}} \gtrsim\limsup_{|a|\to 1}\|D^{n}_{\varphi,u}f_{a}\|_{A^{q}_{\nu}}\] \[\asymp\limsup_{|a|\to 1}\left(\int_{\mathbb{D}}\frac{|a|^{nq}(1-|a|)^{ \delta q}|u(\xi)|^{q}\nu(\xi)}{|1-\overline{a}\varphi(\xi)|^{(\delta+n)q}( \omega(S(a)))^{q/p}}dA(\xi)\right)^{1/q}.\] We get \[\|D^{n}_{\varphi,u}\|^{q}_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}}\gtrsim\limsup_{|a| \to 1}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}|u(\xi)|^{q}\nu(\xi)}{|1- \overline{a}\varphi(\xi)|^{(\delta+n)q}(\omega(S(a)))^{q/p}}dA(\xi). \tag{4.7}\] **Upper estimate**. The case \(1<p\leq q<\infty\). Consider the compact operator \(T_{m}:A^{p}_{\omega}\to A^{q}_{\nu}\) given by \(T_{m}f=\sum_{k=0}^{m-1}a_{k}z^{k}\) and let \(R_{m}=I-T_{m}\), where \(I\) is the identity operator.
We can see that \[\|D^{n}_{\varphi,u}\|_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}}\leq\|D^{n}_{ \varphi,u}\circ R_{m}\|_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}}+\|D^{n}_{\varphi,u }\circ T_{m}\|_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}}=\|D^{n}_{\varphi,u}\circ R _{m}\|_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}}.\] Thus \[\|D^{n}_{\varphi,u}\|^{q}_{e,\ A^{p}_{\omega}\to A^{q}_{\nu}}\leq\liminf_{m \to\infty}\|D^{n}_{\varphi,u}\circ R_{m}\|^{q}_{e,\ A^{p}_{\omega}\to A^{q}_{ \nu}}\leq\liminf_{m\to\infty}\|D^{n}_{\varphi,u}\circ R_{m}\|^{q}_{A^{p}_{ \omega}\to A^{q}_{\nu}}. \tag{4.8}\] Fix \(f\in A^{p}_{\omega}\) with \(\|f\|_{A^{p}_{\omega}}\leq 1\) and \(r\in(0,1)\). Suppose \(D_{r}=\{z\in\mathbb{D},\ |z|\leq r\}\). Then \[\|(D^{n}_{\varphi,u}\circ R_{m})f\|^{q}_{A^{q}_{\nu}} \leq\int_{\mathbb{D}}|(R_{m}f)^{(n)}(\varphi(\xi))|^{q}|u(\xi)|^{q} \nu(\xi)dA(\xi)\] \[=\int_{\mathbb{D}}|(R_{m}f)^{(n)}(z)|^{q}d\mu^{\nu}_{\varphi,\ u}( z),\] where \(\mu^{\nu}_{\varphi,\ u}(E)=\int_{\varphi^{-1}(E)}|u(z)|^{q}\nu(z)dA(z)\) for all Borel subsets \(E\) of \(\mathbb{D}\). From Lemma 2.1, we have \[|f(z)|^{q-p}\lesssim\frac{\|f\|^{q-p}_{A^{p}_{\omega}}}{(\omega(S(z)))^{(q-p) /p}}. \tag{4.9}\] By Lemma 2.4 and (4.9), we obtain \[\int_{\mathbb{D}}|(R_{m}f)^{(n)}(z)|^{q}d\mu^{\nu}_{\varphi,\ u} (z)\] \[\leq\int_{\mathbb{D}}d\mu^{\nu}_{\varphi,\ u}(z)\frac{C}{\omega( S(z))}\int_{\Delta(z,r)}\frac{|R_{m}f(w)|^{q}}{(1-|w|)^{nq}}\widetilde{\omega}(w) dA(w)\] \[\asymp\int_{\mathbb{D}}d\mu^{\nu}_{\varphi,\ u}(z)\int_{\Delta(z, r)}\frac{|R_{m}f(w)|^{q-p+p}}{(1-|w|)^{nq}\omega(S(w))}\widetilde{\omega}(w) dA(w)\] \[\lesssim \|R_{m}f\|^{q-p}_{A^{p}_{\omega}}\int_{\mathbb{D}}d\mu^{\nu}_{ \varphi,\ u}(z)\int_{\Delta(z,r)}\frac{|R_{m}f(w)|^{p}}{(1-|w|)^{nq}(\omega(S( w)))^{q/p}}\widetilde{\omega}(w)dA(w)\] \[= \|R_{m}f\|^{q-p}_{A^{p}_{\omega}}\int_{\mathbb{D}}d\mu^{\nu}_{ \varphi,\ u}(z)\int_{\mathbb{D}}\frac{\chi_{\Delta(z,r)}(w)|R_{m}f(w)|^{p}}{(1 -|w|)^{nq}(\omega(S(w)))^{q/p}}\widetilde{\omega}(w)dA(w), \tag{4.10}\] where \(\chi_{\Delta(z,r)}\) is the characteristic function of the set \(\Delta(z,r)\). Obviously, \(\chi_{\Delta(z,r)}(w)=\chi_{\Delta(w,r)}(z)\). By Fubini's Theorem, we obtain \[\int_{\mathbb{D}}|(R_{m}f)^{(n)}(z)|^{q}d\mu^{\nu}_{\varphi,\ u}(z) \tag{4.11}\] \[\lesssim \|R_{m}f\|^{q-p}_{A^{p}_{\omega}}\int_{\mathbb{D}}\frac{\mu^{\nu }_{\varphi,\ u}(\Delta(w,r))}{(1-|w|)^{nq}(\omega(S(w)))^{q/p}}|R_{m}f(w)|^{p} \widetilde{\omega}(w)dA(w).\] Set \[J_{1,\ m}=\int_{D_{r}}\frac{\mu^{\nu}_{\varphi,\ u}(\Delta(w,r))}{(1-|w|)^{nq} (\omega(S(w)))^{q/p}}|R_{m}f(w)|^{p}\widetilde{\omega}(w)dA(w)\] and \[J_{2,\ m}=\int_{\mathbb{D}\setminus D_{r}}\frac{\mu^{\nu}_{\varphi,\ u}( \Delta(w,r))}{(1-|w|)^{nq}(\omega(S(w)))^{q/p}}|R_{m}f(w)|^{p}\widetilde{ \omega}(w)dA(w).\] Then we get \[\|(D^{n}_{\varphi,u}\circ R_{m})f\|^{q}_{A^{q}_{\nu}}\lesssim\|R_{m}f\|^{q-p}_{A^{p}_{\omega}}(J_{1,\ m}+J_{2,\ m}), \tag{4.12}\] with \(m\geq 1\). Since \(D^{n}_{\varphi,u}:A^{p}_{\omega}\to A^{q}_{\nu}\) is bounded, Lemma 2.5 implies that there exists a large enough \(\delta=\delta(\omega,p)>0\) such that \[M =\sup_{a\in\mathbb{D}}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}|u (w)|^{q}\nu(w)}{|1-\overline{a}\varphi(w)|^{(\delta+n)q}(\omega(S(a)))^{q/p} }dA(w)\] \[=\sup_{a\in\mathbb{D}}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}}{ |1-\overline{a}\xi|^{(\delta+n)q}(\omega(S(a)))^{q/p}}d\mu^{\nu}_{\varphi,\ u}(\xi)\] \[<\infty.
\tag{4.13}\] For \(\xi\in\Delta(w,r)\), we have \[\frac{\mu^{\nu}_{\varphi,\ u}(\Delta(w,\ r))}{(1-|w|)^{nq}(\omega (S(w)))^{q/p}} =\int_{\Delta(w,r)}\frac{1}{(1-|w|)^{nq}(\omega(S(w)))^{q/p}}d\mu^ {\nu}_{\varphi,\ u}(\xi)\] \[\asymp\int_{\Delta(w,r)}\frac{(1-|w|)^{\delta q}}{|1-\overline{w} \xi|^{(\delta+n)q}(\omega(S(w)))^{q/p}}d\mu^{\nu}_{\varphi,\ u}(\xi)\] \[\lesssim\int_{\mathbb{D}}\frac{(1-|w|)^{\delta q}}{|1-\overline{ w}\xi|^{(\delta+n)q}(\omega(S(w)))^{q/p}}d\mu^{\nu}_{\varphi,\ u}(\xi). \tag{4.14}\] Fix \(\varepsilon>0\). By (4.13), (4.14) and Lemma 4.4, we get \[J_{1,\ m} =\int_{D_{r}}\frac{\mu^{\nu}_{\varphi,\ u}(\Delta(w,r))}{(1-|w|)^ {nq}(\omega(S(w)))^{q/p}}|R_{m}f(w)|^{p}\widetilde{\omega}(w)dA(w)\] \[\leq\sup_{w\in\mathbb{D}}\frac{\mu^{\nu}_{\varphi,\ u}(\Delta(w,r ))}{(1-|w|)^{nq}(\omega(S(w)))^{q/p}}\int_{D_{r}}|R_{m}f(w)|^{p}\widetilde{ \omega}(w)dA(w)\] \[\leq CM\int_{D_{r}}|R_{m}f(w)|^{p}\widetilde{\omega}(w)dA(w)\] \[\lesssim CM\varepsilon^{p}\|f\|^{p}_{A^{p}_{\omega}},\] for any \(m\geq m_{0}\). Thus, \[\lim_{m\to\infty}\sup_{\|f\|_{A^{p}_{\omega}}\leq 1}\|R_{m}f\|_{A^{p}_{\omega}}^{q -p}J_{1,\;m}=0. \tag{4.15}\] For \(\omega\in\mathcal{D}\) and \(f\in H(\mathbb{D})\), from [25, Proposition 5], we know that \[\|f\|_{A^{p}_{\omega}}\asymp\|f\|_{A^{p}_{\widetilde{\omega}}}. \tag{4.16}\] By (4.14), (4.16) and Lemma 4.3, we claim that \[J_{2,\;m} =\int_{\mathbb{D}\setminus D_{r}}\frac{\mu^{\nu}_{\varphi,\;u}( \Delta(w,r))}{(1-|w|)^{nq}(\omega(S(w)))^{q/p}}|R_{m}f(w)|^{p}\widetilde{ \omega}(w)dA(w)\] \[\leq\sup_{|a|>r}\frac{\mu^{\nu}_{\varphi,\;u}(\Delta(a,r))}{(1-|a |)^{nq}(\omega(S(a)))^{q/p}}\int_{\mathbb{D}\setminus D_{r}}|R_{m}f(w)|^{p} \widetilde{\omega}(w)dA(w)\] \[\lesssim\sup_{m\geq 1}\|R_{m}\|_{A^{p}_{\omega}\to A^{p}_{ \omega}}^{p}\|f\|_{A^{p}_{\omega}}^{p}\sup_{|a|>r}\int_{\mathbb{D}}\frac{(1-| a|)^{\delta q}}{|1-\overline{a}\xi|^{(\delta+n)q}(\omega(S(a)))^{q/p}}d\mu^{ \nu}_{\varphi,\;u}(\xi).\] Hence, \[\lim_{m\to\infty}\sup_{\|f\|_{A^{p}_{\omega}}\leq 1}\|R_{m}f\|_{A^{p}_{\omega}}^{ q-p}J_{2,\;m}\lesssim\sup_{|a|>r}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}}{|1- \overline{a}\xi|^{(\delta+n)q}(\omega(S(a)))^{q/p}}d\mu^{\nu}_{\varphi,\;u}( \xi). \tag{4.17}\] Combining (4.8), (4.12), (4.15), (4.17) and (4.13), we deduce that \[\|D^{n}_{\varphi,u}\|_{e,\;A^{p}_{\omega}\to A^{q}_{\nu}}^{q} \lesssim\liminf_{m\to\infty}\|D^{n}_{\varphi,u}\circ R_{m}\|_{A^{p}_{\omega}\to A ^{q}_{\nu}}^{q}\] \[\lesssim\sup_{|a|>r}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}|u(w )|^{q}\nu(w)}{|1-\overline{a}\varphi(w)|^{(\delta+n)q}\omega(S(a))^{q/p}}dA(w).\] Letting \(r\to 1\), we have \[\|D^{n}_{\varphi,u}\|_{e,\;A^{p}_{\omega}\to A^{q}_{\nu}}^{q} \lesssim\limsup_{|a|\to 1}\int_{\mathbb{D}}\frac{(1-|a|)^{\delta q}|u(w)|^{q} \nu(w)}{|1-\overline{a}\varphi(w)|^{(\delta+n)q}\omega(S(a))^{q/p}}dA(w). \tag{4.18}\] When \(1=p\leq q<\infty\), by Lemmas 4.5 and 4.6, the same argument shows that (4.18) holds. We omit the details. The proof of Theorem 4.1 is complete. ### Data Availability All data generated or analyzed during this study are included in this article and in its bibliography. ### Conflict of Interest The author declares that there is no conflict of interest. **Acknowledgements.** The author is extremely thankful to Professor Hasi Wulan for his kind suggestions. The research was supported by National Natural Science Foundation of China (Nos.11720101003 and 12171299) and Guangdong Basic and Applied Basic Research Foundation (No.2022A151012117).
The author would like to thank the anonymous referee for his/her careful reading of the manuscript and valuable comments.
2309.08597
Charge pumping with strong spin-orbit coupling: Fermi surface breathing, Berry curvature, and higher harmonic generation
Spin and charge pumping induced by a precessing magnetization has been instrumental to the development of spintronics. Nonetheless, most theoretical studies so far treat the spin-orbit coupling as a perturbation, which disregards the competition between exchange and spin-orbit fields. In this work, based on Keldysh formalism and Wigner expansion, we develop an adiabatic theory of spin and charge pumping adapted to systems with arbitrary spin-orbit coupling. We apply this theory to the magnetic Rashba gas and magnetic graphene cases and discuss the pumped ac and dc current. We show that the pumped current possesses both intrinsic (Berry curvature-driven) and extrinsic (Fermi surface breathing-driven) contributions, akin to magnetic damping. In addition, we find that higher harmonics can be generated under large-angle precession and we propose a couple of experimental setups where such an effect can be experimentally observed.
A. Manchon, A. Pezo
2023-09-15T17:52:55Z
http://arxiv.org/abs/2309.08597v2
# Charge pumping with strong spin-orbit coupling: Fermi surface breathing, Berry curvature, and higher harmonic generation ###### Abstract Spin and charge pumping induced by a precessing magnetization has been instrumental to the development of spintronics. Nonetheless, most theoretical studies so far treat the spin-orbit coupling as a perturbation, which disregards the dynamical competition between exchange and spin-orbit fields. In this work, based on Keldysh formalism and Wigner expansion, we develop an adiabatic theory of spin and charge pumping adapted to multiorbital systems with arbitrary spin-orbit coupling. We apply this theory to the magnetic Rashba gas and magnetic graphene cases and discuss the pumped ac and dc current. We show that the pumped current possesses both intrinsic and extrinsic contributions, akin to magnetic damping. In addition, we find that higher harmonics can be generated under large-angle precession and we propose a couple of experimental setups where such an effect could be experimentally observed. ## I Introduction Adiabatic spin pumping [1; 2] has been instrumental to the development of spintronics over the past two decades. It is now routinely used to inject pure spin currents from a magnetic spin source into an adjacent metal, enabling the investigation of spin-to-charge interconversion processes in a wide range of materials, from transition metal compounds [3] to two-dimensional gases [4], oxide heterostructures [5], topological surface states [6; 7], van der Waals heterostructures (see for instance Ref. [8]), but also other magnetic materials such as antiferromagnets [9] and spin liquids [10]. Although the magnetic spin source is usually a ferromagnet excited at magnetic resonance, the recent demonstration of spin-to-charge interconversion using antiferromagnetic resonance [11; 12] opens appealing avenues for the generation of very high frequency currents via spin pumping. In the standard theory of spin pumping [1; 2], the spin current induced by the precessing magnetization \(\mathbf{m}\) and injected in the adjacent metal reads \[\mathcal{J}_{\mathrm{s}}=\eta_{r}\mathbf{m}\times\partial_{t}\mathbf{m}+\eta_ {i}\partial_{t}\mathbf{m}, \tag{1}\] where \(\eta_{r,i}\) are coefficients related to the spin mixing conductance at the interface between the magnet and the nonmagnetic metal, and to the spin relaxation in the metal [13; 14]. When spin-orbit coupling is present in the metal, the spin current is converted into a charge current that takes the general form \[\mathbf{J}_{c}=\alpha_{H}\eta_{r}\mathbf{z}\times(\mathbf{m}\times\partial_{t }\mathbf{m})+\alpha_{H}\eta_{i}\mathbf{z}\times\partial_{t}\mathbf{m}, \tag{2}\] where \(\mathbf{z}\) is normal to the interface and \(\alpha_{H}\) is the spin-to-charge conversion efficiency, proportional to the spin-orbit coupling strength, and whose specific structure depends on the involved mechanism (spin Hall effect, Rashba-Edelstein effect, possibly spin swapping, etc.). Equation (2) is widely used to interpret experimental data and quantify physical parameters such as the spin-mixing conductance itself, the spin-to-charge conversion efficiency and the spin relaxation length [15; 16; 17; 18]. Notice that, to date, the vast majority of experiments have focused on the time-averaged, rectified part of the pumped charge current \(\mathbf{J}_{\mathrm{c}}|_{\mathrm{dc}}=\alpha_{H}\eta_{r}\langle\mathbf{z} \times(\mathbf{m}\times\partial_{t}\mathbf{m})\rangle\), and only a handful of them have managed to measure the ac contribution [19; 20; 21].
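As a quick illustration of the rectified component, take the precession geometry adopted in Sec. III below, \(\mathbf{m}=(\cos\theta,\sin\theta\sin\omega t,-\sin\theta\cos\omega t)\), with cone angle \(\theta\). A direct computation gives \[\langle\mathbf{m}\times\partial_{t}\mathbf{m}\rangle=\omega\sin^{2}\theta\, \mathbf{x},\qquad\langle\partial_{t}\mathbf{m}\rangle=0,\] so that Eq. (2) yields \[\mathbf{J}_{\mathrm{c}}|_{\mathrm{dc}}=\alpha_{H}\eta_{r}\,\omega\sin^{2} \theta\,(\mathbf{z}\times\mathbf{x})=\alpha_{H}\eta_{r}\,\omega\sin^{2}\theta \,\mathbf{y},\] i.e., the dc signal scales as \(\sin^{2}\theta\) and vanishes for small cone angles.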
An important shortcoming of the standard theory of spin pumping is that it formally applies in the presence of vanishingly small spin-orbit coupling compared to the s-d exchange between conduction and localized electrons (about 1-2 eV in Fe, Co, Ni compounds). In fact, in most experiments the adjacent metal rather possesses a large spin-orbit coupling, i.e., a few hundred meV (heavy metals, topological insulators and Weyl semimetals, to name a few). In other words, the spin mixing conductance approach is not adapted to treat these systems and overlooks the competition between exchange and spin-orbit interactions.

Figure 1: (Color online) (a) Sketch of the nonlinear spin dynamics expected when a strong Rashba spin-orbit coupling coexists with s-d exchange. (b) Band structure of the magnetic Rashba gas for \(\mathbf{m}=\mathbf{y}\) (red) and \(-\mathbf{y}\) (black) for (top) \(\Delta=0.1t\), \(t_{R}=0.1t\) and (bottom) \(\Delta=1t\), \(t_{R}=0.3t\). (c) Corresponding Fermi surface computed at \(E_{F}=-1t\), showing a strong "breathing" effect upon magnetization reorientation. In these calculations, \(t=0.2\).

As a matter of fact, in noncentrosymmetric multilayers interfacial spin-orbit splitting adopts the form of Rashba spin-orbit coupling [22], which substantially modifies the spin dynamics at the interface. As sketched in Fig. 1(a), the time-dependent current \(\mathbf{j}_{c}(t)\) pumped by the precessing magnetization is accompanied by the so-called Rashba field, \(\mathbf{B}_{R}(t)\propto\mathbf{z}\times\mathbf{j}_{c}(t)\), which competes with the s-d exchange to drag the itinerant spin away from the magnetization. Since this Rashba field is itself proportional to the charge current, one naturally expects a massive modification of the spin dynamics in the limit of strong spin-orbit coupling. In a recent paper, Chen and Zhang [23] proposed a Green's function approach to compute the induced spin current in the presence of spin-orbit coupling. Unfortunately, this theory is limited to small-angle precession, such that the spin-orbit dynamics is overlooked. In a recent work [24], we performed time-dependent quantum transport simulations and reported the progressive emergence of higher harmonics upon increasing the strength of Rashba spin-orbit coupling. This result, confirmed by another work [25], clearly advocates for the presence of nonlinear itinerant spin dynamics, but its lack of transparency hinders a precise understanding of the underlying physics. In the present work, we address this problem by adopting a different theoretical approach. Based on Keldysh formalism, we first derive a formula for the charge pumping that is valid in the slow dynamics regime and, most importantly, valid for the full range of spin-orbit coupling and exchange interaction. We then apply this formalism to the paradigmatic magnetic Rashba gas and magnetic graphene, and demonstrate that the harmonic generation is a direct consequence of the Fermi surface breathing. We then suggest several materials systems and configurations in which the harmonic generation could be observed. ## II Adiabatic pumping and Fermi surface breathing ### Keldysh theory of adiabatic pumping Let us start from the Keldysh-Dyson equation [26] in the Wigner representation, i.e., only the macroscopic coordinates (time and position) of the center-of-mass of the wave packet are treated explicitly, while its microscopic internal degrees of freedom are Fourier transformed (see, e.g., Ref. [27]).
In the present theory, we are interested in deriving the response of an observable \(\mathcal{O}\), expressed through the lesser Green's function \(G_{\mathbf{k}}^{<}\), to first order in the magnetization dynamics, \(\partial_{t}\mathcal{H}_{\mathbf{k}}\). The Keldysh-Dyson equations can be rewritten as \[(\varepsilon-\mathcal{H}_{\mathbf{k}}-\Sigma^{R})\otimes G_{ \mathbf{k}}^{R}=1, \tag{3}\] \[G_{\mathbf{k}}^{<}=G_{\mathbf{k}}^{R}\otimes\Sigma^{<}\otimes G _{\mathbf{k}}^{A}, \tag{4}\] where \(\mathcal{H}_{\mathbf{k}}\) is the unperturbed Hamiltonian, \(\otimes=\exp[\frac{i\hbar}{2}(\overleftarrow{\partial}_{\varepsilon}\overrightarrow{\partial}_{t}-\overleftarrow{\partial}_{t}\overrightarrow{\partial}_{\varepsilon})]\) is the Moyal product that emerges from the Wigner transform, and \(\Sigma^{R,<}=n_{i}V_{0}^{2}\int d^{3}\mathbf{k}/(2\pi)^{3}G_{\mathbf{k}}^{R,<}\) is the (retarded, lesser) self-energy in the presence of delta-like impurities with potential \(V_{0}\) and density \(n_{i}\). Expanding these two equations to first order in \(\partial_{t}\mathcal{H}_{\mathbf{k}}\), after some algebra one obtains \[G_{\mathbf{k}}^{<} = G_{\mathbf{k}}^{R}\Sigma^{<}G_{\mathbf{k}}^{A}-\frac{i\hbar}{2 }\left(G_{\mathbf{k}0}^{R}\partial_{t}\mathcal{H}_{\mathbf{k}}\partial_{ \varepsilon}G_{\mathbf{k}0}^{A}-\partial_{\varepsilon}G_{\mathbf{k}0}^{R} \partial_{t}\mathcal{H}_{\mathbf{k}}G_{\mathbf{k}0}^{A}\right), \tag{5}\] \[G_{\mathbf{k}}^{R} = G_{\mathbf{k}0}^{R}-\frac{i\hbar}{2}\left(G_{\mathbf{k}0}^{R} \partial_{t}\mathcal{H}_{\mathbf{k}}\partial_{\varepsilon}G_{\mathbf{k}0}^{R} -\partial_{\varepsilon}G_{\mathbf{k}0}^{R}\partial_{t}\mathcal{H}_{\mathbf{k}}G_{\mathbf{k}0}^{R}\right). \tag{6}\] We then insert Eq. (6) into Eq. (5), and compute the response of an observable \(\mathcal{O}\) as \[\mathcal{O}=\int\frac{d\varepsilon}{2i\pi}\mathrm{Tr}_{\mathbf{k}}[\hat{O}G_{ \mathbf{k}}^{<}]. \tag{7}\] Here, \(\mathrm{Tr}_{\mathbf{k}}[...]=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}\mathrm{ Tr}[...]\). Setting \(\mathcal{O}=\delta\mathcal{O}_{j}\partial_{t}m_{j}\) and \(\partial_{t}\mathcal{H}_{\mathbf{k}}=-\mathbf{T}_{\mathbf{k}}\cdot\partial_{ t}\mathbf{m}\), where \(\mathbf{T}_{\mathbf{k}}=-\partial_{\mathbf{m}}\mathcal{H}_{\mathbf{k}}\) is the torque operator, we obtain two contributions, \[\delta\mathcal{O}_{j}^{\mathrm{surf}}=\hbar\int\frac{d\varepsilon}{2\pi} \mathrm{Re}\mathrm{Tr}_{\mathbf{k}}[\hat{O}(G_{\mathbf{k}0}^{R}-G_{\mathbf{k} 0}^{A})T_{\mathbf{k}}^{j}G_{\mathbf{k}0}^{A}]\partial_{\varepsilon}f(\varepsilon), \tag{8}\] \[\delta\mathcal{O}_{j}^{\mathrm{sea}}=\hbar\int\frac{d\varepsilon}{2\pi} \mathrm{Re}\mathrm{Tr}_{\mathbf{k}}[\hat{O}G_{\mathbf{k}0}^{A}[T_{\mathbf{k}}^ {j},G_{\mathbf{k}0}^{A}]G_{\mathbf{k}0}^{A}]f(\varepsilon). \tag{9}\] These two expressions represent the Fermi surface and Fermi sea contributions to the adiabatic pumping. It is worth pointing out that this theory generalizes the theory of magnetic damping proposed by Gilmore et al. [28], in which intrinsic and extrinsic electronic contributions to the magnetic damping are discussed. In our formalism, the magnetic damping can be computed simply by replacing \(\hat{O}\) by the torque operator, resulting in the torque-torque correlation introduced by Kambersky [29]. These formulas are valid for slow dynamics but, most importantly, remain exact for all values of spin-orbit coupling and exchange, as well as for any direction of the magnetization. The approach is also well adapted to multiband systems and heterostructures and can be used to compute spin, charge and orbital pumping in realistic materials.
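As a sanity check of Eq. (8), the Fermi-surface term can be evaluated by brute force on a momentum grid: at zero temperature, \(\partial_{\varepsilon}f(\varepsilon)=-\delta(\varepsilon-E_{F})\) collapses the energy integral. The sketch below is ours, in dimensionless units with \(\hbar=1\); a constant broadening \(\Gamma\) stands in for the impurity self-energy, and it uses a two-band magnetic Rashba model of the same form as Eq. (17) introduced below. All parameter values are illustrative.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Delta, aR, Gamma, EF = 1.0, 0.3, 0.1, -0.5   # illustrative, dimensionless
m = np.array([0.0, 1.0, 0.0])                # magnetization along y

def H(kx, ky):
    # Free magnetic Rashba Hamiltonian, same form as Eq. (17), hbar = m_e = 1.
    soc = aR * (ky * sx - kx * sy)           # sigma . (k x z)
    return 0.5 * (kx**2 + ky**2) * I2 + Delta * (m[0]*sx + m[1]*sy + m[2]*sz) + soc

def vx(kx, ky):
    return kx * I2 - aR * sy                 # velocity operator dH/dk_x

Ty = -Delta * sy                             # torque operator T^y = -dH/dm_y

ks = np.linspace(-3.0, 3.0, 201)
w2 = (ks[1] - ks[0]) ** 2 / (2 * np.pi) ** 2   # k-space integration weight
resp = 0.0
for kx in ks:
    for ky in ks:
        GR = np.linalg.inv((EF + 1j * Gamma) * I2 - H(kx, ky))
        GA = GR.conj().T
        # Eq. (8) at T = 0: the energy integral picks out eps = E_F.
        resp -= w2 / (2 * np.pi) * np.trace(vx(kx, ky) @ (GR - GA) @ Ty @ GA).real
print("Fermi-surface response of v_x to dm_y/dt:", resp)
```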
### Pumping and Fermi surface breathing

Let us now evaluate the charge current pumped by the slowly precessing magnetization. To do so, we consider a magnetic Rashba gas on a hexagonal lattice, defined by the Hamiltonian [30] \[\mathcal{H}_{0}=\varepsilon_{\mathbf{k}}+\Delta\hat{\mathbf{\sigma}}\cdot\mathbf{m }+\frac{t_{\mathrm{R}}}{a}\mathbf{\eta}_{\mathbf{k}}\cdot(\hat{\mathbf{\sigma}} \times\mathbf{z}). \tag{10}\] The sums in \(\varepsilon_{\mathbf{k}}\) and \(\boldsymbol{\eta}_{\mathbf{k}}\) below run over the nearest neighbors, \(\mathbf{u}=\mathbf{a},\mathbf{b},\mathbf{c}\), where \(\mathbf{a}\) and \(\mathbf{b}\) are the lattice vectors, \(\mathbf{c}=-\mathbf{a}-\mathbf{b}\), and \(a\) is the lattice parameter. Details can be found in Ref. [30]. Explicitly, \(\varepsilon_{\mathbf{k}}=-2t(\cos\mathbf{k}\cdot\mathbf{a}+\cos\mathbf{k} \cdot\mathbf{b}+\cos\mathbf{k}\cdot\mathbf{c})\), and \(\mathbf{\eta}_{\mathbf{k}}=2(\mathbf{a}\sin\mathbf{k}\cdot\mathbf{a}+\mathbf{b} \sin\mathbf{k}\cdot\mathbf{b}+\mathbf{c}\sin\mathbf{k}\cdot\mathbf{c})\). Here, \(t\) is the nearest-neighbor hopping parameter (fixed to \(0.2\) eV in the present work), \(\Delta\) is the s-d exchange between the conduction electrons and the localized ones, and \(t_{R}\) is the linear Rashba spin-orbit coupling coming from inversion symmetry breaking normal to the (**a**, **b**) plane. Because of the coexistence of s-d exchange and spin-orbit coupling, the band structure and Fermi surface of the gas are highly sensitive to the magnetization direction, as shown in Fig. 1(b) and (c), respectively. These figures show the band structure and Fermi surface when the magnetization lies along \(\mathbf{y}\) (red) and \(-\mathbf{y}\) (black) for two different situations, \(\Delta=0.1\) and \(t_{R}=0.1\) (top) and \(\Delta=1\) and \(t_{R}=0.3\) (bottom). One immediately sees that in the adiabatic regime, the high sensitivity of the Fermi surface to the magnetization direction results in a so-called "breathing", i.e., the periodic modulation of the Fermi surface driven by the precessing magnetization [31] (short movies of the breathing can be found in Ref. [32]). During the breathing, states of opposite spin chirality are pumped from one side of the Fermi surface to the other, resulting in a periodic charge current. Notice that this Fermi surface breathing is also at the origin of electron-mediated magnetic damping [33; 28; 34]. Interestingly, whereas the Fermi surface remains mostly circular for the first set of parameters [top panels in Fig. 1(c)], there is a region in parameter space where the distortion is much more dramatic [bottom panels in Fig. 1(c)], manifesting the strong competition between spin-orbit coupling and exchange. As discussed in the next sections, in this particular regime, current harmonics can emerge.
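The breathing itself is easy to reproduce from Eq. (10). The sketch below is ours, with parameters chosen in the strongly distorted regime of the bottom panels of Fig. 1 and the prefactor \(t_{\mathrm{R}}/a\) absorbed by setting \(a=1\); it diagonalizes the \(2\times 2\) Bloch Hamiltonian for \(\mathbf{m}=\pm\mathbf{y}\) and compares the resulting bands.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

t, Delta, tR = 0.2, 1.0, 0.3                    # bottom panels of Fig. 1
a = np.array([1.0, 0.0])                        # lattice vectors, a = 1
b = np.array([-0.5, np.sqrt(3.0) / 2.0])
c = -a - b                                      # c = -a - b

def H(k, m):
    eps = -2 * t * (np.cos(k @ a) + np.cos(k @ b) + np.cos(k @ c))
    eta = 2 * (a * np.sin(k @ a) + b * np.sin(k @ b) + c * np.sin(k @ c))
    # eta . (sigma x z) = eta_x sigma_y - eta_y sigma_x
    return (eps * I2 + Delta * (m[0]*sx + m[1]*sy + m[2]*sz)
            + tR * (eta[0] * sy - eta[1] * sx))

kpath = [np.array([kx, 0.0]) for kx in np.linspace(-np.pi, np.pi, 201)]
for m in (np.array([0.0, 1.0, 0.0]), np.array([0.0, -1.0, 0.0])):
    bands = np.array([np.linalg.eigvalsh(H(k, m)) for k in kpath])
    print("m_y =", m[1], "-> band ranges:",
          [(round(lo, 3), round(hi, 3)) for lo, hi in zip(bands.min(0), bands.max(0))])
```

The two magnetization directions give visibly different bands along \(k_{x}\), so the constant-energy contour at \(E_{F}\) periodically deforms ("breathes") as \(\mathbf{m}\) precesses.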
## III dc and ac charge pumping

### Preliminary insights

Without loss of generality, we consider a magnetization precessing around the \(\mathbf{x}\) axis (see Fig. 2), \(\mathbf{m}=(\cos\theta,\sin\theta\sin\phi,-\sin\theta\cos\phi)\), with \(\phi=\omega t\). In the standard case of a ferromagnet/nonmagnetic metal bilayer, studied in most of the literature [15; 16; 17; 3; 18], the charge pumping is due to the spin Hall effect in the nonmagnetic metal and is computed by combining magnetoelectronic circuit theory with the drift-diffusion equation [1; 2]. In this case, the charge current reads \[\mathbf{J}_{c}\approx\tilde{\alpha}_{N}\frac{\tilde{\lambda}_{N}}{d_{N}}(2 \tilde{G}_{\uparrow\downarrow}^{r}\mathbf{z}\times(\mathbf{m}\times\partial_{ t}\mathbf{m})+2\tilde{G}_{\uparrow\downarrow}^{i}\mathbf{z}\times\partial_{t} \mathbf{m}), \tag{11}\] where \[2\tilde{G}_{\uparrow\downarrow}^{r,i} = \frac{2G_{\uparrow\downarrow}^{r,i}}{1+\frac{2G_{\uparrow \downarrow}\tilde{\lambda}_{N}}{\sigma_{N}}},\,\tilde{\lambda}_{N}=\frac{\lambda_{N}}{\tanh \frac{d_{N}}{\lambda_{N}}}, \tag{12}\] \[\tilde{\alpha}_{N} = \alpha_{N}\left(1-\frac{1}{\cosh\frac{d_{N}}{\lambda_{N}}}\right). \tag{13}\] \(G_{\uparrow\downarrow}^{r,i}\) are the real and imaginary parts of the spin mixing conductance, and \(\alpha_{N}\), \(\lambda_{N}\), \(d_{N}\) and \(\sigma_{N}\) are the spin Hall angle, spin relaxation length, thickness and conductivity of the nonmagnetic metal. For the precessing magnetization adopted in our calculations and considering \(G_{\uparrow\downarrow}^{r}\gg G_{\uparrow\downarrow}^{i}\) [35], \[\mathbf{J}_{c}^{dc} = \omega\tilde{\alpha}_{N}\frac{\tilde{\lambda}_{N}}{d_{N}}2\tilde {G}_{\uparrow\downarrow}^{r}\sin^{2}\theta\,\mathbf{y}, \tag{14}\] \[\mathbf{J}_{c}^{ac} = \omega\tilde{\alpha}_{N}\frac{\tilde{\lambda}_{N}}{d_{N}}\left(2 \tilde{G}_{\uparrow\downarrow}^{r}\frac{\sin 2\theta}{2}\sin\omega t+2\tilde{G}_{ \uparrow\downarrow}^{i}\sin\theta\cos\omega t\right)\mathbf{x}. \tag{15}\] As mentioned in the introduction, this theory gives the well-known result that a dc current is injected transverse to the precession axis, Eq. (14), and an ac current is generated along it, Eq. (15). Let us now turn our attention to the case of a magnetic Rashba gas. Although the scattering matrix formalism used in Refs. [1; 2] is well adapted to current-perpendicular-to-plane geometries, it is not suited to current-in-plane geometries. The charge pumping can instead be readily computed using the semiclassical wave packet formalism developed by Sundaram and Niu [36]. In this approach, the group velocity of state \(n\), \(\mathbf{v}_{n}\), is associated with the time-momentum Berry curvature \(\mathbf{\Omega}_{t\mathbf{k}}^{n}\) and reads \[\mathbf{v}_{n}=\mathbf{\Omega}_{t\mathbf{k}}^{n}=-2\,\text{Im}\left[\langle\partial_{t}u_{n}|\partial_{\mathbf{k}}u_{n}\rangle\right], \tag{16}\] where \(|u_{n}\rangle\) is the periodic part of the Bloch state. In the limit of slow dynamics, the spin remains aligned with the effective field due to exchange and spin-orbit coupling, and because of spin-momentum locking, the wave function acquires a geometrical phase which results in adiabatic charge pumping. The induced _intrinsic_ current is therefore \(\mathbf{J}_{c}^{\text{int}}=-e\sum_{n}\int d^{2}\mathbf{k}/(2\pi)^{2}\mathbf{ \Omega}_{t\mathbf{k}}^{n}f(\varepsilon_{\mathbf{k}}^{n})\). Let us now compute this current in the case of the free magnetic Rashba electron gas, defined by the Hamiltonian \[\mathcal{H}=\frac{\hbar^{2}k^{2}}{2m}+\Delta\hat{\boldsymbol{\sigma}}\cdot \mathbf{m}+\alpha_{\text{R}}\hat{\boldsymbol{\sigma}}\cdot(\mathbf{p}\times \mathbf{z}), \tag{17}\] where \(\alpha_{\text{R}}\) is the Rashba strength. Equation (17) can be derived directly from Eq. (10) by taking \(|\mathbf{k}|\ll\pi/a\).
The eigenstates are \[|+\rangle=\begin{pmatrix}-e^{-i\phi_{k}}\cos\frac{\theta_{k}}{2} \\ \sin\frac{\theta_{k}}{2}\end{pmatrix},\,|-\rangle=\begin{pmatrix}e^{-i\phi_{k}} \sin\frac{\theta_{k}}{2}\\ \cos\frac{\theta_{k}}{2}\end{pmatrix}, \tag{18}\] \[\varepsilon_{\mathbf{k}}^{n}=\frac{\hbar^{2}k^{2}}{2m}+n\lambda_{k },\,n=\pm 1, \tag{19}\] with \[\lambda_{k} = \sqrt{\Delta^{2}+\alpha_{\text{R}}^{2}k^{2}+2\Delta\alpha_{\text {R}}(\mathbf{k}\times\mathbf{z})\cdot\mathbf{m}}, \tag{20}\] \[\cos\theta_{k} = -\frac{\Delta}{\lambda_{k}}\sin\theta\cos\phi,\] (21) \[\tan\phi_{k} = \frac{\Delta\sin\theta\sin\phi-\alpha_{\text{R}}k_{x}}{\Delta\cos \theta+\alpha_{\text{R}}k_{y}}. \tag{22}\] With these definitions, the Berry curvature for band \(n\) reads \[\mathbf{\Omega}_{t\mathbf{k}}^{n}=n(\partial_{\mathbf{k}} \theta_{k}\partial_{t}\phi_{k}-\partial_{\mathbf{k}}\phi_{k}\partial_{t}\theta_{k })\frac{\sin\theta_{k}}{2}. \tag{23}\] After some algebra, we find that the intrinsic pumped current density reads, to the lowest order in the Rashba strength \(\alpha_{\rm R}\), \[\mathbf{j}_{p}=-\frac{e\omega}{\lambda_{\rm R}}\left(\frac{\sin 2\theta}{2} \sin\omega t\,\mathbf{x}+\sin^{2}\theta\,\mathbf{y}\right), \tag{24}\] where \(\lambda_{\rm R}=\hbar/(\alpha_{\rm R}m)\) is the Rashba precession length. Equation (24) shows that a dc current is injected transverse to the precession axis and an ac current is generated along it, which is expected from the standard theory, Eqs. (14)-(15). The very same angular dependence is obtained. We emphasize that this expression is correct in the limit of small spin-orbit coupling and only accounts for the intrinsic contribution to the charge pumping, disregarding the Fermi surface breathing.

### Charge pumping in the magnetic Rashba gas

We now compute the charge current pumped by the precessing magnetization using our theory. With the definition of \(\mathbf{m}\) given in the previous section, the torque operator is \(\mathbf{T}\cdot\partial_{t}\mathbf{m}=\omega\Delta\sin\theta\,\alpha(t)\), with \(\alpha(t)=\sigma_{y}\sin\omega t+\sigma_{z}\cos\omega t\). Therefore, the charge current reads \[\mathbf{J}_{c}^{\rm surf}=-\omega\sin\theta\frac{\Delta}{2\pi}\int d\varepsilon\, \mathrm{Re}\mathrm{Tr}_{\mathbf{k}}[\mathbf{v}(G_{0}^{R}-G_{0}^{A})\alpha(t)G _{0}^{A}]\partial_{\varepsilon}f(\varepsilon), \tag{25}\] \[\mathbf{J}_{c}^{\rm sea}=\omega\frac{\Delta}{2\pi}\sin\theta \int d\varepsilon\,\mathrm{Re}\mathrm{Tr}_{\mathbf{k}}[\mathbf{v}G_{0}^{A}[ \alpha(t),G_{0}^{A}]G_{0}^{A}]f(\varepsilon). \tag{26}\] Here we defined \(\mathrm{Tr}_{\mathbf{k}}=\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}\mathrm{Tr}\). In the limit of vanishing disorder, Eq. (26) is simply the intrinsic current induced by the time-momentum Berry curvature, Eq. (16). Equation (25), though, has not been derived before, and is associated with the Fermi surface breathing, as already mentioned. We start our investigation by adopting the set of parameters \(\Delta=0.1\), \(t_{R}=0.1\), for which the Fermi surface remains mostly circular (see Fig. 1). The pumped current components computed for the Rashba gas are displayed in Fig. 2(b-e), and several remarks are in order. First, the signal of all four current components increases steadily with the cone angle, which is expected.
Second, whereas Eqs. (14)-(15) predict an oscillatory current only along \(\mathbf{x}\), our calculations predict that in the case of the Rashba gas an oscillatory current also develops along \(\mathbf{y}\), dominated by the extrinsic contribution, i.e., the Fermi surface breathing [Fig. 2(e)]. This distinct feature can be traced back to the competition between the exchange and the Rashba field, depicted in Fig. 1, which results in a time-dependent modulation of the effective field \(\mathbf{B}_{\rm eff}\) along \(\mathbf{x}\) that produces the oscillatory current along \(\mathbf{y}\). Notice also that the intrinsic current displays a cosine dependence whereas the extrinsic current displays a sine dependence. This phase shift between intrinsic and extrinsic currents is also present in the conventional theory, Eq. (15), where \(J_{c,x}^{\rm ac}\propto\tilde{G}_{\uparrow\downarrow}^{r}\sin\omega t+\tilde{G}_{ \uparrow\downarrow}^{i}\cos\omega t\). Finally, we find that the extrinsic contribution to the pumped current is much larger than the intrinsic one, typically by two orders of magnitude in Fig. 2(b-e). The extrinsic contribution is, by definition, inversely proportional to the disorder broadening \(\Gamma\), which in our model is a tunable parameter, fixed to \(\Gamma=0.1\) eV. Decreasing this parameter would lead to an enhancement of the extrinsic contribution. The intrinsic contribution, on the other hand, is related to the time-momentum Berry curvature, and is therefore expected to be very sensitive to avoided crossing points in the band structure, akin to the anomalous Hall effect (see, for instance, discussions in Ref. [37]). The relative magnitude of the intrinsic and extrinsic contributions is therefore not only band-structure-dependent but also disorder-dependent, which opens particularly appealing avenues for charge pumping engineering in quantum materials, such as magnetic Weyl semimetals.

Figure 2: (Color online) (a) Sketch of the charge pumping configuration. Time-dependence of the intrinsic (b,c) and extrinsic (d,e) currents as a function of the cone angle, \(\theta\in[5^{\circ},65^{\circ}]\). Both \(J_{x}\) (d,e) and \(J_{y}\) (b,c) are displayed, and the parameters are \(\Delta=0.1\), \(t_{R}=0.1\), \(\Gamma=0.1\) and \(E_{F}=-1\).

We now investigate the influence of the relative strength of the Rashba and exchange interactions on the dc (Fig. 3) and ac (Fig. 4) currents. The dc current displays two interesting features. First, the overall magnitude of the pumped current increases with the exchange \(\Delta\), which is expected from linear response theory. Second, and more interestingly, the pumped dc current depends on the Rashba strength \(t_{R}\) in a nontrivial manner. At small Rashba strength, both intrinsic and extrinsic contributions first increase steadily before reaching a maximum. Once the maximum is reached, the magnitude of the pumped dc current decreases, switches sign and increases again. The Rashba strength at which the sign reversal occurs decreases with increasing exchange, which is particularly clear for the intrinsic contribution [Fig. 3(a)]. Finally, at large Rashba strength, the pumped current saturates. This behavior is obviously in stark contrast with the conventional theory of spin pumping and illustrates the complex interplay between exchange, Rashba field and spin-to-charge conversion.

Figure 3: (Color online) Dependence of the (a) intrinsic and (b) extrinsic dc current along \(\mathbf{y}\) as a function of the Rashba and exchange interactions. The parameters are \(\Delta\in[0.1,1]\), \(t_{R}\in[0,1.3]\), \(\Gamma=0.1\), \(E_{F}=-1\) and \(\theta=65^{\circ}\).
To complete the physical picture, Fig. 4 displays the magnitude of the ac homodyne (\(\sim\omega\)) charge current as a function of the strength of exchange and Rashba interactions. We emphasize that we show the _absolute value_ of the ac components. The extrinsic contributions [Fig. 4(c,d)] increase steadily with the Rashba strength and saturate at large spin-orbit interaction, similarly to the dc case. In contrast, the intrinsic contributions [Fig. 4(a,b)] reach a maximum, collapse and saturate at large Rashba strength. Since we only account for the absolute value of the ac current, the collapse observed in Fig. 4(a,b) is associated with a \(\pi\) shift (i.e., a sign change) akin to the one observed for the intrinsic dc current [Fig. 3(a)].

Figure 4: (Color online) Dependence of the ac current along (a,c) \(\mathbf{x}\) and (b,d) \(\mathbf{y}\) as a function of the Rashba and exchange interactions. Both (a,b) intrinsic and (c,d) extrinsic contributions are shown. The parameters are the same as in Fig. 3.

As discussed above, the competition between exchange and spin-orbit coupling leads to a regime of parameters where the Fermi surface breathing induces higher harmonics due to the dramatic distortion of the Fermi surface. Importantly, we find that the harmonics are particularly strong in the intrinsic current contribution and rather negligible in the extrinsic one [compare Fig. 2(c) with Fig. 2(e), for instance]. In the following, we focus on the intrinsic currents. Figure 5 shows such a situation, obtained for \(\Delta=1\), \(E_{F}=-1\), and \(\theta=65^{\circ}\). The current along \(\mathbf{x}\) possesses odd harmonics (\((2n+1)\omega\)) whereas the current along \(\mathbf{y}\) possesses even harmonics (\(2n\omega\)), due to symmetry (only \(J_{y}\) produces a rectified, dc current). Although their magnitude decreases with the harmonic number (under the adiabatic assumption, our theory is based on a perturbative expansion of the magnetization dynamics), we find that in a certain region of Rashba strength, close to the maximum dc current obtained in Fig. 3(a), i.e., \(t_{R}\in[0.3,0.6]\), the first few harmonics are comparable in magnitude. This results in a rather complex time-dependence of the intrinsic signal, as shown in Fig. 5(b,d) for selected cases.

Figure 5: (Color online) Dependence of the current harmonics as a function of the Rashba strength for \(\Delta=1\) for the intrinsic contribution of the current along (a) \(\mathbf{x}\) and (c) \(\mathbf{y}\). The shaded area emphasizes the parameter region where harmonics are sizable. Time-dependence of the intrinsic current along (b) \(\mathbf{x}\) and (d) \(\mathbf{y}\) for \(t_{R}=0.2,\ 0.3,\ 0.4\) and \(0.5\).

### Magnetic graphene

For the sake of completeness, we now consider the case of magnetic graphene, Fig. 6(a), whose band structure has the peculiarity of being highly sensitive to the magnetization direction. The Hamiltonian is obtained by regularizing Hamiltonian (10) on a honeycomb lattice. Setting \(\theta=45^{\circ}\), \(t=1\) eV, and \(t_{R}=0.15\) eV, we obtain the band structures reported in Fig. 6(c,e) for \(\Delta=0.1\) eV and \(\Delta=0.75\) eV, respectively. For a better comparison with the Rashba gas studied above, we set the lattice parameter to \(a=1\) nm. Figures 6(b,d) show the Fermi surface when setting the magnetization along \(\pm\mathbf{x}\) for two different illustrative cases. Because the breathing is governed by the Rashba spin-orbit coupling, we obtain a distortion that is qualitatively similar to the one in Fig. 1, suggesting strong charge pumping.
Figure 6: (Color online) (a) Schematics of spin pumping in magnetic graphene with Rashba spin-orbit coupling. (b,d) Fermi surface for \(\Delta=0.1\) eV (\(\Delta=0.75\) eV) and \(t_{R}=0.15\) eV when setting the magnetization along \(\pm\mathbf{x}\). (c,e) Corresponding band structure when setting the magnetization along \(\mathbf{z}\). The dashed lines correspond to the energy at which the surfaces in (b,d) are taken.

The time-dependence of the intrinsic and extrinsic currents is reported in Fig. 7 for parameters comparable to those of the Rashba gas, i.e., \(t=0.2\) eV, \(\Delta=1.0\) eV, \(E_{F}=-0.5\) eV and \(t_{R}=0.3\) (black), \(0.5\) (red) and \(0.7\) eV (blue). For this set of parameters, the time-dependence of the pumped current is radically different from the conventional one reported in Fig. 2, and substantially deviates from a harmonic response (i.e., \(\cos\omega t\), \(\sin\omega t\)). This behavior is directly associated with the high sensitivity of the band structure to the magnetization, as seen in Fig. 6, and results in high harmonics.

Figure 7: (Color online) Time-dependence of the intrinsic (a,b) and extrinsic (c,d) currents for \(t=0.2\) eV, \(\Delta=1.0\) eV, \(E_{F}=-0.5\) eV and \(t_{R}=0.3\) (black), \(0.5\) (red) and \(0.7\) eV (blue).

Finally, taking a hopping parameter closer to that of graphene, \(t=1\) eV, Fig. 8 shows the dependence of the harmonics of the intrinsic current, obtained for \(\Delta=0.75\) eV, \(E_{F}=-1.15\) eV, and \(\theta=45^{\circ}\). As expected, the current along \(\mathbf{x}\) possesses odd harmonics (\((2n+1)\omega\)) whereas the current along \(\mathbf{y}\) possesses even harmonics (\(2n\omega\)). Upon increasing the Rashba strength, the homodyne component along \(\mathbf{x}\) increases steadily up to a maximum that lies out of the range studied here. The other harmonic components all reach a maximum within the range \(t_{R}\in[0,1]\) eV, suggesting that the second and third harmonics are quite sizable for a reasonable Rashba strength, resulting in the complex time-dependence of the intrinsic signal, see Fig. 8(b,d).

Figure 8: (Color online) Dependence of the current harmonics as a function of the Rashba strength for \(t=1\) eV, \(\Delta=0.75\) eV and \(E_{F}=-1.15\) eV for the intrinsic contribution of the current along (a) \(\mathbf{x}\) and (c) \(\mathbf{y}\). Time-dependence of the intrinsic current along (b) \(\mathbf{x}\) and (d) \(\mathbf{y}\) for \(t_{R}\in[0.1,1]\) eV.

## IV Discussion and Conclusion

The formalism described in the present Article extends the traditional theory of spin and charge pumping [1; 2] by covering the full range of exchange and spin-orbit coupling. It can be readily adapted to address spin, charge, and orbital pumping in multiband heterostructures, and it is particularly well suited to compute adiabatic pumping in realistic heterostructures obtained from density functional theory. Among the important features uncovered by the present theory, we point out the importance of both intrinsic and extrinsic contributions to both the dc and ac currents, akin to the magnetic damping [28], an aspect that is overlooked by the traditional theory of spin pumping. This theoretical framework is instrumental for investigating spin-charge interconversion in strongly spin-orbit coupled systems such as the surfaces of topological heterostructures, involving topological insulators and Weyl semimetals, for instance. Finally, we would like to comment on the adiabatic harmonic generation displayed by Figs. 5 and 8.
In contrast with Ref. [24], which displays tens of harmonics, the adiabatic theory only shows a few of them. The present theory differs from Ref. [24] in several aspects. First, Ref. [24] solves the time-dependent Schrödinger equation in a finite-size ribbon in the Landauer-Büttiker configuration (a conductor connected to two leads). As such, the internal dynamics of the electron spin is computed numerically, and the quantum interference between the electronic modes contributes to the complex time-dependent current. In the present theory, it is assumed that the spin is aligned with the effective field (exchange field + Rashba field, \(\mathbf{B}_{\mathrm{eff}}\) in Fig. 1), so that the internal spin dynamics is neglected. In addition, our model Hamiltonian is not defined in real space but in momentum space. It is therefore unsurprising that several features associated with quantum interference and real-time spin evolution are absent in the present work. Our calculations suggest that these harmonics are mostly present in the intrinsic contribution, associated with the time-momentum Berry curvature. A natural research direction is therefore to identify materials systems where such a Berry curvature is enhanced, e.g., in magnetic Weyl semimetals such as Co\({}_{3}\)Sn\({}_{2}\)S\({}_{2}\) [38; 39] or Mn\({}_{3}\)Sn [40]. In addition, the harmonics computed here are obtained for _large precession angles_, which are not achievable using ferromagnetic resonance techniques. Nonetheless, such large angles can be obtained via current-driven auto-oscillations [41; 42]. This inspires us to propose two devices in Fig. 9, based on (a) spin transfer torque and (b) spin-orbit torque, that can excite such large-angle auto-oscillations. In the presence of strong spin-orbit coupling, large Fermi surface breathing is expected, which could trigger harmonic currents. We acknowledge that the generation of harmonic currents via spin pumping is a daring technical challenge. In fact, to the best of our knowledge, only a handful of experiments have achieved the detection of the homodyne ac current [19; 20; 21]. The difficulty is that not only does one need to get rid of the frequency response of the radiofrequency setup itself, but, in addition, in most experiments the magnetization precession is not circular but rather elliptical, resulting in harmonics of purely magnetic origin [19]. Hence, one needs to ensure that the observed harmonics are of purely electronic nature, i.e., stem from the competition of exchange and spin-orbit coupling, which is certainly an interesting challenge for experiments.

###### Acknowledgements.

A.P. acknowledges support from the ANR ORION project, grant ANR-20-CE30-0022-01 of the French Agence Nationale de la Recherche. A. M. acknowledges support from the Excellence Initiative of Aix-Marseille Université - A*Midex, a French "Investissements d'Avenir" program.
2309.16971
Multi-Resolution Active Learning of Fourier Neural Operators
Fourier Neural Operator (FNO) is a popular operator learning framework. It not only achieves the state-of-the-art performance in many tasks, but also is efficient in training and prediction. However, collecting training data for the FNO can be a costly bottleneck in practice, because it often demands expensive physical simulations. To overcome this problem, we propose Multi-Resolution Active learning of FNO (MRA-FNO), which can dynamically select the input functions and resolutions to lower the data cost as much as possible while optimizing the learning efficiency. Specifically, we propose a probabilistic multi-resolution FNO and use ensemble Monte-Carlo to develop an effective posterior inference algorithm. To conduct active learning, we maximize a utility-cost ratio as the acquisition function to acquire new examples and resolutions at each step. We use moment matching and the matrix determinant lemma to enable tractable, efficient utility computation. Furthermore, we develop a cost annealing framework to avoid over-penalizing high-resolution queries at the early stage. The over-penalization is severe when the cost difference is significant between the resolutions, which renders active learning often stuck at low-resolution queries and inferior performance. Our method overcomes this problem and applies to general multi-fidelity active learning and optimization problems. We have shown the advantage of our method in several benchmark operator learning tasks. The code is available at https://github.com/shib0li/MRA-FNO.
Shibo Li, Xin Yu, Wei Xing, Mike Kirby, Akil Narayan, Shandian Zhe
2023-09-29T04:41:27Z
http://arxiv.org/abs/2309.16971v4
# Multi-Resolution Active Learning of Fourier Neural Operators

###### Abstract

Fourier Neural Operator (FNO) is a popular operator learning framework. It not only achieves state-of-the-art performance in many tasks, but also is highly efficient in training and prediction. However, collecting training data for the FNO can be a costly bottleneck in practice, because it often demands expensive physical simulations. To overcome this problem, we propose Multi-Resolution Active learning of FNO (MRA-FNO), which can dynamically select the input functions and resolutions to lower the data cost as much as possible while optimizing the learning efficiency. Specifically, we propose a probabilistic multi-resolution FNO and use ensemble Monte-Carlo to develop an effective posterior inference algorithm. To conduct active learning, we maximize a utility-cost ratio as the acquisition function to acquire new examples and resolutions at each step. We use moment matching and the matrix determinant lemma to enable tractable, efficient utility computation. Furthermore, we develop a cost annealing framework to avoid over-penalizing high-resolution queries at the early stage. The over-penalization is severe when the cost difference between the resolutions is significant, and it often leaves active learning stuck at low-resolution queries with inferior performance. Our method overcomes this problem and applies to general multi-fidelity active learning and optimization problems. We have shown the advantage of our method in several benchmark operator learning tasks.

## 1 Introduction

Operator learning is emerging as an important topic in scientific machine learning. It aims to estimate function-to-function mappings and can serve as a useful surrogate model for many physical simulation related applications, such as weather forecast (Pathak et al., 2022), control (Bhan et al., 2023), engineering design (Liu et al., 2023) and inverse problems (Kaltenbach et al., 2022). One representative approach is the Fourier neural operator (FNO) (Li et al., 2020d), which uses the fast Fourier transform (FFT) and the convolution theorem to fulfill global linear transforms in the functional space. The FNO not only shows state-of-the-art performance in many tasks, but also is highly efficient in training and prediction. Despite these advantages, collecting training data for the FNO can be a severe bottleneck in practice because it often requires many physical simulations (_e.g._, running numerical solvers), which is known to be computationally expensive. To reduce the cost, one can consider multi-resolution data. Low-resolution data is cheap to obtain (typically computed on coarse meshes), but the output function samples it provides are quite inaccurate (large bias). On the contrary, high-resolution data offers accurate output function samples, yet is much more costly to generate (from dense meshes). Although they differ substantially in quality, the low- and high-resolution examples share the same underlying physics and are strongly correlated. Hence, one can reasonably expect to train the FNO well with multi-resolution data while reducing the data cost. However, blindly collecting examples at different resolutions is hardly optimal in terms of either cost saving or learning efficiency.
To reduce the data cost to the greatest extent while optimizing the learning efficiency, we propose MRA-FNO, a novel multi-resolution active learning method, which at each step dynamically selects the best input function and resolution at which to generate a new example. The major contributions of our work are summarized as follows.

* **Probabilistic Multi-Resolution FNO.** We first extend the FNO to integrate multi-resolution training data. To capture the influence of the resolution choice on the predictive distribution, we append a resolution embedding to the samples of the input function. After the FNO layers, we create two branches: one generates the prediction mean of the target function and the other the variance. In this way, the prediction depends not only on the input function samples but also on the resolution choice. We then use Monte-Carlo ensemble learning (Lakshminarayanan et al., 2017) to fulfill effective uncertainty quantification, which is critical for utility evaluation and active learning.
* **Active Learning.** To optimize the learning efficiency while reducing the data cost as much as possible, we maximize the utility-cost ratio to select the best training input and resolution at each step, where the utility is measured by mutual information. The strategy is similar to the state-of-the-art multi-fidelity active learning and Bayesian optimization methods (Li et al., 2022; Takeno et al., 2020; Li et al., 2020a), but there are two severe challenges. The first challenge is that the computation of the utility function is analytically intractable and costly. We use moment matching to approximate the posterior predictive distribution as a multi-variate Gaussian. We then leverage the structure of the covariance matrix and apply the matrix determinant lemma to fulfill efficient, closed-form mutual information calculation. The second challenge is that directly maximizing the utility-cost ratio, as in previous methods, tends to trap the active learning at low-resolution queries and inferior performance. This is because at the early stage, when the data is scarce, the mutual information measurements for examples at different resolutions are close. High-resolution examples are thereby over-penalized by their large cost. We propose a cost annealing framework, which initially assigns the same cost to every resolution. The cost for each resolution is scheduled to gradually converge to the true cost as data accumulates. Once there is enough data for the mutual information to reflect the true potential of each example, our active learning returns to maximizing the benefit-cost ratio. In this way, our method can flexibly incorporate high-resolution examples at the early stage to enable continuous improvement. Our framework applies to general multi-fidelity learning and optimization problems.
* **Experimental Results.** We evaluated MRA-FNO on four benchmark operator learning tasks, based on the Burgers', Darcy flow, nonlinear diffusion and Navier-Stokes equations. On fixed training datasets, our multi-resolution FNO shows better or very close prediction error compared to the standard FNO. Both the prediction accuracy and the test log likelihood are much higher than when applying other popular Bayesian inference methods, including Monte-Carlo (MC) dropout, stochastic gradient Langevin dynamics and variational inference. This shows that our ensemble inference provides much better uncertainty quantification.
During the course of each active learning experiment, MRA-FNO consistently achieves much better prediction accuracy with the same accumulated data cost, as compared with random queries, core-set active learning, and our framework with dropout inference.

## 2 Background

**Operator Learning.** Suppose our goal is to learn a function-to-function mapping \(\psi:\mathcal{H}\rightarrow\mathcal{Y}\), where \(\mathcal{H}\) and \(\mathcal{Y}\) are two function spaces (_e.g._, Banach spaces). The training dataset comprises pairs of discretized input and output functions, \(\mathcal{D}=\{(\mathbf{f}_{n},\mathbf{y}_{n})\}_{n=1}^{N}\), where each \(\mathbf{f}_{n}\) consists of samples of a function \(f_{n}\in\mathcal{H}\), and \(\mathbf{y}_{n}\) of samples of \(\psi[f_{n}]\in\mathcal{Y}\). All the input and output functions are discretized (sampled) at a set of evenly-spaced locations, _e.g._, a \(64\times 64\) mesh in the 2D spatial domain \([0,1]\times[0,1]\).

**Fourier Neural Operators (FNO).** Given a discretized input function \(\mathbf{f}\), the FNO first applies a feed-forward network (FFN) over each element of \(\mathbf{f}\) and the associated sampling location to lift the input to a higher-dimensional channel space. Then a Fourier layer is used to perform a linear transform and nonlinear activation in the functional space, \[v(\mathbf{x})\leftarrow\sigma\left(\mathcal{W}v(\mathbf{x})+\int\kappa( \mathbf{x}-\mathbf{x}^{\prime})v(\mathbf{x}^{\prime})\mathrm{d}\mathbf{x}^{ \prime}\right),\] where \(v(\mathbf{x})\) on the R.H.S. is the input function to the Fourier layer and on the L.H.S. the output function, \(\kappa(\cdot)\) is the integration kernel and \(\sigma(\cdot)\) is the activation. Based on the convolution theorem, \(\int\kappa(\mathbf{x}-\mathbf{x}^{\prime})v(\mathbf{x}^{\prime})\mathrm{d} \mathbf{x}^{\prime}=\mathcal{F}^{-1}\left[\mathcal{F}[\kappa]\cdot\mathcal{F} [v]\right](\mathbf{x})\), where \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) are the Fourier and inverse Fourier transforms, respectively, the Fourier layer performs a fast Fourier transform (FFT) over \(v\), multiplies it with the discretized kernel in the frequency domain, and then performs the inverse FFT. The local linear transform, \(\mathcal{W}v(\mathbf{x})\), is performed by standard convolution (as in convolutional nets). Due to the usage of the FFT, the computation of the Fourier layer is highly efficient. After several Fourier layers, another FFN is applied channel-wise to project back and make the final prediction. The training is typically done by minimizing an \(L_{2}\) loss, \(\Theta^{*}=\operatorname*{argmin}_{\Theta}\frac{1}{N}\sum_{n=1}^{N}\|\mathbf{ y}_{n}-\psi_{\text{FNO}}(\mathbf{f}_{n};\Theta)\|\), where \(\Theta\) are the model parameters, including the discretized kernel in the frequency domain, the standard convolution parameters in each Fourier layer, and the parameters of the FFNs for channel lifting and projection.
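For concreteness, below is a minimal PyTorch sketch of a 1D Fourier layer implementing the update above, keeping only the lowest `modes` frequencies as in the authors' public implementation referenced in Sec. 6; the class and variable names here are ours.

```python
import torch
import torch.nn as nn

class FourierLayer1d(nn.Module):
    """sigma(W v + IFFT(R . FFT(v))), truncated to the lowest `modes` frequencies."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        # Discretized kernel in the frequency domain (complex weights).
        self.R = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)
        self.W = nn.Conv1d(channels, channels, kernel_size=1)  # local linear term

    def forward(self, v):                          # v: (batch, channels, n_grid)
        v_hat = torch.fft.rfft(v)                  # FFT along the grid dimension
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.modes] = torch.einsum(
            "bix,iox->box", v_hat[..., :self.modes], self.R)
        conv = torch.fft.irfft(out_hat, n=v.size(-1))   # inverse FFT
        return torch.relu(self.W(v) + conv)

v = torch.randn(8, 16, 64)                         # batch of discretized functions
print(FourierLayer1d(16, modes=12)(v).shape)       # torch.Size([8, 16, 64])
```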
## 3 Probabilistic Multi-Resolution FNO

Despite the advantages of the FNO, training data collection can be a severe bottleneck for practical usage, because it typically requires many expensive physical simulations. To reduce the cost, we consider using multi-resolution data, which combines accurate yet expensive high-resolution examples with inaccurate (large bias) yet cheap-to-generate low-resolution examples. We then propose an active learning approach to lower the data cost to the fullest extent while reaching a high learning efficiency.

To this end, we first propose a probabilistic FNO that can effectively integrate multi-resolution training examples and perform posterior inference. Specifically, suppose a multi-resolution dataset is given, \(\mathcal{D}=\{(\mathbf{f}_{n},\mathbf{g}_{n},r_{n})\}_{n=1}^{N}\), where \(r_{n}\) denotes the resolution of the \(n\)-th example. We have \(R\) different resolutions in total (\(1\leq r_{n}\leq R\)). For example, on a 2D spatial domain \([0,1]\times[0,1]\), we might have two resolutions, \(16\times 16\) and \(128\times 128\). To explicitly model the influence of the resolution choice on the prediction, we introduce an embedding \(\mathbf{e}_{r}\) to represent each resolution \(r\in[1,R]\). In our experiment, we set \(\mathbf{e}_{r}\) to a one-hot encoding. We have also tried other embeddings, such as positional encodings (Vaswani et al., 2017); the performance is close. We apply an FFN to every element of \(\mathbf{f}_{n}\), the corresponding sampling location \(\mathbf{x}_{j}\), and the embedding \(\mathbf{e}_{r_{n}}\) to obtain a new representation \(\widehat{\mathbf{f}}_{n}\), with each \[[\widehat{\mathbf{f}}_{n}]_{j}=\text{FFN}([\mathbf{f}_{n}]_{j},\mathbf{x}_{j },\mathbf{e}_{r_{n}}). \tag{1}\] Next, we use standard Fourier layers to perform successive linear and nonlinear transforms in the functional space. Denote by \(\mathbf{v}_{n}\) the output (discretized) function. We then create two branches. One branch applies an FFN in each channel to project \(\mathbf{v}_{n}\) back to the target dimension and output the prediction mean, \(\boldsymbol{\mu}_{\Theta}(\mathbf{f}_{n},\mathbf{e}_{r_{n}})\), where \(\Theta\) denotes the model parameters. The other branch performs a standard convolution and then an FFN to output the prediction variance in the log domain, \(\eta_{\Theta}(\mathbf{f}_{n},\mathbf{e}_{r_{n}})\). We then use a Gaussian likelihood to model the observed (discretized) output function, \[p(\mathbf{g}_{n}|\mathbf{f}_{n},r_{n})=\mathcal{N}\left(\mathbf{g}_{n}| \boldsymbol{\mu}_{\Theta}(\mathbf{f}_{n},\mathbf{e}_{r_{n}}),e^{\eta_{\Theta}( \mathbf{f}_{n},\mathbf{e}_{r_{n}})}\cdot\mathbf{I}\right).\] We can see that both the mean and the variance depend not only on the input \(\mathbf{f}_{n}\) but also on the resolution choice \(r_{n}\). In this way, our model can capture the influence of the resolution choice on the predictive distribution. Our model is illustrated in Appendix Fig. 5. Next, we use Monte-Carlo ensemble learning (Lakshminarayanan et al., 2017)1 to fulfill effective posterior inference. Specifically, we randomly initialize the model parameters \(\Theta\) and maximize the log likelihood to obtain one point estimate via stochastic mini-batch optimization, Footnote 1: we do not introduce adversarial samples as in (Lakshminarayanan et al., 2017); we empirically found such samples to be of little help. \[\Theta^{*}=\operatorname*{argmax}_{\Theta}\ \sum_{n=1}^{N}\log\left[ \mathcal{N}\left(\mathbf{g}_{n}|\boldsymbol{\mu}_{\Theta}(\mathbf{f}_{n}, \mathbf{e}_{r_{n}}),e^{\eta_{\Theta}(\mathbf{f}_{n},\mathbf{e}_{r_{n}})}\mathbf{I} \right)\right].\] We independently repeat this procedure \(M\) times and obtain an ensemble of point estimates of the model parameters, \(\{\Theta_{1}^{*},\ldots,\Theta_{M}^{*}\}\). We then construct a discrete posterior approximation of the model parameters, \(p(\Theta|\mathcal{D})\approx\frac{1}{M}\sum_{m=1}^{M}\delta(\Theta-\Theta_{m}^ {*})\), where \(\delta(\cdot)\) is the Dirac delta measure.
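Below is a sketch of the two-branch output head and the ensemble training loop just described (PyTorch). The names `make_model` and `loader` are placeholders for the model constructor (FFN lifting plus Fourier layers plus this head) and the data pipeline, the assumed model signature `model(f, e)` returns the two branches, and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class ProbHead(nn.Module):
    """Two branches after the Fourier layers: prediction mean and log-variance."""
    def __init__(self, channels: int):
        super().__init__()
        self.mean = nn.Sequential(nn.Conv1d(channels, channels, 1),
                                  nn.GELU(), nn.Conv1d(channels, 1, 1))
        self.logvar = nn.Sequential(nn.Conv1d(channels, channels, 1),
                                    nn.GELU(), nn.Conv1d(channels, 1, 1))

    def forward(self, v):                      # v: output of the Fourier layers
        return self.mean(v), self.logvar(v)

def gaussian_nll(g, mu, logvar):
    # -log N(g | mu, exp(logvar) I), summed over grid points
    return 0.5 * ((g - mu) ** 2 / logvar.exp() + logvar).sum()

def train_ensemble(make_model, loader, M=5, epochs=100):
    """M independent random restarts give the discrete posterior approximation."""
    ensemble = []
    for _ in range(M):
        model = make_model()                   # fresh random initialization
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            for f, e, g in loader:             # inputs, resolution embedding, targets
                mu, logvar = model(f, e)
                loss = gaussian_nll(g, mu, logvar)
                opt.zero_grad(); loss.backward(); opt.step()
        ensemble.append(model)
    return ensemble
```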
Given a test input function \(\mathbf{f}\) and the resolution embedding \(\mathbf{e}\), the predictive distribution of the output function is therefore a Gaussian mixture, \[p(\mathbf{y}(\mathbf{f},\mathbf{e})|\mathcal{D})=\frac{1}{M}\sum_{m=1}^{M} \mathcal{N}\left(\mathbf{y}|\boldsymbol{\mu}_{\Theta_{m}^{*}}(\mathbf{f}, \mathbf{e}),\sigma_{\Theta_{m}^{*}}^{2}(\mathbf{f},\mathbf{e})\cdot\mathbf{I} \right), \tag{2}\] where \(\sigma_{\Theta_{m}^{*}}^{2}(\mathbf{f},\mathbf{e})=e^{\eta_{\Theta_{m}^{*}}( \mathbf{f},\mathbf{e})}\).

## 4 Multi-Resolution Active Learning

Now, we present our multi-resolution active learning algorithm. To optimize the learning efficiency while lowering the data cost as much as possible, at each step we maximize a utility-cost ratio (as the acquisition function) to determine the most valuable input function and its resolution, at which we query a new example. Specifically, we prepare a pool of candidate input functions \(\mathcal{P}\). Denote by \(\lambda_{r}\) the cost of generating the output function at resolution \(r\in[1,R]\); we have \(\lambda_{1}<\ldots<\lambda_{R}\). To measure the value of an example with input function \(h\in\mathcal{P}\) and resolution \(r\), we consider two utility functions. The first one follows (Li et al., 2022b) and quantifies the information the example can bring to the prediction at the highest resolution \(R\), \[u(h,r)=\mathbb{I}(\mathbf{y}(\mathbf{h}^{r},\mathbf{e}_{r}),\mathbf{y}( \mathbf{h}^{R},\mathbf{e}_{R})|\mathcal{D}), \tag{3}\] where \(\mathcal{D}\) is the current training dataset, \(\mathbb{I}(\cdot,\cdot)\) is the mutual information, \(\mathbf{h}^{r}\) and \(\mathbf{h}^{R}\) are the function \(h\) discretized at resolutions \(r\) and \(R\), respectively, and \(\mathbf{e}_{r}\) and \(\mathbf{e}_{R}\) are the corresponding resolution embeddings. The utility function (3) only considers how the example can improve the prediction for the same input function. To model its benefit in improving the prediction for other input functions, we follow (Li et al., 2022a) and consider a second utility function, \(u(h,r)=\mathbb{E}_{p(h^{\prime})}[\mathbb{I}(\mathbf{y}(\mathbf{h}^{r}, \mathbf{e}_{r}),\mathbf{y}(\mathbf{h}^{\prime R},\mathbf{e}_{R})|\mathcal{D})]\), where \(h^{\prime}\in\mathcal{H}\) and \(p(h^{\prime})\) is a distribution over \(\mathcal{H}\). The expectation usually does not have a closed form, and we therefore draw \(A\) functions, \(h^{\prime}_{1},\ldots,h^{\prime}_{A}\sim p(h^{\prime})\), and adopt a Monte-Carlo approximation, \[\widehat{u}(h,r)=\frac{1}{A}\sum_{l=1}^{A}\mathbb{I}(\mathbf{y}( \mathbf{h}^{r},\mathbf{e}_{r}),\mathbf{y}(\mathbf{h}^{\prime R}_{l},\mathbf{e }_{R})|\mathcal{D}). \tag{4}\]

### Efficient Utility Computation

The utility functions in both (3) and (4) demand that we compute the mutual information between a pair of predictions from our model. The computation is challenging in that (1) those predictions are typically high-dimensional (_e.g._, a \(100\times 100\) resolution corresponds to \(10K\)-dimensional outputs), and (2) the mutual information is analytically intractable due to the Gaussian mixture predictive distribution in (2). To address this problem, we observe that for any two predictions \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\), we have \[\mathbb{I}(\mathbf{y}_{1},\mathbf{y}_{2}|\mathcal{D})=\mathbb{H}(\mathbf{y}_{1 }|\mathcal{D})+\mathbb{H}(\mathbf{y}_{2}|\mathcal{D})-\mathbb{H}(\mathbf{y}_{1 },\mathbf{y}_{2}|\mathcal{D}). \tag{5}\]
Denote by \((\mathbf{f}_{1},\mathbf{e}_{1})\) the discretized input function and resolution embedding for \(\mathbf{y}_{1}\), and by \((\mathbf{f}_{2},\mathbf{e}_{2})\) those for \(\mathbf{y}_{2}\). We first use moment matching to approximate the predictive distributions of \(\mathbf{y}_{1}\), \(\mathbf{y}_{2}\) and \(\widehat{\mathbf{y}}=[\mathbf{y}_{1};\mathbf{y}_{2}]\) as multi-variate Gaussian distributions, and we can thereby compute each entropy in closed form. Specifically, let us first consider \(\widehat{\mathbf{y}}\). According to (2), we can derive that \(p(\widehat{\mathbf{y}}|\mathcal{D})=\frac{1}{M}\sum_{m=1}^{M}\mathcal{N}( \widehat{\mathbf{y}}|\boldsymbol{\rho}_{m},\boldsymbol{\Lambda}_{m})\), where \(\boldsymbol{\rho}_{m}=[\boldsymbol{\mu}_{\Theta_{m}^{*}}(\mathbf{f}_{1}, \mathbf{e}_{1});\boldsymbol{\mu}_{\Theta_{m}^{*}}(\mathbf{f}_{2},\mathbf{e}_{ 2})]\) and \(\boldsymbol{\Lambda}_{m}=\operatorname{diag}\left(\sigma_{\Theta_{m}^{*}}^{2}( \mathbf{f}_{1},\mathbf{e}_{1})\cdot\mathbf{I},\sigma_{\Theta_{m}^{*}}^{2}( \mathbf{f}_{2},\mathbf{e}_{2})\cdot\mathbf{I}\right)\). The mean and covariance (first and second moments) are \[\mathbb{E}(\widehat{\mathbf{y}}|\mathcal{D})=\frac{1}{M}\sum_{m=1}^{M }\boldsymbol{\rho}_{m},\] \[\text{cov}(\widehat{\mathbf{y}}|\mathcal{D})=\frac{1}{M}\sum_{m=1} ^{M}\left(\boldsymbol{\Lambda}_{m}+\boldsymbol{\rho}_{m}\boldsymbol{\rho}_{m}^{ \top}\right)-\mathbb{E}(\widehat{\mathbf{y}}|\mathcal{D})\mathbb{E}(\widehat{ \mathbf{y}}|\mathcal{D})^{\top}.\] Via moment matching, we construct a multi-variate Gaussian approximation, \(p(\widehat{\mathbf{y}}|\mathcal{D})\approx\mathcal{N}(\widehat{\mathbf{y}}| \mathbb{E}(\widehat{\mathbf{y}}|\mathcal{D}),\text{cov}(\widehat{\mathbf{y}}| \mathcal{D}))\), which is the best approximation in the exponential family in the sense of Kullback-Leibler divergence (Bishop and Nasrabadi, 2006). Accordingly, the entropy can be computed in closed form, \(\mathbb{H}(\widehat{\mathbf{y}})=\frac{1}{2}\log\text{det}\left[\text{cov}( \widehat{\mathbf{y}}|\mathcal{D})\right]+\text{const}\). However, since \(\widehat{\mathbf{y}}\) is high-dimensional, computing the log determinant of its huge covariance matrix is extremely expensive or even infeasible. To address this problem, we observe that \[\text{cov}(\widehat{\mathbf{y}}|\mathcal{D})=\boldsymbol{\Lambda}+ \frac{1}{M}\sum_{m=1}^{M}\boldsymbol{\rho}_{m}\boldsymbol{\rho}_{m}^{\top}- \mathbb{E}(\widehat{\mathbf{y}}|\mathcal{D})\mathbb{E}(\widehat{\mathbf{y}}| \mathcal{D})^{\top}=\boldsymbol{\Lambda}+\frac{1}{M-1}\sum_{m=1}^{M}\left( \boldsymbol{\rho}_{m}-\mathbb{E}(\widehat{\mathbf{y}}|\mathcal{D})\right) \left(\boldsymbol{\rho}_{m}-\mathbb{E}(\widehat{\mathbf{y}}|\mathcal{D}) \right)^{\top}, \tag{6}\] where \[\boldsymbol{\Lambda}=\text{diag}\left(\frac{1}{M}\sum_{m=1}^{M}\sigma_{\Theta _{m}^{*}}^{2}(\mathbf{f}_{1},\mathbf{e}_{1})\cdot\mathbf{I},\frac{1}{M}\sum_{ m=1}^{M}\sigma_{\Theta_{m}^{*}}^{2}(\mathbf{f}_{2},\mathbf{e}_{2})\cdot \mathbf{I}\right)\] is a diagonal matrix, and the second term on the R.H.S. of (6) is the empirical covariance matrix over \(\{\boldsymbol{\rho}_{m}\}\). We can further derive that \(\text{cov}(\widehat{\mathbf{y}}|\mathcal{D})=\boldsymbol{\Lambda}+\mathbf{B} \mathbf{B}^{\top}\), where \(\mathbf{B}=\frac{1}{\sqrt{M-1}}[\boldsymbol{\rho}_{1}-\mathbb{E}(\widehat{ \mathbf{y}}|\mathcal{D}),\ldots,\boldsymbol{\rho}_{M}-\mathbb{E}(\widehat{ \mathbf{y}}|\mathcal{D})]\), which has \(M\) columns. We then use the matrix determinant lemma (Harville, 1997) to compute \[\log\text{det}\left[\text{cov}(\widehat{\mathbf{y}}|\mathcal{D} )\right]=\log\text{det}\left[\boldsymbol{\Lambda}+\mathbf{B}\mathbf{B}^{\top}\right]=\log\text{det}[\boldsymbol{\Lambda}]+\log\text{det}[\mathbf{I}+ \mathbf{B}^{\top}\boldsymbol{\Lambda}^{-1}\mathbf{B}]. \tag{7}\] The first log determinant is over the diagonal matrix \(\boldsymbol{\Lambda}\), so its complexity is linear in the dimension of \(\widehat{\mathbf{y}}\). The second log determinant is computed over an \(M\times M\) matrix. Since \(M\) is the size of the ensemble and is very small (we take \(M=5\) in our experiments), the computation is highly efficient. It is straightforward to use a similar method to compute \(\mathbb{H}(\mathbf{y}_{1}|\mathcal{D})\) and \(\mathbb{H}(\mathbf{y}_{2}|\mathcal{D})\) in (5).
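The identity (7) is straightforward to implement. The NumPy sketch below (names ours) evaluates \(\log\text{det}[\text{cov}(\widehat{\mathbf{y}}|\mathcal{D})]\) from the \(M\) ensemble means and per-output variances without ever forming the \(d\times d\) covariance, and checks the result against a dense computation on a small problem.

```python
import numpy as np

def mm_logdet(rho, var):
    """log det of cov = Lambda + B B^T via Eq. (7); entropy is 0.5 * logdet + const.

    rho: (M, d) stacked ensemble prediction means; var: (M, d) ensemble variances.
    Cost is O(M^2 d) instead of O(d^3)."""
    M, d = rho.shape
    lam = var.mean(axis=0)                              # diagonal of Lambda
    B = (rho - rho.mean(axis=0)).T / np.sqrt(M - 1)     # (d, M), M columns
    small = np.eye(M) + B.T @ (B / lam[:, None])        # I + B^T Lambda^{-1} B
    return np.log(lam).sum() + np.linalg.slogdet(small)[1]

# Toy check against the explicit d x d covariance.
rng = np.random.default_rng(0)
M, d = 5, 200
rho = rng.normal(size=(M, d))
var = rng.uniform(0.5, 1.5, size=(M, d))
dense = np.diag(var.mean(0)) + np.cov(rho, rowvar=False)
print(np.allclose(mm_logdet(rho, var), np.linalg.slogdet(dense)[1]))  # True
```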
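### Cost Annealing

In practice, directly maximizing the utility-cost ratio \(\frac{u(h,r)}{\lambda_{r}}\) or \(\frac{\widehat{u}(h,r)}{\lambda_{r}}\) (see (3) and (4)) tends to leave the active learning stuck at low-resolution queries and inferior performance, especially when the cost discrepancy between the low and high resolutions is significant. This is because at the early stage the training data is scarce, and the mutual information does not differ much across candidates at different resolutions; in other words, their scales are close. Consequently, the high-resolution examples are over-penalized by their large cost, and the active learning keeps selecting low-resolution examples, which can severely hinder the model improvement. To overcome this problem, we propose a cost annealing method that schedules a dynamic cost assignment for each resolution. Denote by \(\widehat{\lambda}_{r}(t)\) the cost schedule for resolution \(r\) at step \(t\). For convenience, we normalize the true costs into \([0,1]\), _i.e._, each \(\lambda_{r}\in[0,1]\) and \(\sum_{r=1}^{R}\lambda_{r}=1\). We set \[\widehat{\lambda}_{r}(t)=\frac{\lambda_{r}}{1+(R\lambda_{r}-1)c(t)}, \tag{8}\] where \(c(t)\) is a decaying function such that \(c(0)=1\) and \(c(\infty)=0\). For example, we can use \[c(t)=\exp(-\alpha t),\ \ \text{or}\ \ c(t)=2(1-s(\alpha t)), \tag{9}\] where \(s(\cdot)\) is the sigmoid function and \(\alpha\) controls the decay rate. We can see that \(\widehat{\lambda}_{r}(0)=\frac{1}{R}\) for all \(r\), and \(\lim\limits_{t\to\infty}\widehat{\lambda}_{r}(t)=\lambda_{r}\). At each step \(t\), we select the input and resolution by maximizing the acquisition function \(\frac{u(h,r)}{\widehat{\lambda}_{r}(t)}\) or \(\frac{\widehat{u}(h,r)}{\widehat{\lambda}_{r}(t)}\). In this way, at the early stage, when the data is scarce and the mutual information does not differ much, our method avoids over-penalizing high-resolution examples and promotes querying them, ensuring continuous model improvement. As data accumulates, the mutual information becomes more and more capable of reflecting the true potential of new examples, and the active learning returns to maximizing the ideal utility-cost ratio to select the input functions and resolutions.

The schedule (8)-(9) amounts to a few lines; the sketch below (ours) implements both decay choices and verifies the limiting behavior \(\widehat{\lambda}_{r}(0)=1/R\) and \(\widehat{\lambda}_{r}(t\to\infty)=\lambda_{r}\) on an illustrative three-resolution cost vector.

```python
import numpy as np

def annealed_cost(lam, t, alpha=0.01, decay="exp"):
    """Eq. (8): lam is the normalized cost vector (sums to 1), t the AL step."""
    R = len(lam)
    if decay == "exp":
        c = np.exp(-alpha * t)                       # first choice in Eq. (9)
    else:
        c = 2 * (1 - 1 / (1 + np.exp(-alpha * t)))   # sigmoid-based choice
    return lam / (1 + (R * lam - 1) * c)

lam = np.array([0.02, 0.33, 0.65])                   # e.g., three resolutions
print(annealed_cost(lam, t=0))      # -> [1/3, 1/3, 1/3]: no penalty at the start
print(annealed_cost(lam, t=1000))   # -> close to the true costs lam
```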
Our method is summarized in Algorithm 1.

```
1: Learn the probabilistic multi-resolution FNO from an initial dataset \(\mathcal{D}\) with the ensemble size \(M\).
2: for \(t=1\dots T\) do
3:   Based on the cost schedule (8), select the input function \(h_{t}\in\mathcal{P}\) and the resolution \(r_{t}\) by
     \[h_{t},r_{t}=\operatorname*{argmax}_{h\in\mathcal{P},1\leq r\leq R}\frac{\beta(h,r)}{\widehat{\lambda}_{r}(t)},\]
     where \(\beta(h,r)\) is the utility function, which can take the form (3) or (4).
4:   Query the output function at \(h_{t}\) with resolution \(r_{t}\) to obtain \(\mathbf{y}_{t}\).
5:   Remove \(h_{t}\) from \(\mathcal{P}\).
6:   \(\mathcal{D}\leftarrow\mathcal{D}\cup\{(\mathbf{h}_{t},\mathbf{y}_{t},r_{t})\}\), where \(\mathbf{h}_{t}\) is the discretized \(h_{t}\) at resolution \(r_{t}\).
7:   Re-train the probabilistic multi-resolution FNO on \(\mathcal{D}\).
8: end for
```
**Algorithm 1** MRA-FNO (\(M\), \(\mathcal{P}\), \(T\), \(\{\lambda_{r}\}_{r=1}^{R}\))

**Algorithm Complexity.** The time complexity of each active learning step is \(\mathcal{O}(|\mathcal{P}|RM^{2}d)\), where \(|\mathcal{P}|\) is the size of the candidate pool and \(d\) is the output dimension at the highest resolution. The space complexity is \(\mathcal{O}(Md)\), which accounts for storing the predictive distribution (for any input function) and the parameter estimates in the ensemble.

## 5 Related Work

Operator learning is a fast-growing research area. A variety of operator learning methods have been developed, most of which are based on neural networks and hence called neural operators. For example, along with the FNO, a simple low-rank neural operator (LNO) (Li et al., 2020d) was proposed that employs a low-rank decomposition of the operator's kernel. Li et al. (2020b) proposed GNO, which uses the Nyström approximation and graph neural networks to approximate the function convolution. In (Li et al., 2020c), a multipole graph neural operator (MGNO) is developed, which uses a multi-scale kernel decomposition to achieve linear complexity in computing the convolution. Gupta et al. (2021) developed a multiwavelet-based operator learning model that represents the operator's kernel with fine-grained wavelets. Another popular approach is the Deep Operator Net (DeepONet) (Lu et al., 2021), which combines a branch net over the input functions and a trunk net over the sampling locations to predict the target function values. A more stable and efficient version, POD-DeepONet, was proposed in (Lu et al., 2022), which replaces the trunk net with the POD (or PCA) bases computed from the training data. (Seidman et al., 2022) used a nonlinear combination (_e.g._, a feed-forward network) of the branch net and trunk net outputs to approximate the target function. A survey of neural operators is given in (Kovachki et al., 2023). Recent works have also developed kernel operator learning approaches (Long et al., 2022; Batlle et al., 2023).

Active learning is a classical machine learning topic. Recent research focuses on the active learning of deep neural networks. For example, in (Gal et al., 2017), Monte-Carlo (MC) Dropout (Gal and Ghahramani, 2016) was used to generate the posterior samples and compute the acquisition function. (Geifman and El-Yaniv, 2017; Sener and Savarese, 2018) used core-set search to query diverse and representative examples, which is shown to be particularly effective for convolutional neural nets. Other examples include (Gissin and Shalev-Shwartz, 2019; Ducoffe and Precioso, 2018) for adversarial active learning, (Ash et al., 2019) using the gradient magnitude to represent the uncertainty and to query new examples, _etc_.
Recently, (Li et al., 2022b) proposed the first multi-fidelity active learning approach, which dynamically queries multi-fidelity simulation examples to train a surrogate model that predicts PDE solutions from PDE parameters. (Li et al., 2022a) further developed a batch multi-fidelity active learning algorithm with budget constraints. The key difference is that these works aim to learn a mapping from the PDE parameters (low-dimensional input) to the solution (high-dimensional output), and they employ an auto-regressive architecture to combine examples of multiple fidelities. Their stochastic variational inference is inferior in posterior approximation and uncertainty quantification for operator learning. We therefore develop another posterior inference approach based on ensemble learning, which turns out to be much more effective, and we accordingly develop an efficient method for utility function computation. In addition, we discovered the over-penalization problem during active learning, which was not identified in these previous works, and we proposed a novel and flexible cost annealing framework to overcome it. The most recent work (Pickering et al., 2022) proposed an active learning approach for DeepONet, where the goal is to query examples that can facilitate the discovery and forecast of rare events. That work does not consider multi-resolution examples and their varying costs. Hence, the goal, model estimation, acquisition function design and computation are all very different from our work.

## 6 Experiment

### Prediction Accuracy on Fixed Training Data

We first examined whether our probabilistic multi-resolution FNO can achieve good prediction accuracy and uncertainty calibration. To this end, we tested with two benchmark operator learning tasks, one based on a Burgers' equation and the other on a Darcy flow equation. For _Burgers_, we aim to learn a mapping from the initial condition to the solution at time \(t=1\), while for _Darcy_, the goal is to learn a mapping from the coefficient function to the solution. We considered two resolutions for each task. The details are provided in Section A of the Appendix. We randomly generated 200 examples for each resolution to obtain a training set, and randomly generated another 200 examples at the highest resolution as the test set. We compared with the standard FNO (point estimation), FNO trained via MC Dropout (FNO-Dropout) (Gal and Ghahramani, 2016), stochastic gradient Langevin dynamics (FNO-SGLD) (Welling and Teh, 2011), and stochastic variational inference (FNO-SVI) (Kingma and Welling, 2013). For all the methods, we set the mini-batch size to \(20\) and the learning rate to \(10^{-3}\), and used ADAM optimization with a cosine annealing schedule. We used the FNO implementation from the original authors ([https://github.com/neuraloperator/neuraloperator](https://github.com/neuraloperator/neuraloperator)). We tuned the dropout rate from {0.1, 0.2, 0.3, 0.4, 0.5}. For SGLD and SVI, we assigned a standard Gaussian prior over the model parameters; for SVI, we employed a fully factorized Gaussian posterior approximation. We repeated the training and test procedure five times, and examined the average relative \(L_{2}\) error, the average negative log likelihood (NLL), and their standard deviations on the test datasets. The results are reported in Table 1.
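For reference, the two test metrics can be computed as in the NumPy sketch below (ours); the NLL follows the Gaussian-mixture predictive distribution (2) and is evaluated with a log-sum-exp for numerical stability.

```python
import numpy as np

def relative_l2(pred, truth):
    """Mean over test examples of ||pred - truth||_2 / ||truth||_2."""
    num = np.linalg.norm(pred - truth, axis=-1)
    den = np.linalg.norm(truth, axis=-1)
    return float(np.mean(num / den))

def mixture_nll(y, mus, sigma2s):
    """-log p(y|D) under the Gaussian-mixture predictive (2).

    y: (d,) test output; mus: (M, d) ensemble means; sigma2s: (M,) variances."""
    d = y.shape[0]
    logps = np.array([
        -0.5 * (np.sum((y - mu) ** 2) / s2 + d * np.log(2 * np.pi * s2))
        for mu, s2 in zip(mus, sigma2s)])
    # -log[(1/M) sum_m exp(logp_m)], accumulated stably with logaddexp
    return float(np.log(len(mus)) - np.logaddexp.reduce(logps))
```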
We can see that our model (MRA-FNO) achieves a relative \(L_{2}\) error significantly smaller than the competing methods in all the cases, except that on the Burgers' equation the \(L_{2}\) error of MRA-FNO is slightly worse than that of the standard FNO. More importantly, MRA-FNO consistently outperforms all the probabilistic versions of FNO by a large margin in test log likelihood. Hence, not only does our model give superior prediction accuracy, our ensemble posterior inference also enables much better uncertainty quantification.

Table 1: Prediction accuracy in non-active learning. The results were averaged from five runs.

### Active Learning Performance

Next, we evaluated the active learning performance of MRA-FNO. In addition to the tasks in Section 6.1, we considered two more PDEs, one a nonlinear diffusion equation and the other a 2D Navier-Stokes (NS) equation used in (Li et al., 2020d). For each task, we considered two resolutions. We leave the details to Section 1 of the Appendix. In addition, we tested active learning on the same _Darcy_ problem as in Section 6.1 with three resolutions. We summarize the data-acquiring cost at different resolutions in Table 2. As we can see, the cost discrepancy is large among different resolutions. We compared with the following active learning methods for FNO. (1) Random-Low/High: randomly selecting an input function from the candidate pool, and querying the example at the lowest/highest resolution. (2) Random-Mix: randomly selecting both the input and the resolution. (3) Coreset-Low/High: we used the core-set active learning strategy (sen) to select the input function that maximizes the minimum distance to the existing examples, using the output of the last Fourier layer as the representation (a minimal sketch of this selection rule is given after this list); the resolution is fixed to the lowest or the highest one. (4) Coreset-Mix: the same core-set strategy as in (3), except that we allow querying at different resolutions; we interpolate the representation to the highest resolution to compute the distance. (5) MR-Dropout: we used MC dropout to perform posterior inference for FNO, and then used the same acquisition function(s), computation method, and annealing framework as in our approach to identify the input function and resolution. (6) MR-PredVar: we averaged the predictive variances of the output function values as the utility function; the rest is the same as in our approach. For every active learning experiment, we randomly generated 10 examples for each resolution to obtain an initial dataset.

\begin{table} \begin{tabular}{c|c c} \hline Task & Resolution & Cost Ratio \\ \hline _Burgers_ & 33, 129 & \(1:41.2\) \\ _Darcy_ & \(32\times 32\), \(128\times 128\) & \(1:38.3\) \\ _Darcy3_ & \(32\times 32\), \(64\times 64\), \(128\times 128\) & \(1:21.3:38.3\) \\ _Diffusion_ & \(32\times 32\), \(64\times 64\), \(128\times 128\) & \(1:4.7:17.6\) \\ _NS_ & \(16\times 16\), \(64\times 64\) & \(1:7\) \\ \hline \end{tabular} \end{table} Table 2: Resolution and cost ratio for each active learning task. The cost is measured by the average running time for solving the PDEs (100 runs) at the corresponding resolution.

Figure 1: Relative \(L_{2}\) error _vs._ accumulated data cost. Each method ran 500 active learning steps. Note that different methods can end up with different total data cost (after running the same number of steps).

Figure 2: Relative \(L_{2}\) error _vs._ accumulated data cost.
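As referenced in item (3) above, the max-min core-set rule can be sketched as follows, assuming feature vectors from the last Fourier layer have already been extracted; the names are illustrative.

```python
import numpy as np

def coreset_select(candidate_feats, selected_feats):
    """Pick the candidate farthest from its nearest already-selected example.

    candidate_feats: (n_cand, p) feature vectors of the pool candidates
    selected_feats:  (n_sel, p) feature vectors of already-queried examples
                     (assumed non-empty, e.g. the initial dataset)
    """
    # Pairwise Euclidean distances between candidates and selected examples.
    diffs = candidate_feats[:, None, :] - selected_feats[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)          # (n_cand, n_sel)
    nearest = dists.min(axis=1)                     # distance to closest selected
    return int(np.argmax(nearest))                  # max-min criterion
```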
We randomly generated 990 input functions at the highest resolution, which we used as the candidate pool for active learning. If an example is queried at a lower resolution, the input function is downsampled accordingly before running the simulation. We randomly generated another 200 examples at the highest resolution for testing. We then ran active learning with each method. For our method and MR-Dropout, we tested two annealing schedules, one based on exponential decay and the other on sigmoid decay; see (9). We tuned the decaying rate \(\alpha\) from \(\{0.002,0.005,0.01,0.02,0.5,1.0\}\). We ran \(500\) active learning steps (queries) for all the experiments except for the _NS_ problem, for which we ran \(300\) steps. We examined the relative \(L_{2}\) error of each method _vs._ the accumulated data cost. To avoid cluttered figures, we show the result of our method with the exponential-decay-based schedule in Fig. 1 and 2, and the result of using the sigmoid decay and MR-PredVar in Fig. 6 in the Appendix. **Prediction Accuracy.** As we can see, at the beginning, the performance of each method is identical or very close. As the active learning progresses, MRA-FNO improves rapidly and constantly. It soon achieves a prediction accuracy superior to all the competing methods, and consistently outperforms them during the remaining course of the active learning. Accordingly, MRA-FNO can reach the smallest prediction error under the same data cost, or use the least data cost to achieve the same performance. We empirically observed that using the utility function (3) or (4), denoted by MRA-FNO (\(u\)) and MRA-FNO (\(\widehat{u}\)) respectively, results in close performance, except that on the diffusion problem, MRA-FNO (\(u\)) appears to be better. This might be because the Monte-Carlo approximation in (4) (we set \(A=5\)) still has a significant gap from the true expectation. It is worth noting that both Random-Low and Coreset-Low were quickly trapped at large prediction errors. This shows that when only low-resolution examples are used, the predictive performance soon meets a bottleneck and can hardly improve, even though the data cost grows very slowly. On the other hand, Random-High and Coreset-High enable steady improvement because they only query high-resolution examples at each step. However, the data cost accumulation is much greater; see, _e.g._, Fig. 1b and 1c. In addition, the performance of MR-Dropout tends to get stuck at large prediction errors early, especially in _Burgers_, _Darcy_ and _Darcy3_. We observed that MR-Dropout mainly selected low-resolution examples. This might be because the uncertainty quantification by dropout is not reliable for FNO, and even using our annealing framework cannot correct its bias. From Fig. 6 of the Appendix, we can see that the performance of MRA-FNO with the sigmoid-based cost schedule is close to that with the exp-based schedule (see (9)), except in _Darcy3_, where the exp-based schedule shows a slight yet consistent advantage. Interestingly, MR-PredVar outperforms the other competing methods in all the cases, confirming the importance of effective uncertainty quantification in utility evaluation (it also uses our ensemble posterior inference). While MR-PredVar achieves performance close to our method in _Burgers_, in all the other cases MR-PredVar is clearly worse. This might be because MR-PredVar ignores the (strong) correlation between the output function values, and hence the quality of its utility evaluation is worse.
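Before examining the influence of the cost schedule below, we illustrate the two decay schedules with a short sketch. It assumes the annealed cost \(\widehat{\lambda}_{r}(t)\) interpolates from a common initial value toward the true cost \(\lambda_{r}\) at a rate set by \(\alpha\); the exact parameterization of (9) may differ, so this is a qualitative illustration only.

```python
import numpy as np

def annealed_cost(lam_r, t, alpha, lam0=1.0, mode="exp", t0=250):
    """Annealed cost assigned to resolution r at active-learning step t.

    lam_r: true acquisition cost of resolution r
    lam0:  assumed common initial cost shared by all resolutions
    alpha: decay rate; larger alpha means faster convergence to the true cost
    t0:    assumed midpoint of the sigmoid decay (e.g., half of 500 steps)
    """
    if mode == "exp":
        w = np.exp(-alpha * t)                       # exponential decay of the gap
    else:
        w = 1.0 / (1.0 + np.exp(alpha * (t - t0)))   # sigmoid-style decay
    return lam_r + (lam0 - lam_r) * w                # w=1: uniform cost; w=0: true cost
```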
All these results have demonstrated the advantage of our multi-resolution active learning approach.

**Influence of Cost Schedule.** Next, we investigated how the cost annealing schedule influences the active learning. To this end, we used the exponential decay function in our schedule and varied the decaying rate \(\alpha\in\{0.002,0.005,0.01,0.02,0.5,1.0\}\). We show the cost schedule for different choices of \(\alpha\) in Fig. 3a. We then ran MRA-FNO on _Burgers_ with 500 steps. The relative \(L_{2}\) error _vs._ the accumulated data cost is reported in Fig. 3b and 3c. We can see that when \(\alpha\) is too small, _e.g._, \(\alpha=0.002\), though the active learning ensures steady improvement of the prediction accuracy, the data cost is suboptimal. To obtain the same performance, too small an \(\alpha\) consumes a much bigger data cost, or, under the same cost, it gives worse performance. The reason is that the convergence of the cost annealing is too slow; see Fig. 3a. Even when the mutual information has become sufficiently discriminative, the cost assignments for different resolutions are still not far apart, which over-penalizes low-resolution examples and leads to a selection bias toward high-resolution examples. The other extreme is to use too large an \(\alpha\), _e.g._, \(\alpha=0.5\) or \(\alpha=1.0\). In that case, the schedule converges to the true cost very fast, even at the early stage when data are few. Accordingly, the high-resolution examples are soon over-penalized, leaving the learning stuck at low-resolution queries. The prediction accuracy then fluctuates and is hard to improve substantially. On the contrary, an appropriate decay rate in between, _e.g._, \(\alpha=0.01\) or \(\alpha=0.02\), sidesteps these problems and leads to superior performance in both cost saving and prediction accuracy.

**Point-wise Error.** Finally, we investigated the local errors of the prediction. We randomly selected six test cases for _NS_ and _Diffusion_ and examined the point-wise error of each method after active learning. We show the results in Fig. 4 and Appendix Fig. 7. We can see that the point-wise error of MRA-FNO is quite uniform across the domain and is close to zero (white). By contrast, the other methods exhibit large errors in many local regions. Together these results show that MRA-FNO not only gives superior global accuracy, but also better recovers individual output function values locally.

Figure 3: The influence of the cost schedule on active learning. We report the result with the exponential decay; see (9). The larger \(\alpha\), the faster the schedule converges to the true cost.

Figure 4: Point-wise error on _NS_.

## 7 Conclusion

We have presented MRA-FNO, a multi-resolution active learning method for Fourier neural operators. On several benchmark operator learning tasks, MRA-FNO can save the data cost substantially while achieving superior predictive performance. Currently, the selection of the decay rate in our cost annealing framework is done by manual tuning/cross-validation. In the future, we plan to develop novel methods, such as reinforcement learning, to automatically determine the best rate.

## 8 Acknowledgements

We thank Andrew Stuart for valuable discussions and suggestions.
2310.20378
Self-consistent treatment of thermal effects in neutron-star post-mergers: observational implications for third-generation gravitational-wave detectors
We assess the impact of accurate, self-consistent modelling of thermal effects in neutron-star merger remnants in the context of third-generation gravitational-wave detectors. This is done through the usage, in Bayesian model selection experiments, of numerical-relativity simulations of binary neutron star (BNS) mergers modelled through: a) nuclear, finite-temperature (or ``tabulated'') equations of state (EoSs), and b) their simplified piecewise (or ``hybrid'') representation. These cover four different EoSs, namely SLy4, DD2, HShen and LS220. Our analyses make direct use of the Newman-Penrose scalar $\psi_4$ outputted by numerical simulations. Considering a detector network formed by three Cosmic Explorers, we show that differences in the gravitational-wave emission predicted by the two models are detectable with a natural logarithmic Bayes Factor $\log{\cal{B}}\geq 5$ at average distances of $d_L \simeq 50$Mpc, reaching $d_L \simeq 100$Mpc for source inclinations $\iota \leq 0.8$, regardless of the EoS. This impact is most pronounced for the HShen EoS. For low inclinations, only the DD2 EoS prevents the detectability of such modelling differences at $d_L \simeq 150$Mpc. Our results suggest that the usage of a self-consistent treatment of thermal effects is crucial for third-generation gravitational wave detectors.
Verónica Villa-Ortega, Ana Lorenzo-Medina, Juan Calderón Bustillo, Milton Ruiz, Davide Guerra, Pablo Cerdá-Duran, José A. Font
2023-10-31T11:37:23Z
http://arxiv.org/abs/2310.20378v1
# Self-consistent treatment of thermal effects in neutron-star post-mergers: observational implications for third-generation gravitational-wave detectors

###### Abstract

We assess the impact of accurate, self-consistent modelling of thermal effects in neutron-star merger remnants in the context of third-generation gravitational-wave detectors. This is done through the usage, in Bayesian model selection experiments, of numerical-relativity simulations of binary neutron star (BNS) mergers modelled through: a) nuclear, finite-temperature (or "tabulated") equations of state (EoSs), and b) their simplified piecewise (or "hybrid") representation. These cover four different EoSs, namely SLy4, DD2, HShen and LS220. Our analyses make direct use of the Newman-Penrose scalar \(\psi_{4}\) outputted by numerical simulations. Considering a detector network formed by three Cosmic Explorers, we show that differences in the gravitational-wave emission predicted by the two models are detectable with a natural logarithmic Bayes Factor \(\log\mathcal{B}\geq 5\) at average distances of \(d_{L}\simeq 50\)Mpc, reaching \(d_{L}\simeq 100\)Mpc for source inclinations \(\iota\leq 0.8\), regardless of the EoS. This impact is most pronounced for the HShen EoS. For low inclinations, only the DD2 EoS prevents the detectability of such modelling differences at \(d_{L}\simeq 150\)Mpc. Our results suggest that the usage of a self-consistent treatment of thermal effects is crucial for third-generation gravitational wave detectors.

Neutron-star mergers - Gravitational waves - Numerical relativity - Bayesian inference

## Introduction

Mergers of stellar compact binaries, having at least a neutron star as one of the components, lie at the intersection of several very active fields of research in relativistic astrophysics. They are prime targets for multi-messenger observations, being visible in both gravitational wave (GW) and electromagnetic (EM) radiation. The LIGO-Virgo-KAGRA (LVK) Collaboration has reported observations from both types of systems (see e.g. Abbott et al. (2021)). In particular, the detection of the first binary neutron star (BNS) merger, GW170817, along with its post-merger EM emission (Abbott et al., 2017, 2017, 2017), has been instrumental in beginning to address some long-standing questions. It has placed tight constraints on: i) the equation of state (EoS) at supranuclear densities (Shibata et al., 2017; Margalit & Metzger, 2017; Rezzolla et al., 2018; Ruiz et al., 2018); ii) the radius and tidal deformability of a spherical neutron star (Bauswein et al., 2017; Most et al., 2018); and iii) the speed of GW, which in turn disfavours a large class of scalar-tensor theories and other theories predicting varying GW speed (Ezquiaga & Zumalacarregui, 2017; Pardo et al., 2018). Besides, it also provided an independent measure of the expansion of the Universe (Abbott et al., 2017; Dietrich et al., 2020) and the most direct evidence that BNS mergers are progenitors of the central engines of short gamma-ray bursts (sGRBs), followed by a longer optical transient afterglow known as a kilonova, powered by the radioactive decay of heavy r-process nuclei (Metzger et al., 2010; Cowperthwaite et al., 2017; Kasen et al., 2017; Pian et al., 2017). Many of those results are drawn from the analysis of the information encoded in the GW signal at the late BNS inspiral stage.
While searches have been conducted by the LVK Collaboration (Abbott et al., 2017), no observational evidence of the post-merger GW signal is yet available, as the sensitivity of current detectors at the kHz frequency range is severely hampered by photon shot noise. Estimates on the detectability of the post-merger signal with third-generation detectors have been reported by Clark et al. (2016) and Miravet-Tenes et al. (2023). The BNS inspiral is accurately modelled assuming a cold (zero temperature) EoS. However, during and after merger, shocks heat up the binary remnant to temperatures \(\gtrsim 10\) MeV, adding a thermal pressure support that may change the internal structure of the remnant and its subsequent evolution. It is expected that these thermal effects are encoded in the emitted GWs (Raithel et al., 2021). Therefore, observations during the post-merger stage may shed light on the microphysical EoS of NSs at finite temperature. These observations may become possible with third-generation GW observatories (Chatziioannou et al., 2017). Advances in our knowledge of the dynamics of BNS mergers and the subsequent post-merger evolution rely on numerical relativity simulations (for recent reviews see e.g. Baiotti & Rezzolla (2017); Shibata & Hotokezaka (2019); Sarin & Lasky (2021); Bauswein et al. (2020); Ruiz et al. (2021)). In those simulations NS matter is described using mainly two approaches. The first one is a "hybrid" approach that assumes that pressure and internal energy can be divided into two contributions (Janka et al., 1993; Dimmelmeier et al., 2002): i) a cold part, computed through a zero-temperature EoS, \(P_{\rm cold}=\kappa_{i}\,{\rho_{0}}^{\Gamma_{i}}\), in a set of intervals in rest-mass density \(\rho_{0}\) (typically referred to as a piecewise-polytropic representation of a nuclear EoS (Read et al., 2009)); and ii) a thermal (ideal-gas-like) part to account for shock heating, \(P_{\rm th}=\epsilon_{\rm th}\,(\Gamma_{\rm th}-1)\), which is computed through a thermal index \(\Gamma_{\rm th}\). Here, \(\kappa_{i}\) and \(\Gamma_{i}\) are the polytropic constant and index, respectively, and \(P_{\rm th}\) and \(\epsilon_{\rm th}\) the thermal pressure and thermal energy density. Moreover, \(\Gamma_{\rm th}\) is a constant ranging between 1 and 2 (Constantinou et al., 2015). Above half nuclear saturation density \(\Gamma_{\rm th}\) strongly depends on the nucleon effective mass (Lim & Holt, 2019). Hence a constant value of \(\Gamma_{\rm th}\) may overestimate the thermal pressure by a few orders of magnitude (Raithel et al., 2021) which, in turn, may induce significant changes in the GW frequencies (Bauswein et al., 2010; Figura et al., 2021). The second approach is based on the use of microphysical, finite-temperature EoS tables constructed using "tabulated" data from observations and nuclear physics experiments (Oechslin et al., 2007; Bauswein et al., 2010; Sekiguchi et al., 2011; Fields et al., 2023; Espino et al., 2022; Werneck et al., 2023). This approach provides a self-consistent method for probing the impact of thermal effects on the fate of the binary remnant. In this work, we assess the importance of a self-consistent (or "tabulated") treatment of thermal effects in BNS post-merger remnants within a fully Bayesian framework. We use the numerical-relativity waveforms recently obtained by Guerra et al.
(2023) both as "reference" signals (or injections) and as recovery templates, considering sources where thermal effects are modelled both through a simplified "hybrid" approach and a more realistic "tabulated" one. Our sample of EoSs includes the SLy4, DD2, HShen and LS220 cases. Given reference data consisting of a synthetic "tabulated" signal, we can obtain Bayesian evidences for both the "tabulated" and "hybrid" waveform models. With this, we evaluate the level to which the difference between the two treatments of thermal effects leads to observable effects in the GW emission. We do this through a novel technique that makes direct use of the Newman-Penrose \(\psi_{4}\) scalar directly outputted by numerical-relativity simulations (Bustillo et al., 2022), as opposed to traditional analyses making use of the GW strain \(h(t)\). This prevents the occurrence of systematic numerical errors that might arise when computing the latter from the former through a double time integration (Reisswig & Pollney, 2010). We show that, regardless of the EoS, differences between "tabulated" and "hybrid" treatments of thermal effects lead to differences in the post-merger GW that are observable by third-generation detectors at source distances \(d_{L}\leq 50\) Mpc (averaged over the source sky-location). Such distances reach 150 Mpc for orbital inclinations \(\iota<0.8\), with the exception of BNSs modelled with the DD2 EoS. On a similar note, Fields et al. (2023) also studied the impact of thermal effects on the GWs from BNS mergers by changing the effective masses of neutrons and protons, and thus the specific heat capacity of the binary. They found that an increased heat capacity results in denser, cooler remnants, which leaves imprints on the GWs. However, it is not clear if these imprints are actually due to thermal effects or to changes in the nuclei properties triggered by changes in the effective mass. In addition, Miravet-Tenes et al. (2023) have recently used the same simulations of Guerra et al. (2023) to study the detectability of thermal effects in post-merger signals using Bayeswave (Cornish & Littenberg, 2015), a Bayesian data-analysis algorithm that reconstructs signals injected into noise through a morphology-independent approach. Miravet-Tenes et al. (2023) find that differences in the distribution of the main frequency peaks in the spectra of hybrid and tabulated models can be resolved in third-generation detectors up to distances similar to those reported in this work.

## Numerical setup

Our study is based on the recent numerical-relativity simulations performed by Guerra et al. (2023). We briefly describe them here, addressing interested readers to Guerra et al. (2023) for full details. The binaries consist of two equal-mass, irrotational NSs modeled by finite-temperature, microphysical EoSs (see Table 1). The initial temperature is fixed to \(T=0.01\,\mathrm{MeV}\). We use the tables by Schneider et al. (2017), freely available at stellarcollapse.org. We choose four different EoSs that span a suitable range of NS central densities, radii and maximum gravitational masses for irrotational NSs. We also consider a piecewise-polytropic representation of the cold part of these EoSs using a piecewise regression as in Pilgrim (2021) with seven polytropic pieces (Read et al., 2009). To this cold part we add an ideal-gas part with a constant adiabatic index \(\Gamma_{\mathrm{th}}=1.8\) to account for thermal effects.
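As an illustration of the hybrid construction, the following sketch evaluates the total pressure from a piecewise-polytropic cold part plus an ideal-gas thermal part. The density breakpoints, polytropic constants and indices below are placeholders, not the seven-piece fits used in our simulations.

```python
import numpy as np

# Placeholder piecewise-polytrope parameters: density breakpoints (g/cm^3),
# polytropic constants kappa_i, and indices Gamma_i. The actual values are
# obtained by piecewise regression of each tabulated cold EoS.
RHO_BREAKS = np.array([1e13, 1e14, 5e14])
KAPPAS     = np.array([1.0e-8, 2.0e-8, 4.0e-8, 8.0e-8])  # illustrative only
GAMMAS     = np.array([1.6, 2.6, 3.0, 2.8])              # illustrative only

GAMMA_TH = 1.8  # constant thermal index used for the hybrid models

def hybrid_pressure(rho0, eps_th):
    """Total pressure = cold piecewise-polytropic part + ideal-gas thermal part.

    rho0:   rest-mass density
    eps_th: thermal energy density (total internal energy minus the cold part)
    """
    i = np.searchsorted(RHO_BREAKS, rho0)   # which polytropic piece applies
    p_cold = KAPPAS[i] * rho0 ** GAMMAS[i]  # P_cold = kappa_i * rho0^Gamma_i
    p_th = (GAMMA_TH - 1.0) * eps_th        # shock-heating contribution
    return p_cold + p_th
```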
Initial data are obtained using LORENE (Gourgoulhon et al., 2001; Taniguchi and Gourgoulhon, 2002). Evolutions were performed using the publicly available IllinoisGRMHD code (Werneck et al., 2023; Etienne et al., 2015), embedded in the Einstein Toolkit infrastructure (Loffler et al., 2012). The code evolves the Baumgarte-Shapiro-Shibata-Nakamura equations for the spacetime fields (Baumgarte and Shapiro, 1998; Shibata and Nakamura, 1995) coupled to the puncture gauge conditions, setting the damping coefficient appearing in the shift condition to \(1/M_{\mathrm{ADM}}\). The IllinoisGRMHD code adopts the Valencia formalism for the general relativistic hydrodynamics equations (Banyuls et al., 1997), which are integrated numerically with a state-of-the-art finite-volume algorithm. The simulations used fourth-order spatial stencils and a fourth-order Runge-Kutta scheme for time integration with a Courant-Friedrichs-Lewy factor of 0.5.

## Analysis setup

To assess the observational importance of an accurate implementation of thermal effects in BNS mergers we perform parameter inference and model selection on numerically simulated GW signals \(h(\theta_{\mathrm{true}})\), with source parameters \(\theta_{\mathrm{true}}\). These signals correspond to the very late inspiral, merger and post-merger emission of the BNS models of Table 1. We inject them into an idealised three-detector network composed of three Cosmic Explorers (Reitze et al., 2019; Abbott et al., 2017) placed at the locations of LIGO Hanford, LIGO Livingston and Virgo. We perform Bayesian parameter inference on these signals using numerically simulated templates \(h_{\mathcal{M}}(\theta)\) according to two different emission models \(\mathcal{M}\), respectively including the two alternative implementations of thermal effects (i.e. hybrid and tabulated). The posterior Bayesian probability for source parameters \(\theta\) according to a waveform model \(\mathcal{M}\) is given by \[p_{\mathcal{M}}(\theta|\theta_{\mathrm{true}})=\frac{\pi(\theta)\mathcal{L}(\theta|h_{\mathcal{M}}(\theta_{\mathrm{true}}))}{\mathcal{Z}_{\mathcal{M}}(\theta_{\mathrm{true}})}. \tag{1}\] Here, \(\mathcal{L}(\theta|h(\theta_{\mathrm{true}}))\) denotes the likelihood for the parameters \(\theta\) while \(\pi(\theta)\) denotes their prior probability. Finally, the term \(\mathcal{Z}_{\mathcal{M}}\) denotes the Bayesian evidence for the model \(\mathcal{M}\), given by \[\mathcal{Z}_{\mathcal{M}}=\int\pi(\theta)\mathcal{L}(\theta|h_{\mathcal{M}}(\theta_{\mathrm{true}}))d\theta. \tag{2}\] For two competing waveform models, their relative Bayes Factor is simply given by \[\mathcal{B}_{\mathcal{M}_{2}}^{\mathcal{M}_{1}}=\frac{\mathcal{Z}_{\mathcal{M}_{1}}}{\mathcal{Z}_{\mathcal{M}_{2}}}. \tag{3}\] It is commonly considered that \(\mathcal{M}_{1}\) is strongly preferred with respect to \(\mathcal{M}_{2}\) when \(\log\mathcal{B}_{\mathcal{M}_{2}}^{\mathcal{M}_{1}}>5\). We use the standard frequency-domain likelihood for GW transients (Finn, 1992; Romano and Cornish, 2017) \[\log\mathcal{L}(\theta|h(\theta_{\mathrm{true}}))\propto-\sum_{N}(h(\theta_{\mathrm{true}})-h(\theta)|h(\theta_{\mathrm{true}})-h(\theta)), \tag{4}\] where \(N\) runs over the different detectors of our network.
As usual, \((a|b)\) represents the inner product (Cutler and Flanagan, 1994) \[(a|b)=4\Re\int_{f_{min}}^{f_{max}}\frac{\tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}df, \tag{5}\] where \(\tilde{a}(f)\) denotes the Fourier transform of \(a(t)\) and \({}^{*}\) the complex conjugate. The factor \(S_{n}(f)\) is the one-sided power spectral density of the detector. We use the predicted Cosmic Explorer sensitivity (Abbott et al., 2017) with a lower frequency cutoff of \(f_{\rm min}=600\) Hz and a sampling frequency of 16 kHz, so that \(f_{\rm max}=8\) kHz.

\begin{table} \begin{tabular}{c|c c c c c c c} EoS & \(M\) & \(\mathcal{C}\) & \(k_{2}\) & \(\Lambda\) & \(M_{\mathrm{ADM}}\) & \(J_{\mathrm{ADM}}\) & \(\Omega\) \\ \hline \hline \(\mathrm{SLy4}\) & 1.28 & 0.13 & 0.086 & 536.09 & 2.54 & 6.63 & 1.77 \\ DD2 & 1.29 & 0.11 & 0.105 & 1100.92 & 2.56 & 6.73 & 1.77 \\ HShen & 1.30 & 0.10 & 0.109 & 1805.63 & 2.58 & 6.82 & 1.78 \\ LS220 & 1.29 & 0.12 & 0.106 & 915.00 & 2.55 & 6.68 & 1.77 \\ \end{tabular} \end{table} Table 1: **Summary of the initial properties of the tabulated BNS configurations**: We list the EoS, the gravitational mass \(M\,[M_{\odot}]\), the compactness \(\mathcal{C}\equiv M/R_{\mathrm{eq}}\), the second Love number \(k_{2}\), and the tidal deformability \(\Lambda=(2/3)k_{2}\,\mathcal{C}^{-5}\) for an isolated NS with the same \(M\). Here \(R_{\mathrm{eq}}\) is the equatorial coordinate stellar radius. The last three columns report the ADM mass \(M_{\mathrm{ADM}}\,[M_{\odot}]\), the ADM angular momentum \(J_{\mathrm{ADM}}\,[M_{\odot}^{2}]\) and the angular velocity \(\Omega\,[\mathrm{krad/s}]\). In all cases the NSs have a rest-mass \(M_{0}=1.4M_{\odot}\) and an initial temperature of \(0.01\,\mathrm{MeV}\). The initial binary coordinate separation is \(\sim 44.3\,\mathrm{km}\). The hybrid BNS models have similar initial properties (see Guerra et al. (2023)).

Our analysis only includes the merger and post-merger emission, together with the very late inspiral of the process, which is where thermal effects will be most impactful. We note that this part of the signal provides little information about the masses and spins of the binary. In addition, given our usage of numerical-relativity simulations, we cannot sample over the individual masses and spins, but only over the extrinsic source parameters, namely the source sky-location, orientation, luminosity distance, polarization angle and time-of-arrival. Nevertheless, it is sensible to assume that masses and spins would be accurately measured from the minutes-long inspiral signal (Smith et al., 2021; Branchesi et al., 2023). While the same is true for the extrinsic parameters, we choose to actually sample over these, so that we obtain "conservatively pessimistic" results that actually underestimate the importance of a "tabulated" treatment of thermal effects. We assume standard prior probabilities for the sky-location, source orientation and polarisation angle, together with a distance prior uniform in co-moving distance and a uniform prior on the time-of-arrival, with a width of 0.2 s, centered on the true value. We sample the likelihood across the parameter space using a custom version of the publicly available software Parallel Bilby (Ashton et al., 2019; Smith et al., 2020), sampling the parameters with the algorithm Dynesty (Speagle, 2020). Finally, we note that we do not perform our analysis under the strain formalism using \(h(t)\). Instead, we perform injection and recovery of the Newman-Penrose scalar \(\psi_{4}=d^{2}h(t)/dt^{2}\) directly outputted by numerical simulations, using the formalism described in Bustillo et al. (2022). This is done to avoid potential systematic errors arising during the computation of the GW strain \(h\) from \(\psi_{4}\) (Reisswig and Pollney, 2010).

Figure 1: Each panel displays the rest-mass density (left) and thermal specific internal energy (right) of the binary remnant on the equatorial plane at selected times. Those are shown for the tabulated (top) and hybrid (bottom) cases of the DD2 EoS (top row) and the HShen EoS (bottom row). Isocontours correspond to rest-mass densities \(\rho_{0}=\{10^{11},10^{12},10^{14}\}\,\mathrm{g\,cm^{-3}}\). The boundary of the bulk of the star is displayed as dashed (contour) lines, defined as regions where \(\rho_{0}=10^{-3}\,\rho_{0,\rm max}\), where \(\rho_{0,\rm max}\) is the initial maximum value of the rest-mass density.

## Results

### Merger dynamics and waveforms

The initial data for the BNS reported in Table 1 were evolved in Guerra et al. (2023) for over \(t-t_{\rm mer}\sim 140\,\)ms after merger. Each panel in Fig. 1 displays two snapshots in the evolution of the rest-mass density (left half) and the thermal specific energy (right half) for both the tabulated (top half) and hybrid (bottom half) cases. The top two panels correspond to the DD2 EoS and the bottom ones to the HShen EoS. As discussed in Guerra et al. (2023), we use the thermal specific energy \(\epsilon_{\rm th}\) as a proxy for the temperature. Only at low densities/high temperatures is the definition of temperature usually employed in BNS merger evolutions based on piecewise polytropes, \(T=(\Gamma_{\rm th}-1)\,\epsilon_{\rm th}\) (see e.g. De Pietri et al. (2020)), equivalent to that of microphysical, finite-temperature EoS evolutions.
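For illustration, the inner product of Eq. (5) and the resulting network log-likelihood of Eq. (4) can be evaluated as in the following sketch; array names are ours, and this is not the Parallel Bilby implementation used in our analysis.

```python
import numpy as np

def inner_product(a_t, b_t, psd, dt, f_min=600.0, f_max=8000.0):
    """Noise-weighted inner product (a|b) of Eq. (5) for a single detector.

    a_t, b_t: real time series sampled at interval dt (s)
    psd:      one-sided power spectral density sampled on the rFFT grid
    """
    a_f = np.fft.rfft(a_t) * dt                  # approximate continuous FT
    b_f = np.fft.rfft(b_t) * dt
    freqs = np.fft.rfftfreq(len(a_t), dt)
    band = (freqs >= f_min) & (freqs <= f_max)   # integration limits of Eq. (5)
    df = freqs[1] - freqs[0]
    return 4.0 * np.real(np.sum(a_f[band] * np.conj(b_f[band]) / psd[band]) * df)

def network_log_likelihood(data, templates, psds, dt):
    # Eq. (4) up to constants absorbed by the proportionality: sum of
    # residual inner products over the detectors of the network.
    return -0.5 * sum(inner_product(d - h, d - h, S, dt)
                      for d, h, S in zip(data, templates, psds))
```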
\begin{table} \begin{tabular}{c|c c c} EoS & \multicolumn{3}{c}{\(\log{\cal B}_{\rm H}^{\rm T}\) (\(d_{L}\) [Mpc], \(\rho_{\rm opt}^{\rm net}\))} \\ \hline & \(\iota=0.3\) & \(\iota=0.8\) & \(\iota=\pi/2\) \\ SLy & 10.1 (100, 15.3) & 22.1 (50, 22.3) & 38.6 (10, 27.9) \\ LS220 & 9.7 (100, 14.3) & 22.1 (50, 20.9) & 33.7 (10, 26.1) \\ HShen & 16.2 (100, 15.1) & 37.0 (50, 22.1) & 65.2 (10, 27.4) \\ DD2 & 5.6 (100, 15.3) & 11.7 (50, 22.4) & 14.3 (10, 27.5) \\ \end{tabular} \end{table} Table 2: **Impact of thermal-effects implementation in gravitational-wave observations.** We show the natural log Bayes Factors between tabulated and hybrid models, obtained when these are used to recover a signal from a BNS post-merger using the tabulated EoS. We show results for source inclinations of \(\iota=\{0.3,0.8,\pi/2\}\) and respective distances of \(d_{L}=\{100,50,10\}\)Mpc. We assume a three-detector network composed of three Cosmic Explorers.

Figure 2: Quadrupolar GW signals of two BNS mergers of our sample, HShen EoS (left) and DD2 EoS (right). Yellow and red waveforms refer to hybrid and tabulated models, respectively.

#### Target binary neutron star sources

We choose target sources corresponding to four equal-mass BNSs with total mass \(M\approx 2.56M_{\odot}\) (see Table 1), characterized by four different EoSs. For each source, we consider three orbital inclinations, namely \(\iota=0.3\), \(\iota=0.8\) and \(\iota=\pi/2\). Varying the orbital inclination changes the contribution of higher-order harmonics to the observed waveform, which may influence the impact of the thermal effects. While higher-order harmonics are highly suppressed for equal-mass systems, it has been shown that these can be triggered by the so-called "one-arm" spiral instability (East et al., 2016; Radice et al., 2016; Lehner et al., 2016), facilitating the estimation of the source orientation (Bustillo et al., 2021). In particular, our waveforms include the dominant quadrupole modes, \((\ell,m)=(2,\pm 2)\), together with the \((\ell,m)=(2,\pm 1),(3,\pm 3),(3,\pm 2)\) modes. Ideally, we would perform a wide injection campaign considering sources placed at a large variety of sky-locations, distances and source inclinations. However, the high computational cost of our parameter estimation runs, which make use of numerical relativity, prevents this. For this reason, we only use one fiducial sky-location for our runs. Moreover, the luminosity distance and sky-location mainly control the signal loudness, without altering the mode content of the signal. Given this, it is expected that the ratio of the maximum signal-to-noise ratios (SNRs) \(\rho^{\rm net}\) across the detector network (and therefore the ratio of the respective maximum likelihoods) obtained by the tabulated and hybrid models should show a weak dependence on these parameters. Since the likelihood, which is what controls the Bayes Factor \(\mathcal{B}\), goes as \((\rho^{\rm net})^{2}/2\), we expect the relative Bayes Factor for the hybrid and tabulated models to go as \((\rho_{\rm opt})^{2}/2\). Next, the Bayes Factor for each analysis roughly goes as \(\log\mathcal{B}\simeq\log\mathcal{L}_{\rm max}-\mathcal{C}\), where \(\mathcal{C}\) accounts for the Occam penalty paid by the model.
Finally, \(\log\mathcal{L}_{\rm max}\simeq\mathcal{M}\times(\rho_{\rm opt}^{\rm net})^{2}/2\), with \(\mathcal{M}\) denoting the match between our injection and the best-fitting template, and \(\rho_{\rm opt}^{\rm net}\) indicating the SNR of the injection, equal to the maximal (or optimal) SNR that any analysis can recover (Lange et al., 2017). In this situation, for each inclination we perform parameter inference for three fiducial combinations of sky-location and distance yielding reasonably different \(\rho_{\rm opt}\). With this, we perform a linear fit \(\log\mathcal{B}=\alpha\rho_{\rm max}^{2}+\beta\). Next, for each of our target sources, we compute the average SNR over sky-locations at a fiducial distance \(d_{L}^{\rm ref}\). Finally, using that the SNR is inversely proportional to the distance, we compute the distance \(d_{L}^{\rm det}\) at which the averaged relative Bayes Factor between the tabulated and hybrid models satisfies \(\log\mathcal{B}_{\rm H}^{\rm T}=5\), which we will call the "detection" distance.

#### Model selection

Table 2 shows \(\log\mathcal{B}_{\rm H}^{\rm T}\) for selected fiducial runs corresponding to each of our selected source inclinations. The parentheses show the luminosity distance chosen for the injection and the corresponding optimal SNR across the detector network. We find that the HShen EoS is most impacted by the choice of implementation of thermal effects, for all source inclinations. This is not a trivial result, as the impact of the thermal-effects treatment in the waveform for a given EoS may strongly depend on the source inclination. This may cause some EoSs to produce more observable differences at different inclinations. This is observed for the next two cases that are most impacted by the treatment of thermal effects, namely the SLy and LS220 EoSs. Both cases return similar values of \(\log\mathcal{B}_{\rm H}^{\rm T}\) for the two lowest inclinations; however, the SLy EoS is most affected for edge-on systems. Finally, the emission from BNS mergers with the DD2 EoS is the least influenced. Figure 3 shows the average "detection" distance \(d_{L}^{\rm det}\), as a function of EoS and source inclination, obtained after performing the fit described in the previous section. We have checked empirically that typical deviations from such a linear fit are of order \(\Delta\log\mathcal{B}\simeq 1\). To provide more conservative results, we also show lines corresponding to \(\log\mathcal{B}_{\rm H}^{\rm T}=8\). We find that for all EoSs, the signal features coming from the "tabulated" implementation of thermal effects can be detected at distances \(d_{L}\leq 50\) Mpc regardless of the source inclination, and at distances \(d_{L}=100\) Mpc for inclinations \(\iota<0.8\).

Figure 3: **Impact of thermal effects implementation averaged over source sky-location and azimuthal angle.** Average source distance for which the difference between the log-evidences obtained by the "tabulated" and "hybrid" models reaches \(\log\mathcal{B}_{\rm H}^{\rm T}=5\), when recovering a signal modeled through the "tabulated" implementation of thermal effects (solid lines). Averaging is performed over source sky-locations and observer's azimuthal angle. Distances at which \(\log\mathcal{B}_{\rm H}^{\rm T}=8\) are shown in dashed lines. We show results for different EoSs and varying source inclinations.

In other words, the correct analysis of a source like GW170817 would require the more realistic tabulated modeling of its EoS, were the source consistent with one of the EoSs of our sample and assuming that other effects our simulations do not yet incorporate, like magnetic fields, bulk and shear viscosity, and neutrinos, do not play a major role1. Moreover, the same is true even for weak edge-on sources, for averaged distances of order 50 Mpc. Footnote 1: It should be noted that the accurate numerical modelling of the actual GW spectrum of BNS post-merger remnants is still a matter of debate. In particular, the development of MHD-driven turbulence in the remnant, triggered by the magneto-rotational instability, is likely to affect the evolution of the system, influencing the emission of high-frequency GWs (see e.g. Shibata & Kiuchi (2017); Ruiz et al. (2021); Chabanov & Rezzolla (2023) for recent studies). To close this section, we note that the use of a simplified hybrid thermal treatment leads, in certain cases, to significant biases in the sampled parameters, especially the luminosity distance. The biases are due to the different signal amplitudes predicted by the tabulated and hybrid models, as is clear in Fig. 2, especially in the case of the HShen EoS. Nevertheless, we stress again that these parameters are expected to be measured with great accuracy from the minute-duration inspiral signal.

## Conclusions

Using a combination of a Bayesian framework and numerical-relativity simulations, we have shown the dramatic importance of an accurate treatment of thermal effects in the post-merger GW emission of BNS mergers. We have found that, for all EoSs and source inclinations considered, self-consistent "tabulated" implementations lead to modifications of the post-merger GW emission with respect to simplified "hybrid" approaches that are observable with a detector network formed by three Cosmic Explorers, for sources at distances \(d_{L}\leq 50\) Mpc. In particular, for inclinations \(\iota\leq 0.8\), consistent with GW170817, the differences are visible for distances \(d_{L}\leq 150\) Mpc, with the exception of the DD2 EoS. Out of the four EoSs considered and within the limitations of our simulations, the post-merger GW signal of the HShen EoS is the most influenced by a "hybrid" implementation of thermal effects. We note that our results rather underestimate the importance of a self-consistent implementation of thermal effects, as we perform inference on parameters that are expected to be accurately constrained from the long inspiral signal, like sky-location or source orientation. This allows hybrid models to exploit parameter degeneracies to achieve higher Bayesian evidences than they would otherwise, if such parameters were accurately constrained through the inspiral signal. Our work can be regarded as a "proof of principle" of the application of parameter inference to BNS merger remnants using numerical-relativity waveforms, which is obviously limited by the fact that we do not sample over the actual EoS. A natural extension of our work would be to compare injections including thermal effects for a given EoS with a large set (ideally continuous) of templates covering a wide range of EoSs (as somewhat done in Bustillo et al. (2021, 2022) for the case of collisions of a different matter source, Proca-stars2).
However, the high cost of our numerical simulations, together with the intrinsic discreteness and low number of simulations including thermal effects, prevents such an extension at present. Nevertheless, our study proves that waveforms sourced by the EoSs of our sample are sufficiently different so as to allow the identification of the underlying EoS for the range of signal loudness we consider. Footnote 2: Note that, in the Proca-star case, the simulations do not span a continuous range of EoSs, as for neutron stars. Instead, they span a very dense grid in the individual frequencies of the Proca fields.

## Acknowledgements

It is a pleasure to thank Zachariah Etienne and Leonardo Werneck for useful discussions. This work was supported by a fellowship from "la Caixa" Foundation (ID100010434), the European Union's Horizon2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 847648 (fellowship code LCF/BQ/PI20/11760016), the Generalitat Valenciana through the grants CIDEGENT/2021/046 and Prometeo CIPROM/2022/49, and the Spanish Agencia Estatal de Investigacion through the grants PRE2019-087617, PID2020-118635GB-I00 and PID2021-125485NB-C21 funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe. Further support has been provided by the EU's Horizon 2020 Research and Innovation (RISE) programme H2020-MSCA-RISE-2017 (FunFiCO-77740) and by the EU Staff Exchange (SE) programme HORIZON-MSCA-2021-SE-01 (NewFunFiCO-101086251). VVO acknowledges support from the Ministry of Science, Innovation and Universities (MICIU) of Spain (FPI grant PRE2018-085436). The authors acknowledge computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459, the support of the NSF CIT cluster for the provision of computational resources for our parameter inference runs, and the computational resources and technical support of the Spanish Supercomputing Network through the use of MareNostrum at the Barcelona Supercomputing Center (AECT-2023-1-0006) where the BNS merger simulations were performed. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. This document has LIGO DCC number LIGO-P2300344.
2309.13809
Pulsar Scintillation through Thick and Thin: Bow Shocks, Bubbles, and the Broader Interstellar Medium
Observations of pulsar scintillation are among the few astrophysical probes of very small-scale ($\lesssim$ au) phenomena in the interstellar medium (ISM). In particular, characterization of scintillation arcs, including their curvature and intensity distributions, can be related to interstellar turbulence and potentially over-pressurized plasma in local ISM inhomogeneities, such as supernova remnants, HII regions, and bow shocks. Here we present a survey of eight pulsars conducted at the Five-hundred-meter Aperture Spherical Telescope (FAST), revealing a diverse range of scintillation arc characteristics at high sensitivity. These observations reveal more arcs than measured previously for our sample. At least nine arcs are observed toward B1929$+$10 at screen distances spanning $\sim 90\%$ of the pulsar's $361$ pc path-length to the observer. Four arcs are observed toward B0355$+$54, with one arc yielding a screen distance as close as $\sim10^5$ au ($<1$ pc) from either the pulsar or the observer. Several pulsars show highly truncated, low-curvature arcs that may be attributable to scattering near the pulsar. The scattering screen constraints are synthesized with continuum maps of the local ISM and other well-characterized pulsar scintillation arcs, yielding a three-dimensional view of the scattering media in context.
S. K. Ocker, J. M. Cordes, S. Chatterjee, D. R. Stinebring, T. Dolch, V. Pelgrims, J. W. McKee, C. Giannakopoulos, D. J. Reardon
2023-09-25T01:36:51Z
http://arxiv.org/abs/2309.13809v2
Pulsar scintillation through thick and thin: Bow shocks, bubbles, and the broader interstellar medium ###### Abstract Observations of pulsar scintillation are among the few astrophysical probes of very small-scale (\(\lesssim\) au) phenomena in the interstellar medium (ISM). In particular, characterization of scintillation arcs, including their curvature and intensity distributions, can be related to interstellar turbulence and potentially over-pressurized plasma in local ISM inhomogeneities, such as supernova remnants, HII regions, and bow shocks. Here we present a survey of eight pulsars conducted at the Five-hundred-meter Aperture Spherical Telescope (FAST), revealing a diverse range of scintillation arc characteristics at high sensitivity. These observations reveal more arcs than measured previously for our sample. At least nine arcs are observed toward B1929+10 at screen distances spanning \(\sim 90\%\) of the pulsar's 361 pc path-length to the observer. Four arcs are observed toward B0355+54, with one arc yielding a screen distance as close as \(\sim 10^{5}\) au (\(<1\) pc) from either the pulsar or the observer. Several pulsars show highly truncated, low-curvature arcs that may be attributable to scattering near the pulsar. The scattering screen constraints are synthesized with continuum maps of the local ISM and other well-characterized pulsar scintillation arcs, yielding a three-dimensional view of the scattering media in context. keywords: stars:neutron - pulsars:general - ISM:general - turbulence - scattering - ISM: bubbles ## 1 Introduction Pulsars emit coherent radio beams that are scattered by electron density fluctuations along the line-of-sight (LOS), leading to wavefield interference at the observer. The resulting intensity modulations, or scintillation, are observed in pulsar dynamic spectra (intensity as a function of frequency and time) as highly chromatic and variable on minute to hour timescales. The character of the scintillation pattern (rms intensity modulations and timescale) is related to the Fresnel scale \(\sim\sqrt{\lambda D}\), which is \(\sim 0.01\) astronomical units (au) for typical pulsar distances \(D\) and observing wavelengths \(\lambda\)(Rickett, 1990). Pulsar scintillation can thus probe \(\lesssim\) au structure in the interstellar medium (ISM). Numerous studies have demonstrated that such structure not only exists but may be ubiquitous (Stanimirovic & Zweibel, 2018). Pulsar dynamic spectra often exhibit organized interlacing patterns, which manifest as parabolas of power in the secondary spectrum obtained by 2D Fourier Transform (FT) of the dynamic spectrum (Stinebring et al., 2001). These scintillation arcs are widely observed (Stinebring et al., 2022; Main et al., 2023), and their parabolic form is understood to be a generic result of the square-law relationship between time delay and angle in small-angle scattering (Walker et al., 2004; Cordes et al., 2006). However, observed secondary spectra show a broad range of arc characteristics for different LOSs and for single LOS at different epochs and observing frequencies, including variable arc widths, inverted arclets, clumps and asymmetries in arc intensity, and multiple arcs of different curvature (for examples see Figures 1-2, along with Stinebring et al., 2022 and Main et al., 2023). 
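As a minimal illustration of this construction, a secondary spectrum can be formed from a dynamic spectrum by a 2D FFT as sketched below; this omits refinements used later in this work, such as interpolation onto an equispaced wavelength grid, and the names are illustrative.

```python
import numpy as np

def secondary_spectrum(dynamic, dt, dnu):
    """2D power spectrum of a dynamic spectrum I(nu, t).

    dynamic: 2D intensity array, shape (n_channels, n_subints)
    dt:      subintegration time (s)
    dnu:     channel width (Hz)
    Returns the log power and the conjugate axes
    (Doppler frequency f_t in Hz, delay f_nu in s).
    """
    # Subtract the mean so the DC spike does not dominate the transform.
    spec = dynamic - dynamic.mean()
    power = np.abs(np.fft.fftshift(np.fft.fft2(spec))) ** 2
    f_nu = np.fft.fftshift(np.fft.fftfreq(dynamic.shape[0], d=dnu))  # conjugate to frequency
    f_t = np.fft.fftshift(np.fft.fftfreq(dynamic.shape[1], d=dt))    # conjugate to time
    return np.log10(power + 1e-30), f_t, f_nu
```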
Efforts to connect these features to the underlying physics of the ISM have largely focused on two scenarios that need not be mutually exclusive (Cordes et al., 1986): diffraction through a turbulent cascade of density fluctuations, and refraction through discrete structures not necessarily associated with ISM turbulence (e.g. Pen & Levin, 2014). We emphasize that both diffraction and refraction can produce scintillation arcs in their most generic forms. One method to assess the plasma properties of the scintillating medium involves mapping the secondary spectrum to the pulsar scattered image. This method has been applied to both single-dish pulsar observations under limiting assumptions (e.g. Stinebring et al., 2019; Rickett et al., 2021; Zhu et al., 2022), and to Very Long Baseline Interferometry (VLBI) observations yielding scattered images at much higher resolution, the most well-studied example being PSR B0834+06 (Brisken et al., 2010; Liu et al., 2016; Simard et al., 2019; Baker et al., 2022). This pulsar exhibits reverse arclets and arc asymmetries (Hill et al., 2005), with a reconstructed scattered image that is highly anisotropic and contains discrete inhomogeneities on sub-au scales (Brisken et al., 2010). These features have been attributed to either highly over-pressurized plasma structures (Hill et al., 2005) or plasma "sheets" viewed at large inclination angles relative to the LOS (Pen & Levin, 2014; Liu et al., 2016; Simard & Pen, 2018). Complementary constraints on the plasma responsible for scintillation arcs can be obtained from the secondary spectrum itself. The primary constraint of interest is the arc curvature, which can be used to infer the distance to the scattering medium and hence determine the spatial scale of the scattered image, which is related to the spatial scale of the plasma density fluctuations. Precise scattering screen distances are typically obtained by measuring temporal variations in arc curvature, which occur periodically due to the Earth's motion around the Sun and the pulsar's orbital motion around its companion, if it has one (e.g. Main et al., 2020; Reardon et al., 2020). Additional constraints from the secondary spectrum include comparing the observed arc frequency dependence and arc power distributions to theoretical expectations for a turbulent medium, which have yielded mixed evidence for and against Kolmogorov turbulence among different LOSs (Hill et al., 2003; Stinebring et al., 2019; Reardon et al., 2020; Rickett et al., 2021; Turner et al., 2023). In some cases, arc power is observed to vary systematically over time, presumably due to discrete ISM structures crossing the LOS (Hill et al., 2005; Wang et al., 2018; Sprenger et al., 2022). Scattering screens have been connected to foreground structures observed at other wavelengths, including an HII region (Mall et al., 2022), local interstellar clouds (McKee et al., 2022), a pulsar supernova remnant (Yao et al., 2021), and the Monogem Ring (Yao et al., 2022). Scintillation arcs have also been associated with screens near the edge of the Local Bubble (e.g. Bhat et al., 2016; Reardon et al., 2020; McKee et al., 2022; Yao et al., 2022; Liu et al., 2023), but the connection between these arcs and Local Bubble properties remains unclear. A less explored source of scintillation arcs is stellar bow shocks, including those of pulsars.
While many pulsars have transverse velocities in excess of the fast magnetosonic speed expected in the ISM, only a handful of pulsar bow shocks have been observed through direct imaging of the forward shock and/or the ram pressure confined pulsar wind nebula (PWN; Kargaltsev et al., 2017). Recently, scintillation arcs have been detected from the bow shock of a millisecond pulsar (D. Reardon et al., submitted). This result raises the possibility of using scintillation arcs to probe pulsar bow shocks, independent of more traditional direct imaging techniques (e.g. Brownsberger & Romani, 2014). We have used the Five-hundred-meter Aperture Spherical Telescope (FAST) to observe eight pulsars with flux densities, spin-down luminosities, and transverse velocities favorable for carrying out a survey of scintillation arcs from near-pulsar screens, including bow shocks (see Table 1 and Section 2). Scintillation arcs are faint features and are typically analyzed from data recorded with narrow (\(\sim 100\) kHz) channel widths, leading to a very low per-channel signal-to-noise ratio (S/N). Therefore the high gain of FAST allows for highly sensitive measurements. Our observations yielded a rich array of scintillation arcs probing a broad distribution of screens between the pulsars and observer. In this study we present the results of our FAST observing campaign, with an emphasis on constraining the roles of the extended ISM and discrete local structures, including pulsar bow shocks, in observed secondary spectra. Our sample is well-suited to address these issues as it spans a range of distances, dispersion measures (DMs), and scintillation regimes. The paper is organized as follows: Section 2 presents the basic theory of scintillation arcs and the requirements to detect near-pulsar screens; Section 3 describes the FAST observations; Section 4 describes the data analysis techniques implemented; and results for each pulsar are given in Section 5. Section 6 discusses the connection between scintillation arc properties and the ISM, including turbulent density fluctuations, bow shocks, and known discrete structures. We present conclusions in Section 7.

## 2 Theory of dynamic & secondary spectra

### Basic Phenomenology

The pulsar dynamic spectrum consists of fringe patterns that form from interference of scattered waves. We assume that the scattering occurs in a thin screen, and discuss the potential relevance of extended media in Section 7. The interference for two scattering angles (or equivalently, two angular locations in the pulsar scattered image) \(\mathbf{\theta}_{1}\) and \(\mathbf{\theta}_{2}\) leads to a sinusoidal fringe that corresponds to a single Fourier component in the secondary spectrum at coordinates \(f_{t}\), \(f_{\nu}\) (Stinebring et al., 2001; Walker et al., 2004; Cordes et al., 2006):1 Footnote 1: Note the minus sign in Eq. 1: \(f_{t}<0\) when \(\theta_{2}>0\) and vice versa, due to the relative motion of the pulsar and the deflector. E.g., Hill et al. (2005) observed individual arclets that moved from negative to positive \(f_{t}\) over the course of several months, due to the pulsar LOS passing through a discrete scattering structure. \[f_{t}=-\frac{1}{s\lambda}(\mathbf{\theta}_{2}-\mathbf{\theta}_{1})\cdot\mathbf{V}_{\perp} \tag{1}\] \[f_{\nu}=\left[\frac{D(1-s)}{2cs}\right](\mathbf{\theta}_{2}^{2}-\mathbf{\theta}_{1}^{2}).
\tag{2}\] Here \(D\) is the distance between the pulsar and observer, \(c\) is the speed of light, \(\lambda\) is the observing wavelength at the center of the band, \(s\) is the fractional screen distance (\(s=0\) at the pulsar and \(s=1\) at the observer), and \(\mathbf{V}_{\perp}\) is the transverse velocity of the screen where it intersects the direct LOS, \[\mathbf{V}_{\perp}=(1-s)\mathbf{V}_{\rm PSR\perp}+s\mathbf{V}_{\rm obs\perp}-\mathbf{V}_{\rm scr\perp}. \tag{3}\] The secondary spectrum coordinates given by Eq. 1-2 are the Fourier conjugates of time \(t\) and frequency \(\nu\), and are equivalent to the differential Doppler shift and the differential Doppler delay. It can also be convenient to consider the Fourier conjugate to observing wavelength (Cordes et al., 2006; Fallows et al., 2014; Reardon et al., 2020), \[f_{\lambda}=cf_{\nu}/\lambda^{2}=\left[\frac{D(1-s)}{2s\lambda^{2}}\right](\mathbf{\theta}_{2}^{2}-\mathbf{\theta}_{1}^{2}). \tag{4}\] From Eq. 1-2 it is apparent that each interfering pair of points \((\mathbf{\theta}_{1},\mathbf{\theta}_{2})\) lies on a parabola \(f_{\nu}\propto f_{t}^{2}\) due to the linear and quadratic relationships between \(\mathbf{\theta}\) and \(f_{t}\) and \(f_{\nu}\), respectively. A scintillation arc can result from interference between all angular pairs (Cordes et al., 2006).

Figure 1: Dynamic and secondary spectra for (top left to bottom right): PSRs B0355+54, B0919+06, B0950+08, and J1643-1224. Each dynamic spectrum has a 3.2 s subintegration time and a 0.06 MHz frequency resolution, and has been normalized to unit mean intensity. Some dynamic spectra have strong radio frequency interference (RFI) at 1000 MHz that is marked and set to the mean intensity for visualization here; to calculate secondary spectra, the dynamic spectra were interpolated across the hot frequency channel using a 2D Gaussian kernel. Each secondary spectrum was calculated using the dynamic spectrum interpolated onto an equispaced wavelength grid, and is shown here in logarithmic power units. The mean (log) of our noise baseline has been subtracted in the secondary spectra shown, and the \(f_{t}=0\) channel masked. For B0950+08, secondary spectra for the second epoch (MJD 59523) are shown. For J1643-1224, an inset shows a 6 MHz and 35 min-long portion of the dynamic spectrum on MJD 59527. The secondary spectra shown for J1643-1224 correspond to the first, longer epoch (MJD 59523). All of these secondary spectra only show a fraction of their full Nyquist range.

Figure 2: Dynamic and secondary spectra for (top left to bottom right): PSRs J1713+0747, J1740+1000, B1929+10, and B1957+20. Spectra were formed using the same methods as Figure 1, except that a subintegration time of 6.4 s and a frequency resolution of 1 MHz were used for B1957+20 due to its low S/N. The dynamic spectrum shown here for B1957+20 was further smoothed by a factor of 3 in frequency and time using a Gaussian kernel, for improved demonstration of the scintles (but the secondary spectrum was formed from the unsmoothed dynamic spectrum).

We define the arc curvature \(\eta\) in frequency-independent coordinates \(f_{\lambda}=\eta f_{t}^{2}\) (e.g. Reardon et al., 2020): \[\eta=\frac{Ds(1-s)}{2V_{\perp}^{2}\cos^{2}\!\psi}, \tag{5}\] where \(\psi\) arises from the dot product between \(\mathbf{\theta}_{i}\) and \(\mathbf{V}_{\perp}\) (Eq.
and is the angle between the screen's effective velocity \(\mathbf{V}_{\perp}\) and the vector direction of points \(\mathbf{\theta}_{i}\) in the scattered image, if the image is anisotropic (Walker et al., 2004). If the image is isotropic then \(\cos\psi=1\). For comparison to previous studies, the frequency-independent curvature \(\eta\) may be converted to the curvature that would be measured in \(f_{t}-f_{\nu}\) coordinates (\(\eta_{\nu}\)) simply using \(\eta=c\eta_{\nu}/\lambda^{2}\). For \(\eta_{\nu}\) in s\({}^{3}\) and \(\nu\) in GHz, this gives \(\eta\approx 33\times 10^{2}\ {\rm m^{-1}\ mHz^{-2}}\times(\eta_{\nu}/{\rm s^{3}})\times(\nu/{\rm GHz})^{2}\).

The above discussion tacitly assumes that scintillation arises from the mutual interference of an angular spectrum of plane waves. An alternative description based on the Fresnel-Kirchhoff diffraction integral nonetheless yields the same expressions given in Eq. 1-2 (Walker et al., 2004). Scintillation arc theory based solely on refraction leads to these same expressions but refers to interference between multiple images of the pulsar, rather than points in a single scattered image (e.g. Pen & Levin, 2014). It is clear that the manifestation of parabolic arcs in the secondary spectrum is geometric and therefore generic; however, the power distribution along these arcs is not generic because it depends on the shape of the scattered image. In Section 6 we assess the distribution of power within observed secondary spectra, in the context of interstellar electron density fluctuations.

### Requirements to Detect Scintillation Arcs from Pulsar Bow Shocks

If pulsar bow shocks produce scintillation arcs, then there is an optimal range of both pulsar and observing parameters to detect them. In this section we demonstrate that our observations meet these requirements in principle, barring S/N constraints. A scintillation arc is only detectable if the highest point on the arc exceeds at least one sample in \(f_{\nu}\). Assuming the arc fills the entire Nyquist range \([-f_{t,\rm Ny},+f_{t,\rm Ny}]\), we require \(f_{\nu}(f_{t,\rm Ny})>\Delta f_{\nu}\), where \(f_{t,\rm Ny}=1/(2\Delta t)\) for a subintegration time \(\Delta t\) and \(\Delta f_{\nu}=1/B\) for a total frequency bandwidth \(B\). The subintegration time \(\Delta t\) corresponds to the time resolution of the dynamic spectrum, which is typically several seconds so that multiple pulses can be averaged together to achieve high S/N (see Section 4). The parabola \(f_{\nu}(f_{t,\rm Ny})=\eta_{\nu}(f_{t,\rm Ny})^{2}\) yields a minimum detectable arc curvature

\[\eta_{\nu,\rm min}=4(\Delta t)^{2}/B, \tag{6}\]

or in the equivalent wavelength-derived curvature, \(\eta_{\rm min}=4c(\Delta t)^{2}/B\lambda^{2}\). To further solve for the minimum detectable screen distance \(s\), we must consider the relative velocities of the pulsar, screen (or bow shock), and observer. For a screen at the bow shock, \(V_{\rm scr\perp}\lesssim V_{\rm psr\perp}\). Assuming an observer velocity much smaller than \(V_{\rm psr\perp}\) and defining \(V_{\rm psr\perp}-V_{\rm scr\perp}\equiv\epsilon V_{\rm psr\perp}\), the screen's effective transverse velocity (Equation 3) reduces to \(\mathbf{V}_{\perp}\approx(\epsilon-s)\mathbf{V}_{\rm psr\perp}\). For a bow shock, \(s\ll 0.1\) and \(\epsilon\lesssim 1\), and Equation 5 reduces to

\[\eta_{\nu}=Dsc/2\nu^{2}\epsilon^{2}V_{\rm psr\perp}^{2}, \tag{7}\]

assuming \(\cos^{2}\psi=1\) for simplicity.
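To make these scales concrete, the following minimal Python sketch (numpy only) evaluates Eq. 6 for a FAST-like setup and Eq. 7 for a notional bow-shock screen; all parameter values are illustrative, not measurements:

```python
import numpy as np

C = 2.998e8    # speed of light (m/s)
AU = 1.496e11  # astronomical unit (m)
KPC = 3.086e19 # kiloparsec (m)

def eta_nu_min(dt, B):
    """Minimum detectable curvature (Eq. 6) in s^3, for
    subintegration time dt (s) and total bandwidth B (Hz)."""
    return 4.0 * dt**2 / B

def eta_nu_shock(D, s, nu, eps, v_psr):
    """Curvature (Eq. 7) in s^3 of a screen at fractional distance
    s << 1 from the pulsar: distance D (m), frequency nu (Hz),
    velocity ratio eps, pulsar transverse speed v_psr (m/s)."""
    return D * s * C / (2.0 * nu**2 * eps**2 * v_psr**2)

# Illustrative values: dt = 3.2 s, B = 150 MHz, nu = 1.4 GHz, and a
# screen 1000 au from a pulsar at 1 kpc moving at 100 km/s.
dt, B, nu = 3.2, 150e6, 1.4e9
D, d_sl, eps, v_psr = 1.0 * KPC, 1000 * AU, 0.5, 100e3
print(eta_nu_min(dt, B))                          # ~2.7e-7 s^3
print(eta_nu_shock(D, d_sl / D, nu, eps, v_psr))  # ~4.6e-6 s^3: detectable
```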
Equations 6 and 7 thus yield a minimum detectable screen distance

\[s_{\rm min}=\frac{8\epsilon^{2}}{c}\frac{(\nu V_{\rm psr\perp}\Delta t)^{2}}{DB}, \tag{8}\]

which in physical units is

\[d_{\rm sl,min}=sD\approx 1.8\ {\rm au}\times\frac{(\epsilon\,\nu_{\rm GHz}\,V_{\rm psr,100}\,\Delta t)^{2}}{B_{\rm GHz}} \tag{9}\]

evaluated for frequencies in GHz, \(\Delta t\) in seconds, and \(V_{\rm psr\perp}\) in units of 100 km/s. If the scintillation arc only extends to a fraction \(\kappa\) of the Nyquist range in \(f_{t}\), then \(d_{\rm sl,min}\) will be larger by a factor of \(1/\kappa^{2}\); i.e., \(\kappa^{2}\) enters the denominator of Equation 9. We thus find that fast subintegration times, large bandwidths, and lower velocity objects are most favorable for detection of arcs from pulsar bow shocks, assuming these arcs are high enough intensity to be detected. Low observing frequencies (\(<1\) GHz) are likely less favorable, as arcs are generally observed to become increasingly diffuse at lower observing frequencies (Wu et al., 2022).

Placing \(d_{\rm sl,min}\) at the stand-off radius of the bow shock, we can solve for the range of spin-down luminosities and pulsar transverse velocities needed to resolve the bow shock as a scintillating screen. Assuming that the entire pulsar spin-down energy loss is carried away by the relativistic wind, the thin shell limit gives the bow shock stand-off radius:

\[R_{0}=\sqrt{\frac{\dot{E}}{4\pi c\rho v_{*}^{2}}}\approx 225\ {\rm au}\times\left[\left(\frac{\dot{E}}{10^{33}\ {\rm erg\ s^{-1}}}\right)\left(\frac{n_{H}}{{\rm cm^{-3}}}\right)^{-1}\left(\frac{v_{*}}{100\ {\rm km\ s^{-1}}}\right)^{-2}\right]^{1/2}, \tag{10}\]

where \(\dot{E}\) is the spin-down luminosity of the pulsar, \(c\) is the speed of light, \(v_{*}\) is the pulsar velocity, and \(\rho=n_{H}\gamma_{H}m_{H}\) is the ISM density, which depends on the number density of atomic hydrogen \(n_{H}\), the cosmic abundance factor \(\gamma_{H}\), and the mass of the hydrogen atom \(m_{H}\) (Wilkin, 1996; Chatterjee & Cordes, 2002). Figure 3 shows the phase space of \(\dot{E}\) vs. \(V_{\rm psr\perp}\) for the pulsars observed, assuming \(v_{*}=V_{\rm psr\perp}\), compared to different ISM electron densities \(n_{e}\) calculated assuming \(d_{\rm sl,min}=R_{0}\), \(n_{H}\approx n_{e}\), and \(\gamma_{H}=1.37\). In principle, all of the pulsars observed in our study have high enough \(\dot{E}\) and small enough \(V_{\rm psr\perp}\) to yield a detectable scintillation arc from their bow shocks (if the bow shocks exist), for typical ISM densities. This statement does not account for the S/N of the arcs, which depends on the unknown scattering strength of the bow shocks.

## 3 Observations

We observed eight pulsars between October and December 2021 at FAST; the source list, pulsar properties, and observation dates are shown in Table 1. Our source list was primarily chosen based on the requirements described in Section 2.2, and a few of our sources are bright pulsars with previously detected scintillation arcs (although specific arc properties were not factored into the pulsar selection). The sample includes pulsars that have bow shocks previously observed in H\(\alpha\) (B1957+20) or ram pressure confined PWNe observed in nonthermal radio or X-ray emission (B0355+54, B0950+08, and B1929+10), as well as pulsars that do not have known bow shocks but do have spin-down luminosities and transverse velocities favorable for producing bow shocks and detectable scintillation arcs (see Section 2.2).
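As a rough numerical check of this selection criterion, the sketch below compares the stand-off radius of Eq. 10 against the minimum resolvable screen distance of Eq. 9 (numpy assumed; the input values are illustrative placeholders rather than the measured parameters of Table 1):

```python
import numpy as np

AU_CM = 1.496e13   # au in cm
M_H = 1.6735e-24   # hydrogen atom mass (g)
C_CGS = 2.998e10   # speed of light (cm/s)

def standoff_radius_au(Edot, n_H, v_kms, gamma_H=1.37):
    """Bow shock stand-off radius (Eq. 10) in au, for spin-down
    luminosity Edot (erg/s), H density n_H (cm^-3), speed v (km/s)."""
    rho = n_H * gamma_H * M_H      # ISM mass density (g cm^-3)
    v = v_kms * 1e5                # cm/s
    return np.sqrt(Edot / (4.0 * np.pi * C_CGS * rho * v**2)) / AU_CM

def d_sl_min_au(eps, nu_GHz, v_100, dt_s, B_GHz):
    """Minimum resolvable screen distance from the pulsar (Eq. 9) in au."""
    return 1.8 * (eps * nu_GHz * v_100 * dt_s)**2 / B_GHz

# Placeholder inputs: Edot = 4e33 erg/s, n_H = 0.1 cm^-3, v = 177 km/s
R0 = standoff_radius_au(4e33, 0.1, 177.0)            # ~800 au
d_min = d_sl_min_au(0.5, 1.4, 1.77, 3.2, 0.15)       # ~190 au
print(R0, d_min, R0 > d_min)  # shock resolvable in principle if R0 > d_min
```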
None of these pulsars have supernova remnant associations, making a bow shock the most likely source of any scintillation arc with a screen distance very close to the pulsar. Each source was observed for 2 hours at a single epoch, except for J1643\(-\)1224 and B0950+08, which were observed in two epochs separated by a few weeks. Data were recorded in filterbank format at a time resolution of \(98~\mu\)s and a frequency resolution of \(0.06~\)MHz. FAST covers a frequency band of \(1-1.5~\)GHz, but bandpass roll-off at the upper and lower \(10\%\) of the band yields an effective bandwidth from \(1.05-1.45~\)GHz. A noise diode injected an artificial modulated signal for one minute at the start and end of each observation, in order to verify gain stability and perform flux calibration.

## 4 Data reduction & analysis

### Formation of Dynamic & Secondary Spectra

Dynamic spectra consist of the on-pulse intensity averaged over multiple pulses. After de-dispersion, the filterbank data were folded in 3.2 second long subintegrations using phase-connected timing solutions generated by tempo (Nice et al., 2015). This subintegration time was chosen to provide sufficient coverage of very low-curvature arcs in the secondary spectrum, based on Eqs. 7-9; for \(\Delta t=3.2~\)s and \(B=150~\)MHz, \(\eta_{\rm min}=1.8\times 10^{-3}~\)m\({}^{-1}\) mHz\({}^{-2}\) at 1.4 GHz. For B1957+20 a longer subintegration time of 6.4 seconds was used due to the pulsar's low S/N; in this case, \(\eta_{\rm min}=7\times 10^{-3}~\)m\({}^{-1}\) mHz\({}^{-2}\). The on-pulse signal was extracted from each folded subintegration using the phase range containing intensities within \(90\%\) of the peak pulse intensity. The mean on-pulse flux density \(S(t_{i},\nu_{i})\) for each subintegration \(t_{i}\) and frequency channel \(\nu_{i}\) was calibrated by subtracting the mean off-pulse flux density of each subintegration and dividing by the bandpass of the entire observation, which was also calculated using off-pulse data. In some epochs, the bandpass changed slightly over the 2-hour observing period due to instrumental effects, so multiple bandpasses were calculated for calibration. In all observations, wideband radio frequency interference (RFI) persisted between \(1140\) and \(1300~\)MHz. We subsequently divided all dynamic spectra into two frequency bands: \(1050-1140~\)MHz and \(1300-1450~\)MHz. Transient RFI was masked and replaced with values interpolated from neighboring data points using a 2D Gaussian convolution kernel. Before forming a secondary spectrum, each dynamic spectrum was interpolated onto a grid equispaced in wavelength \(\lambda=c/\nu\), and a 2D Hanning window was applied to the outer edges of the dynamic spectrum to reduce sidelobe response in the secondary spectrum. The secondary spectrum was then formed from the squared magnitude of the 2D FFT of the dynamic spectrum: \(S_{2}(f_{t},f_{\lambda})=|\tilde{S}(f_{t},f_{\lambda})|^{2}\), where \(\tilde{S}\) is the 2D FFT of \(S(t,\lambda)\). Gridding the dynamic spectrum in \(\lambda\) yields a frequency-independent arc curvature across a single contiguous frequency band (see Equation 5), and thus mitigates smearing of arc features in secondary spectra formed from broad bandwidths (Gwinn, 2019). However, the interpolation kernel used to resample the dynamic spectrum in \(\lambda\) can have a dramatic effect on the secondary spectrum; e.g., we found that linear interpolation induces a non-linear drop-off in the (logarithmic) noise baseline of the secondary spectrum.
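A minimal sketch of this processing chain follows (numpy assumed; windowing and RFI handling are simplified relative to our actual pipeline, e.g. a full 2D Hanning window is applied rather than tapering only the outer edges):

```python
import numpy as np

def secondary_spectrum(dyn, t, nu):
    """Form S2(f_t, f_lambda) from a dynamic spectrum dyn[time, freq]
    sampled at times t (s) and frequencies nu (Hz, ascending)."""
    c = 2.998e8
    lam = c / nu                      # wavelengths (descending)
    # Regrid each time slice onto an equispaced wavelength axis.
    lam_grid = np.linspace(lam.min(), lam.max(), lam.size)
    dyn_lam = np.array([np.interp(lam_grid, lam[::-1], row[::-1])
                        for row in dyn])
    # Window to suppress FFT sidelobes, then square the 2D FFT.
    win = np.outer(np.hanning(dyn_lam.shape[0]),
                   np.hanning(dyn_lam.shape[1]))
    S2 = np.abs(np.fft.fftshift(np.fft.fft2(dyn_lam * win)))**2
    f_t = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
    f_lam = np.fft.fftshift(np.fft.fftfreq(lam_grid.size,
                                           d=lam_grid[1] - lam_grid[0]))
    return S2, f_t, f_lam
```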
In order to ensure a flat noise baseline in subsequent analysis, we subtracted the mean logarithmic noise as a function of \(f_{\lambda}\), calculated in a 5 mHz window at each edge of the secondary spectrum, and we note that inference of the power distribution along scintillation arcs can be biased by the choice of interpolation kernel if the shape of the off-arc noise baseline is not accounted for. The dynamic and secondary spectra for all eight pulsars are shown in Figures 1 and 2.

\begin{table}
\begin{tabular}{l c c c c c c c c c c}
PSR & \(l,b\) & \(P\) & DM & \(D\) & \(V_{\rm psr\perp}\) & \(\dot{E}\) & \(S_{1400}\) & \(R_{0}\) & Bow Shock/PWN & Epochs \\
 & (deg) & (ms) & (pc cm\({}^{-3}\)) & (kpc) & (km/s) & (erg s\({}^{-1}\)) & (mJy) & (au) & & (MJD) \\
\hline \hline
B0355+54 & 148, 0.8 & 156 & 57.14 & \(1.09^{+0.23}_{-0.16}\) & \(61^{+12}_{-9}\) & \(4.5\times 10^{34}\) & 23 & 7900 & X-ray & 59509 \\
B0919+06 & 225, 36 & 430 & 27.29 & \(1.21^{+0.05}_{-0.16}\) & \(505\pm 80\) & \(6.8\times 10^{33}\) & 10 & 370 & – & 59490 \\
B0950+08 & 228, 43 & 253 & 2.97 & \(0.262^{+0.005}_{-0.005}\) & \(36.6\pm 0.7\) & \(5.6\times 10^{32}\) & 100 & 1480 & Radio & 59500, 59523 \\
J1643\(-\)1224 & 5.7, 21 & 4.62 & 62.41 & \(0.763^{+0.18}_{-0.09}\) & \(4^{+15}_{-4}\) & \(7.4\times 10^{33}\) & 4 & 4800 & – & 59523, 59527 \\
J1713+0747 & 29, 25 & 4.57 & 15.92 & \(1.05^{+0.06}_{-0.09}\) & 30.2 & \(3.5\times 10^{33}\) & 8 & 4500 & – & 59509 \\
J1740+1000 & 34, 20 & 154 & 23.89 & 1.2 & 184 & \(2.3\times 10^{35}\) & 3 & 19000 & X-ray & 59510 \\
B1929+10 & 47, \(-3.9\) & 226 & 3.18 & \(0.361^{+0.09}_{-0.09}\) & \(177^{+4}_{-5}\) & \(3.9\times 10^{33}\) & 29 & 805 & Radio, X-ray & 59515 \\
B1957+20 & 59, \(-4.7\) & 1.61 & 29.12 & \(2.57^{+0.77}_{-0.48}\) & \(366^{+62}_{-98}\) & \(1.6\times 10^{35}\) & 0.3 & 8000 & H\(\alpha\), X-ray & 59506 \\
\hline
\end{tabular}
\end{table}
Table 1: The sample of observed pulsars and their properties. From left to right: Pulsar name, Galactic longitude and latitude, period, DM, distance, transverse velocity, spin-down luminosity, typical flux density at \(1400~\)MHz, bow shock stand-off radius, wavelengths at which the bow shock or PWN has been detected, and observing epochs. Distances and transverse velocities are derived from VLBI parallax and proper motions (from Brisken et al., 2002; Chatterjee et al., 2001, 2004, 2009; Ding et al., 2023; Romani et al., 2022), except for J1740+1000 (see main text for details). For pulsars without confirmed H\(\alpha\) bow shocks, the stand-off radii shown are rough estimates based on \(\dot{E}\) and \(V_{\rm psr\perp}\) assuming an ISM electron density of \(0.1~\)cm\({}^{-3}\) and an inclination of \(90^{\circ}\) (note for B1957+20, the resulting stand-off radius estimate would be \(7880~\)au, consistent with H\(\alpha\) imaging; Romani et al. 2022). All other parameters are retrieved from the ATNF catalogue (Manchester et al., 2005).

Figure 3: Spin-down luminosity \(\dot{E}\) vs. transverse velocity \(V_{\rm psr\perp}\) along with bow shock detectability for different values of the ISM electron density (dashed lines) assuming \(\epsilon=0.5\) (see Section 2.2). Pulsars with parallax measurements are shown in grey and our observed sources are shown in green and marked as O23 in the legend. Star symbols represent pulsars with confirmed H\(\alpha\)-emitting bow shocks. Sources with PWNe are shown as triangles. B2224+65 (Guitar Nebula), J2124\(-\)3358, and J0437\(-\)4715 are also highlighted as examples of other pulsars not observed in this work with known bow shocks. Pulsars that fall above the dashed lines meet the required \(\dot{E}\) and \(V_{\rm psr\perp}\) for scintillation arcs from their bow shocks to be resolved using the FAST observing parameters, assuming those arcs have sufficient S/N.
Changing \(\epsilon\) by a factor of two would have roughly the same impact on the minimum required \(\dot{E}\) as changing the ISM density by one order of magnitude.

### Arc Identification and Curvature Measurements

Scintillation arcs were identified by calculating the mean logarithmic intensity along parabolic cuts through the secondary spectrum for a range of arc curvatures. This procedure is also known as the generalized Hough transform (Ballard, 1981; Bhat et al., 2016). For low arc curvatures (shallow arcs), the mean intensity was calculated out to a maximum \(f_{\lambda}\approx 10\%\) of the total Nyquist range of the secondary spectrum, in order to improve sensitivity to weaker arcs near the origin of the spectrum. For high arc curvatures, the mean intensity was calculated using a maximum \(f_{\lambda}\approx 90\%\) of the total Nyquist range. In all but one case, curvatures correspond to a reference frequency of 1375 MHz because they were fit using the upper frequency band, which covered a larger contiguous bandwidth and contained less RFI. For B0919+06, curvatures were fit in the lower frequency band because it contained an additional arc that was not detected at 1375 MHz. An example of the resulting power distribution \(\langle\log_{10}S_{2}(f_{t},\eta f_{t}^{2})\rangle\) vs. curvature \(\eta\) is shown in Figure 4 for B1929+10.

Candidate arcs were identified as local maxima in the power distribution that were at least \(1\sigma\) greater than their neighboring pixels, where \(\sigma\) is the rms off-arc noise, and each local maximum was required to span at least three pixels to avoid noise spikes. Each local maximum and its neighbors were then fit with an inverted parabola, and the value of \(\eta\) at the peak of the fitted parabola was taken to be the best-fit curvature for the candidate arc. The associated error on \(\eta\) was determined from the range within which the fitted, inverted parabola was \(<1\sigma\) below its peak (where \(\sigma\) again is the rms off-arc noise). Similar procedures have been used by, e.g., Reardon et al. (2020) and McKee et al. (2022). Candidate arcs were then sorted by their uniqueness; i.e., for candidates with the same \(\eta\) to within the errors, only the highest S/N candidate was selected for the final set of arcs reported for each pulsar.

In some cases (e.g., B0919+06, B0950+08, and B1957+20), the power distribution increases to a power maximum that extends over a broad range of \(\eta\) values, corresponding to a "bounded" arc that contains diffuse power filling in the entire arc's extent in \(f_{\lambda}\). In these cases, the arc curvature is reported as a lower limit based on the curvature at which the mean power distribution reaches 95% of its maximum.

The methods described above assume that the noise in the power distribution follows Gaussian statistics; however, this is not generally true for very low curvatures, because interpolating the dynamic spectrum onto an equispaced wavelength grid introduces correlated noise that has a correlation length greater than a few samples at low \(f_{\lambda}\).
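A minimal sketch of this curvature search (numpy assumed; the peak-finding and error analysis described above are omitted, and nearest-pixel sampling stands in for interpolation):

```python
import numpy as np

def hough_power(S2, f_t, f_lam, etas, f_lam_max):
    """Mean log10 power along parabolas f_lam = eta * f_t**2 for trial
    curvatures `etas`, keeping points with f_lam <= f_lam_max (e.g. 10%
    or 90% of the Nyquist range). S2[f_lam, f_t] is assumed to be the
    f_lam >= 0 half of the secondary spectrum, with ascending axes."""
    power = np.zeros(len(etas))
    for k, eta in enumerate(etas):
        fl = eta * f_t**2                       # parabola in f_lambda
        keep = fl <= min(f_lam_max, f_lam.max())
        i = np.searchsorted(f_lam, fl[keep])    # nearest-pixel rows
        i = np.clip(i, 0, f_lam.size - 1)
        j = np.nonzero(keep)[0]                 # column indices
        power[k] = np.log10(S2[i, j]).mean()
    return power

# Usage: etas = np.geomspace(1e-3, 1e4, 500)
# best_eta = etas[np.argmax(hough_power(S2, f_t, f_lam, etas, fmax))]
```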
While our arc-identification procedure does not directly account for this correlated noise, we note that all of our methods were repeated on secondary spectra formed from the original, radio frequency-domain dynamic spectra. We found no difference in the results other than reduced precision in the arc curvature measurements, due to the frequency-dependent smearing of arc features. Demonstrations of the best-fit curvatures compared to the original secondary spectra are shown in Figures 5-7 for B1929+10, B0355+54, and B0950+08, which display the range of arc traits observed.

## 5 Results

Scintillation arcs were detected for all of the pulsars in this study. The secondary spectra shown in Figures 1-2 reveal diverse scintillation characteristics, ranging from thin, highly defined arcs (e.g. B1929+10, J1713+0747) to diffuse, broad arcs (B0355+54, J1643\(-\)1224), filled-in arcs (B0919+06, B0950+08, B1957+20), and in one case dramatic reverse arclets (J1740+1000). We find a number of additional arcs beyond those previously reported for pulsars in the dataset, including low-curvature, truncated arcs for B1929+10, B0355+54, B0919+06, and B0950+08. B1929+10 notably shows an extremely large concentration of arcs, discussed further below.

For each pulsar we report arc curvatures, and infer estimates of the fractional screen distance \(s\) using Equations 3 and 5. The effective screen velocity \(V_{\perp}\) was calculated using the pulsar's transverse velocity (Table 1) and Earth's transverse velocity relative to the LOS, based on the Moisson & Bretagnon (2001) ephemeris model implemented in astropy and scintools (Astropy Collaboration et al., 2022; Reardon et al., 2020). The estimated error in the Earth velocity term is negligible compared to the uncertainty in the pulsar velocity term. In general, the screen velocity \(V_{\rm scr\perp}\) and the angle of anisotropy \(\psi\) are not independently measurable, and scintillation studies use multi-epoch measurements of arc curvature variations to break the degeneracy of these parameters when inferring \(s\) (e.g. Main et al., 2020; Reardon et al., 2020). Lacking enough multi-epoch measurements to do such an analysis, we instead make fiducial estimates of \(s\) by assuming \(V_{\rm scr\perp}\ll(V_{\rm psr\perp},V_{\rm obs\perp})\) (where appropriate) and \(\psi=0^{\circ}\). Figure 8 shows \(\eta\) vs. \(s\) for different values of \(\psi\) and \(V_{\rm scr\perp}\), using B1929+10 as an example (\(V_{\rm psr\perp}\) = 177 km/s, \(V_{\rm obs\perp}\) = 16 km/s). Larger values of \(V_{\rm scr\perp}\) and \(\psi\) both result in larger \(\eta\) for a given \(s\), although \(V_{\rm scr\perp}\) has the largest impact for screens near the observer (\(s\gtrsim 0.8\)) when it is comparable to or larger than \(V_{\rm obs\perp}\). In most cases, assuming \(\psi=0^{\circ}\) and \(V_{\rm scr\perp}\ll(V_{\rm psr\perp},V_{\rm obs\perp})\) yields an upper limit on \(s\). However, we note several instances below where the measured arc curvature requires either larger \(\psi\) and/or larger \(V_{\rm scr\perp}\), and in Section 6.2 we consider potential bow shock screens with larger \(V_{\rm scr\perp}\). In some cases, the depth of the intensity valley within an arc may also be an indicator of anisotropy (see e.g. Appendix B of Reardon et al., 2020). Figure 8 demonstrates that Equation 5 technically yields two possible values of \(s\) for a given \(\eta\).
For several pulsars in our dataset, \(V_{\rm psr\perp}\gg V_{\rm obs\perp}\) and we can ignore the solution at \(s\approx 1\). The results for each pulsar are elaborated below and briefly compared to previous relevant observations.

### B0355+54

B0355+54 displays four scintillation arcs, whose curvatures are shown in Table 2. The best-fit curvatures are also shown overlaid on the original secondary spectrum in Figure 6. A fifth, one-sided arc at a curvature of about 60 m\({}^{-1}\) mHz\({}^{-2}\) is marginally visible in Figure 6 but was not considered significant based on our detection criteria (Section 4). The arcs have a smooth and diffuse visual appearance, although Arc C (curvature \(327\pm 38\) m\({}^{-1}\) mHz\({}^{-2}\)) contains multiple power enhancements spanning \(\lesssim 25\times 10^{3}\) m\({}^{-1}\) in \(f_{\lambda}\).

For B0355+54, \(V_{\rm obs\perp}\approx 25\) km/s, a significant fraction of the pulsar's transverse velocity \(V_{\rm psr\perp}=61^{+12}_{-9}\) km/s (Chatterjee et al., 2004). We therefore estimate a range of screen locations for each arc, considering the possibility that some screens may be either closer to the pulsar or closer to the observer, and that the screen orientation \(\psi\) is not constrained. The estimated ranges of \(s\) are shown in Table 2. For the lowest curvature arc, the near-pulsar solution is \(s\leq(6\pm 3)\times 10^{-4}\) for \(\psi\geq 0^{\circ}\), where error bars correspond to the propagated uncertainties on the arc curvature. For \(D=1.09\) kpc (Chatterjee et al., 2004) this \(s\) corresponds to a physical distance \(\leq 0.98\) pc \(\approx 2\times 10^{5}\) au from the pulsar, and the screen could be substantially closer if \(\psi>0^{\circ}\). Conversely, the near-observer solution for this arc yields \(s>0.9998\), which would correspond to a screen extremely close (\(<1\) pc) to the observer (after accounting for uncertainties in the pulsar distance). Previous studies of this pulsar have observed single scintillation arcs with variable power enhancements over hour to month timescales (Xu et al., 2018; Wang et al., 2018; Stinebring, 2007). Wang et al. (2018) measure an arc curvature of 0.0216 s\({}^{3}\) at 2.25 GHz, equivalent to 361 m\({}^{-1}\) mHz\({}^{-2}\), which is broadly consistent with the curvature of Arc C.

Figure 4: Mean logarithmic power along parabolic arcs in the secondary spectrum of B1929+10, as a function of arc curvature. The lefthand panel shows the mean power calculated up to a maximum \(f_{\lambda}\) of 2500 m\({}^{-1}\), to improve the S/N of faint arcs at low curvatures. The righthand panel shows mean power calculated up to a maximum \(f_{\lambda}\) of 25700 m\({}^{-1}\). In each panel the blue curve indicates the mean power along a parabolic cut through the secondary spectrum at a given curvature, while the red dashed lines and shaded orange regions respectively show the best-fit values and \(1\sigma\) errors on the curvature of each arc in the secondary spectrum. The vertical axis scale is identical in the two frames, and the intensity rise after feature E is part of feature F, but there is an apparent discontinuity between the frames because the averaging is done over different numbers of pixels. A light grey curve in the lefthand panel shows the mean power smoothed with a boxcar filter. The horizontal solid grey line in both panels shows the mean noise level in the secondary spectrum away from any scintillation arcs.
The top axis indicates an approximate screen distance, assuming the pulsar's transverse velocity is much greater than those of the screen and observer.

Figure 5: A demonstration of the best-fit curvatures for B1929+10. The lefthand panels show the secondary spectrum at 1.4 GHz for two ranges in \(f_{\lambda}\), up to \(3\times 10^{4}\) m\({}^{-1}\) (top) and up to \(4\times 10^{3}\) m\({}^{-1}\) (bottom). The red curves and shaded regions in the righthand panels show the best-fit curvatures and \(1\sigma\) errors overlaid on the secondary spectrum. Letters identify features of interest from the Hough transform (Figure 4).

### B0919+06

Two arcs are detected for B0919+06: one shallow and highly truncated arc at \(\eta=13\pm 2\) m\({}^{-1}\) mHz\({}^{-2}\), and a diffuse, filled-in arc with \(\eta\geq 69\) m\({}^{-1}\) mHz\({}^{-2}\) (1.1 GHz). Two marginal arcs may also be present at 0.1 and 2 m\({}^{-1}\) mHz\({}^{-2}\), but were just \(1\sigma\) above the noise baseline and did not meet the detection threshold criteria. Due to this pulsar's extremely large transverse velocity, \(V_{\rm psr\perp}=505\pm 80\) km/s (Chatterjee et al., 2001), we ignore the near-observer solutions for screen distance \(s\), and subsequently find \(0.07\lesssim s\lesssim 0.2\) and \(0.3\lesssim s\lesssim 0.5\) for the two arcs, respectively (both for \(0^{\circ}\leq\psi\leq 45^{\circ}\)). For \(D=1.21\) kpc (Chatterjee et al., 2001), these screens span physical distances \(\approx 0.6\) to 1.1 kpc from the observer. Scintillation arcs have been previously observed for this pulsar at different radio frequencies by Stinebring et al. (2001), Putney & Stinebring (2006), and Stinebring (2007). Chatterjee et al. (2001) argued that the scintillation velocity is consistent with a scattering screen \(\sim 250\) pc from the observer. The two arcs reported by Putney & Stinebring (2006) are broadly consistent with the arcs reported here.

### B0950+08

B0950+08 displays remarkably similar scintillation arcs to B0919+06: one thin, highly truncated arc is detected at \(\eta=17\pm 1\) m\({}^{-1}\) mHz\({}^{-2}\), in addition to a broad, filled-in arc at \(\eta\geq 200\) m\({}^{-1}\) mHz\({}^{-2}\). The best-fit curvatures are shown overlaid on the secondary spectrum in Figure 7. Due to its low transverse velocity (\(V_{\rm psr\perp}=36.6\pm 0.7\) km/s; Brisken et al., 2002), we estimate two ranges of \(s\) for each arc: \(0.003\lesssim s\lesssim 0.006\) and \(0.03\lesssim s\lesssim 0.06\) (assuming \(0^{\circ}\leq\psi\leq 45^{\circ}\)) for the near-pulsar solutions of the two arcs, respectively, and \(s\gtrsim 0.997\) for the near-observer solutions. The shallowest arc corresponds to a screen either \(<1.6\) pc from the pulsar or \(<0.3\) pc from the observer, regardless of \(\psi\), and could be twice as close to either the pulsar or observer if \(\psi\gtrsim 45^{\circ}\). The second screen is either 0.8 pc or \(\sim 240-260\) pc from the observer, depending on \(\psi\) and the uncertainty in the pulsar distance. Wu et al. (2022) observed a single scintillation arc for B0950+08 with LOFAR in 2016, with curvature \(\eta_{\nu}=4.8\pm 0.7\) s\({}^{3}\) at 150 MHz, equivalent to \(\eta\approx 356\) m\({}^{-1}\) mHz\({}^{-2}\) at 1.4 GHz, and estimated a screen distance of \(230\pm 35\) pc for \(\psi=0^{\circ}\) by ignoring the velocity of the screen.
Our results for Arc B (\(\eta\geq 200\) m\({}^{-1}\) mHz\({}^{-2}\)) are broadly consistent, suggesting that the scattering screen responsible for this arc may have persisted for five years or longer. Smirnova et al. (2014) used VLBI to resolve the scattered image of B0950+08 at 324 MHz and found evidence for scattering in two layers at distances \(\sim 10\) pc and \(26-170\) pc from the observer, neither of which appears to be consistent with our observations.

\begin{table}
\begin{tabular}{c|c|c|c}
Feature & Curvature & Fractional Screen & Assumed \\
Identifier & (m\({}^{-1}\) mHz\({}^{-2}\)) & Distance & \(\psi\) (deg) \\
\hline \hline
A & \(2.7\pm 1.2\) & \(\begin{array}{c}s\lesssim 9\times 10^{-4}\\ s>0.9998\end{array}\) & \(\psi\geq 0\) \\
\hline
B & \(27\pm 8\) & \(\begin{array}{c}(2\lesssim s\lesssim 8)\times 10^{-3}\\ s>0.9986\end{array}\) & \(0\leq\psi\leq 45\) \\
\hline
C & \(327\pm 38\) & \(\begin{array}{c}0.031\lesssim s\lesssim 0.08\\ s>0.985\end{array}\) & \(0\leq\psi\leq 45\) \\
\hline
D & \(2869\pm 1603\) & \(\begin{array}{c}0.1\lesssim s\lesssim 0.5\\ 0.91\lesssim s\lesssim 0.97\end{array}\) & \(0\leq\psi\leq 45\) \\
\hline
\end{tabular}
\end{table}
Table 2: Scintillation arc curvatures and fractional screen distances \(s\), inferred at 1.4 GHz for B0355+54. A range of screen distances is shown for both near-pulsar and near-Earth solutions based on fiducial assumptions of \(\psi\) shown in the table, and assuming \(V_{\rm scr\perp}<V_{\rm psr\perp}\). The uncertainties in the arc curvature were propagated into uncertainties on \(s\), to determine \(\pm 1\sigma\) upper and lower limits that give the ranges of \(s\) shown. Note that increasing \(V_{\rm scr\perp}\) would decrease \(s\). Assuming a value or range of \(\psi\) does not break the twofold degeneracy relating \(\eta\) and \(s\), due to the significant observer velocity contribution along this LOS; in future, multi-epoch curvature measurements will be needed to uniquely determine \(s\).

Figure 6: A demonstration of the best-fit curvatures for B0355+54 at 1.4 GHz (similar to Figure 5). In this case, four arcs are detected in the power summation procedure (see Section 4), although a fifth low-curvature arc may be marginally visible in the secondary spectrum at negative \(f_{t}\).

### J1643\(-\)1224

J1643\(-\)1224 shows a single, very broad scintillation arc with \(\eta=5089\pm 3536\) m\({}^{-1}\) mHz\({}^{-2}\) (1.4 GHz). In this case, \(V_{\rm obs\perp}\approx 0.5V_{\rm psr\perp}\) (Ding et al., 2023), and the single-epoch measurement combined with large curvature uncertainties yields an extremely large range of possible screen distances; e.g., for \(\psi=45^{\circ}\), \(s\approx 0.6^{+0.3}_{-0.5}\). Mall et al. (2022) previously measured the scintillation arc curvature as it varied over a five-year period and found a best-fit screen distance \(\approx 114-223\) pc from the observer, consistent with the distance to a foreground HII region (Harvey-Smith et al., 2011; Ocker et al., 2020). Our scintillation arc measurement is broadly consistent with the Mall et al. (2022) result. While our secondary spectrum has a Nyquist limit of about \((0.12\,{\rm MHz})^{-1}\approx 8\)\(\mu\)s, the arc observed by Mall et al. (2022) extends up to \(f_{\nu}\approx 20-30\)\(\mu\)s, implying that our observation is sensitive to only a small portion of the full scintillation arc.
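Throughout Section 5, these screen distance estimates come from numerically inverting Equation 5. A minimal sketch of that inversion (numpy assumed; the velocities are collapsed onto a single axis, a simplification of the full vector geometry):

```python
import numpy as np

KPC = 3.086e19  # m

def screen_distances(eta, D_kpc, v_psr, v_obs, v_scr=0.0, psi_deg=0.0):
    """Fractional screen distances s solving Eq. 5,
    eta = D s (1 - s) / (2 V_perp^2 cos^2 psi), with the effective
    velocity of Eq. 3 treated as one-dimensional. eta is in
    m^-1 mHz^-2, D in kpc, velocities in km/s. Returns up to two
    roots (the near-pulsar and near-observer solutions)."""
    s = np.linspace(1e-6, 1.0 - 1e-6, 200000)
    V = ((1.0 - s) * v_psr + s * v_obs - v_scr) * 1e3   # m/s
    eta_model = (D_kpc * KPC * s * (1.0 - s)
                 / (2.0 * V**2 * np.cos(np.radians(psi_deg))**2))
    eta_model *= 1e-6   # m^-1 Hz^-2  ->  m^-1 mHz^-2
    diff = eta_model - eta
    roots = np.nonzero(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
    return s[roots]

# e.g. screen_distances(125.0, 0.361, 177.0, 16.0) for B1929+10 arc F
# returns a near-pulsar root s ~ 0.45 plus a near-observer root.
```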
### J1713+0747

Visually, the secondary spectrum for J1713+0747 shows scintillation arc structure on two scales: one high-contrast, thin arc that rises above an exterior, diffuse region of power closer to the origin. The Hough transform detects three arcs at curvatures of \(1184\pm 33\), \(8269\pm 2491\), and \(38531\pm 13640\) m\({}^{-1}\) mHz\({}^{-2}\) (1.4 GHz). Similar to J1643\(-\)1224, this pulsar's transverse velocity is just twice \(V_{\rm obs\perp}\) (Chatterjee et al., 2009), and the near-pulsar and near-observer screens are indistinguishable with the data in hand. For the shallowest arc, we find either \(s\lesssim 0.07\) or \(s\geq 0.96\) for \(\psi\geq 0^{\circ}\), corresponding to physical distances \(\lesssim 70\) pc from the pulsar or \(\lesssim 40\) pc from the observer. However, for the pulsar and observer velocity configuration of this LOS, the maximum arc curvature yielded by Equation 5 for \(\psi=0^{\circ}\) is just 5820 m\({}^{-1}\) mHz\({}^{-2}\), which is too small to explain the curvatures of the two steepest arcs in the secondary spectrum. We thus find that larger \(\psi\) and/or larger \(V_{\rm scr\perp}\) are required for the higher curvature arcs along this LOS. Assuming \(\psi=45^{\circ}\) and \(V_{\rm scr\perp}=0\) km/s, we find screen solutions at \(s=0.3\pm 0.1\) (near-pulsar) or \(s=0.8\pm 0.1\) (near-observer) for the arc at curvature \(8269\pm 2491\) m\({}^{-1}\) mHz\({}^{-2}\). For the highest curvature (\(38531\pm 13640\) m\({}^{-1}\) mHz\({}^{-2}\)), assuming \(V_{\rm scr\perp}=0\) km/s requires \(\psi>60^{\circ}\), and we find \(s\approx 0.6^{+0.2}_{-0.3}\) for \(\psi=65^{\circ}\). A scintillation arc has been measured for this pulsar only once before (Main et al., 2023), making this LOS a prime target for dedicated follow-up observations.

Figure 7: A demonstration of the best-fit curvatures for B0950+08 at 1.4 GHz (similar to Figure 5), for the observation on MJD 59523. In this case, one thin and highly truncated arc is detected at low curvature, and a broader distribution of power within a parabolic boundary is detected at higher curvature. In the latter case, a lower limit is reported on the arc curvature, indicated here by the solid red line.

Figure 8: Arc curvature \(\eta\) vs. screen distance \(s\) for different values of the screen velocity \(V_{\rm scr\perp}\) and angle of anisotropy \(\psi\), assuming \(V_{\rm psr\perp}=177\) km/s and \(V_{\rm obs\perp}=16\) km/s (as for B1929+10). Larger \(\psi\) and \(V_{\rm scr\perp}\) both increase the possible values of \(\eta\) for a given \(s\), and \(V_{\rm scr\perp}\) has the largest effect on \(\eta\) near \(s\approx 1\) when it is comparable to or greater than the observer velocity term.

### J1740+1000

J1740+1000 is the only pulsar in the dataset to display well-defined reverse arclets, which could be due to interference between discrete sub-components and/or a high degree of anisotropy in the scattered image. Sprenger et al. (2021) and Baker et al. (2022) developed a method to measure the curvature of such reverse arclets, assuming a 1D scattered image, by linearizing the secondary spectrum so that the forward arc and reverse arclets all lie along straight lines through a transformation of \(f_{t}-f_{\nu}\) space. Application of this "\(\theta-\theta\) transform" to the J1740+1000 secondary spectrum yields a best-fit curvature of \(\eta=72\pm 5\) m\({}^{-1}\) mHz\({}^{-2}\) (see Appendix A).
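The mapping underlying this transform can be sketched as follows (a simplified illustration following the construction in Sprenger et al. 2021 and Baker et al. 2022, assuming a purely 1D screen; numpy/scipy assumed, and the grid and normalization choices here are ours):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def theta_theta(S2, f_t, f_lam, eta, n=201):
    """Resample a secondary spectrum S2[f_lam, f_t] (both axes
    ascending and spanning negative to positive values) into
    theta-theta space for a trial curvature eta. With theta in units
    of f_t at the main arc, an interfering pair (th1, th2) maps to
    f_t = th1 - th2 and f_lam = eta * (th1**2 - th2**2)."""
    th = np.linspace(-f_t.max() / 2, f_t.max() / 2, n)
    th1, th2 = np.meshgrid(th, th, indexing="ij")
    Ft = th1 - th2
    Fl = eta * (th1**2 - th2**2)
    # convert physical coordinates to fractional pixel indices
    it = np.interp(Ft, f_t, np.arange(f_t.size))
    il = np.interp(Fl, f_lam, np.arange(f_lam.size))
    return map_coordinates(S2, [il, it], order=1, mode="constant")

# The best-fit curvature can then be chosen to maximize the dominant
# eigenvalue of the theta-theta matrix (Baker et al. 2022).
```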
Although J1740+1000 lacks parallax and proper motion measurements, its scintillation speed implies a transverse velocity of \(\approx 184\) km/s, which could be much larger based on its location far above the Galactic plane (McLaughlin et al., 2002). NE2001 (Cordes & Lazio, 2002) and YMW16 (Yao et al., 2017) both predict a distance of 1.2 kpc for this pulsar. We estimate a screen distance \(s\leq 0.13\), or 160 pc from the pulsar (for \(\psi\geq 0^{\circ}\)), but a parallax distance is required to obtain a more accurate estimate of the screen distance in physical units. Scintillation arcs have not been previously reported for this pulsar, although Rozko et al. (2020) observed a turnover in the pulsar spectrum below 300 MHz that may be due to interstellar absorption along the LOS.

### B1929+10

This pulsar shows the largest concentration of arcs among our observations. The Hough transform detection criteria yield 12 arc candidates; however, the three highest-curvature candidates (J-L in Figure 4 and Table 3) are all superposed on a broad power distribution and we remain agnostic as to whether these are three independent arcs tracing distinct scattering screens. The best-fit curvatures are shown in Table 3. Table 3 also shows estimates of the screen distance for each arc, assuming \(V_{\rm psr\perp}\gg(V_{\rm obs\perp},V_{\rm scr\perp})\), a reasonable assumption given \(V_{\rm psr\perp}=177\) km/s (Chatterjee et al., 2004). Allowing different values of \(\psi\) for each screen can yield overlapping screen distances for different arcs. In Table 3 we show two possible screen distances, for \(\psi=0^{\circ}\) and \(\psi=45^{\circ}\). Presuming that distinct arcs are observed when the scattering screens do not overlap, some combination of \(\psi\) (and \(V_{\rm scr\perp}\)) is required that yields unique values of \(s\) for each arc. In practice, disentangling the degeneracy between \(\psi\) and \(V_{\rm scr\perp}\) for each of \(>9\) arcs will require many repeated observations. Nonetheless, our fiducial estimates suggest that the LOS to B1929+10 contains a high filling fraction of scattering material, with screens spanning \(\sim 90\%\) of the 361 pc path-length to the pulsar. In addition, the large curvatures of arcs H, I, and candidate arcs J-L all appear to require \(\psi\gtrsim 45^{\circ}\) (and/or velocity vector alignment such that \(|\mathbf{V}_{\perp}|\) is small, which could result from larger \(V_{\rm scr\perp}\)).

Up to three distinct scintillation arcs have been observed for B1929+10 in the past (Putney & Stinebring, 2006; Cordes et al., 2006; Fadeev et al., 2018; Yao et al., 2020; Wu et al., 2022). The high sensitivity of FAST reveals numerous additional arcs, including low-curvature arcs (features A-E in Figures 4-5) that precede the highest intensity Arc F. These low-curvature arcs were identified using the dynamic spectrum in the 1.4 GHz band. The secondary spectrum formed from the 1.1 GHz band only showed one arc where features A-B are, which is likely due to the smaller bandwidth of the 1.1 GHz dynamic spectrum. Figure 9 shows the low-curvature arcs in enhanced detail. They are \(\sim 100\times\) weaker than Arc F and confined to \(f_{\lambda}<30\) m\({}^{-1}\). The shallowest of these, Arc A, has a screen distance \(s<0.027\), equivalent to \(<9.7\) pc from the pulsar. Increasing \(\psi\) to \(>45^{\circ}\) could bring the screen distance to within 1 pc of the pulsar.
Wu et al. (2022) find a single arc with a curvature of \(3.0\pm 0.1\) s\({}^{3}\) at 150 MHz, equivalent to \(223\pm 74\) m\({}^{-1}\) mHz\({}^{-2}\) and consistent with the curvature of Arc G. This arc curvature is also broadly consistent with the arc observed by Fadeev et al. (2018) and Yao et al. (2020). _A priori_, Arc F would be a plausible arc to associate with previous observations, given that it contains the most power of any arc in our secondary spectrum. Follow-up observations that track the curvatures of all of the arcs reported here will confirm which have indeed been observed in prior studies.

B1929+10 is the only pulsar in our sample that has shown conclusive evidence of tiny-scale atomic structure (TSAS) detected in HI absorption of the pulsar spectrum. Stanimirovic et al. (2010) measured up to four distinct TSAS features in the pulsar spectrum, with spatial scales \(\approx 6-45\) au based on the temporal variability of the HI absorption features. While the distances of the TSAS features could not be directly determined from the pulsar spectrum, Stanimirovic et al. (2010) suggested that they are \(\approx 106-170\) pc from the observer based on the similarity between the TSAS velocities and the velocity of NaI absorption features observed towards stars within \(3^{\circ}\) of the pulsar LOS. These TSAS features could be related to the same physical processes that are responsible for the large concentration of scintillation arcs along this LOS.

### B1957+20

A single, very weak and diffuse scintillation arc is detected for B1957+20 at curvature \(\eta\geq 220\) m\({}^{-1}\) mHz\({}^{-2}\) (1.4 GHz). While the scintles in the dynamic spectrum do appear to be resolved in both frequency and time (see Figure 2), the low S/N of the pulsar required a longer integration time in the dynamic spectrum than for the other pulsars, yielding reduced resolution in the secondary spectrum.

\begin{table}
\begin{tabular}{c|c|c|c}
Feature Identifier & Curvature (m\({}^{-1}\) mHz\({}^{-2}\)) & Fractional Screen Distance & Assumed \(\psi\) (deg) \\
\hline \hline
A & \(4.0\pm 0.7\) & \(0.022\pm 0.005\) & 0 \\
 & & \(0.011\pm 0.002\) & 45 \\
\hline
B & \(5.9\pm 0.6\) & \(0.032\pm 0.003\) & 0 \\
 & & \(0.016\pm 0.002\) & 45 \\
\hline
C & \(16.2\pm 0.2\) & \(0.085\pm 0.001\) & 0 \\
 & & \(0.0439\pm 0.0004\) & 45 \\
\hline
D & \(25\pm 4\) & \(0.13\pm 0.02\) & 0 \\
 & & \(0.07\pm 0.01\) & 45 \\
\hline
E & \(41\pm 3\) & \(0.19\pm 0.01\) & 0 \\
 & & \(0.105\pm 0.006\) & 45 \\
\hline
F & \(125\pm 3\) & \(0.448\pm 0.007\) & 0 \\
 & & \(0.274\pm 0.005\) & 45 \\
\hline
G & \(285\pm 13\) & \(0.71\pm 0.02\) & 0 \\
 & & \(0.49\pm 0.02\) & 45 \\
\hline
H & \(532\pm 30\) & \(0.68\pm 0.02\) & 45 \\
\hline
I & \(987\pm 51\) & \(0.92\pm 0.04\) & 45 \\
\hline
J & \(2164\pm 308\) & & \\
K & \(3054\pm 446\) & \(s\gtrsim 0.9\) & \(\gtrsim 60\) (if \(V_{\rm scr\perp}=0\) km/s) \\
L & \(5706\pm 704\) & & \\
\hline
\end{tabular}
\end{table}
Table 3: Scintillation arc curvatures and fractional screen distances \(s\) at 1.4 GHz for B1929+10. Due to the pulsar's large transverse velocity (Chatterjee et al., 2004), single solutions for \(s\) were obtained assuming \(V_{\rm psr\perp}\gg(V_{\rm obs\perp},V_{\rm scr\perp})\). Since \(\psi\) was largely unconstrained by the observations, we show values of \(s\) that would be obtained for characteristic values of \(\psi\) noted in the righthand column. Features H-L have curvatures greater than the maximum possible values for \(\psi=0^{\circ}\) and \(V_{\rm scr\perp}=0\) km/s, implying that either greater \(\psi\) and/or greater \(V_{\rm scr\perp}\) are required for these high-curvature arcs. Features J-L would require \(\psi\gtrsim 60^{\circ}\) for \(V_{\rm scr\perp}=0\) km/s; however, it is unclear whether these arcs really trace distinct screens (see main text).
Due to the pulsar's large transverse velocity (\(V_{\rm psr\perp}=366^{+62}_{-98}\) km/s; Romani et al., 2022), a single screen distance is estimated from the arc curvature to be \(s\leq 0.44\) (\(\psi\geq 0^{\circ}\)), which corresponds to a physical screen distance \(\gtrsim 1.5\) kpc from the observer. This pulsar is the only source in the dataset with an H\(\alpha\)-emitting bow shock, the stand-off radius of which is \(\approx 7700-9300\) au, depending on the shock thickness (Romani et al., 2022). The inferred arc curvature is far too large to be connected to the pulsar bow shock.

B1957+20 is a well-studied black widow pulsar that exhibits strong plasma lensing near eclipse by its companion (Main et al., 2018; Bai et al., 2022; Lin et al., 2023). Our observations were several hours away from eclipse, and do not display evidence of any scattering through the pulsar's local environment, whether that be the companion outflow or the pulsar bow shock. Previous observations away from eclipse measured a scattering timescale of 12 \(\mu\)s at 327 MHz (equivalent to \(\approx 0.04\)\(\mu\)s at 1.4 GHz, or a scintillation bandwidth of \(\approx 4\) MHz; Main et al., 2017). Our observations imply a scintillation bandwidth \(\Delta\nu_{\rm d}\approx 10\) MHz, based on fitting a 1D Lorentzian to the autocorrelation function of the dynamic spectrum, which is larger than the equivalent \(\Delta\nu_{\rm d}\) from Main et al. (2017).

## 6 Constraints on Scattering Screens

Scintillation arc properties can be translated into physical constraints on the scattering medium. In the following sections, we consider possible interpretations of the scintillation arcs in our sample, including the relationship between their intensity distributions and interstellar density fluctuations (Section 6.1) and potential associations between arcs and pulsar bow shocks (Section 6.2). In Section 6.3, we contextualize the scattering media in relation to the larger-scale ISM by utilizing 3D models of discrete structures identified in continuum maps.

### Power Distribution in the Secondary Spectrum

Many secondary spectra contain a bright core of power near the origin that is up to \(\sim 10^{5}\) times brighter than the power distributed along a scintillation arc. In our data set, this feature is most prominent for B0355+54, B0919+06, J1713+0747, and B1929+10. This bright central core can be interpreted as individual, weakly deviated ray paths (\(\mathbf{\theta}_{1}=\mathbf{\theta}_{2}\) in Eq. 1), whereas arcs are sensitive to lower intensity radiation that can trace a much larger extent of the scattered disk than other scattering measurements (e.g., scintillation bandwidths or pulse broadening times).

Arc properties can be evaluated in the context of weak and strong scintillation, which correspond to the regimes where the modulation index (rms intensity variation / mean intensity) is \(m\ll 1\) and \(m\sim 1\), respectively. In our dataset, B0950+08 and B1929+10 are both weakly scintillating (\(m\approx 0.1\) and \(m\approx 0.2\), respectively), whereas all of the other pulsars have \(0.7\lesssim m\lesssim 1\).
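For reference, the modulation index quoted here can be computed directly from a calibrated dynamic spectrum; a minimal sketch (numpy assumed):

```python
import numpy as np

def modulation_index(dyn):
    """Modulation index m of a dynamic spectrum dyn[time, freq]:
    rms intensity variation divided by mean intensity.
    m << 1 indicates weak scintillation; m ~ 1 indicates strong."""
    return np.std(dyn) / np.mean(dyn)
```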
Multiple, high-contrast (thin) arcs are usually detected in weak scintillation because there is still significant undeviated radiation incident on each scattering screen; typically this regime applies to lower DM pulsars, as seen here for B1929+10. Higher DM pulsars often fall in the strong scintillation regime, where arcs tend to be more diffuse and lower contrast (Stinebring et al., 2022), as seen for B0355+54 and J1643\(-\)1224 (for which we find \(m\approx 0.9\) and \(m\approx 1\), respectively). However, this trend does not appear to be clear cut; e.g., J1713+0747 displays a thin, highly defined arc despite having \(m\approx 0.7\) in our dataset. Given that scattering is highly chromatic, arc properties also tend to evolve with frequency (e.g. Stinebring et al., 2019), and many scintillation arcs detected at low (\(<500\) MHz) frequencies appear to be thicker and lower contrast (Wu et al., 2022; Stinebring et al., 2022). We therefore expect that the strongly scintillating pulsars in our dataset, such as B0355+54, J1643\(-\)1224, and J1740+1000, could yield multiple additional arcs if observed at higher frequencies.

#### 6.1.1 Relevance of Power-Law Electron Density Fluctuations

In the limit of weak scintillation, the power distribution along an arc can be derived for a density fluctuation wavenumber spectrum of index \(\beta\) to be \(S_{2}(f_{\lambda})\propto f_{\lambda}^{-(\beta+1)/2}\), or \(S_{2}(f_{\lambda})\propto f_{\lambda}^{-7/3}\) for \(\beta=11/3\), a Kolmogorov spectrum (Cordes et al., 2006, Appendix D; Reardon et al., 2020). In full, \(S_{2}(f_{t},f_{\lambda})\) also includes a constant factor that depends on the screen distance \(s\), the transverse velocity \(V_{\perp}\), and a resolution function that accounts for the effect of finite sampling of the dynamic spectrum.

To assess whether arcs are broadly consistent with arising from a power-law wavenumber spectrum of electron density fluctuations, we examine the distribution of power within the brightest scintillation arcs as a function of \(f_{\lambda}\). We examine four pulsars: B1929+10, B0950+08, B0355+54, and J1643\(-\)1224. Of these, two are weakly scintillating (B1929+10, B0950+08) and two are strongly scintillating (B0355+54, J1643\(-\)1224) based on the modulation indices of their dynamic spectra. The power contained within each arc was summed along the \(f_{\lambda}\) axis and fit as a power-law with two free parameters, an amplitude and a spectral index \(\alpha=-(\beta+1)/2\), where \(\beta\) is the index of the electron density fluctuation spectrum.

Figure 9: The secondary spectrum for B1929+10 at 1.4 GHz, viewed on a logarithmic scale in \(f_{\lambda}\). The white dashed curves show three examples of where arcs would fall in the secondary spectrum for different combinations of \(s\), \(\psi\), and \(V_{\rm scr\perp}\). Nominal estimates of the bow shock stand-off radius for B1929+10 would suggest \(s\sim 10^{-5}\), although \(s\) could be larger depending on the inclination angle of the bow shock relative to the pulsar LOS. Regardless, the scattering screen at the shock would need to have large \(\psi\) and \(V_{\rm scr\perp}\) to explain the lowest curvature arc detected here.

Figure 10 shows the arc power compared to the best-fit models for B1929+10, B0950+08, and B0355+54. For B1929+10, all three arcs examined (arcs C, F, and G, which had the most precise curvature measurements) have best-fit spectral indices consistent with \(\alpha=-7/3\), the expectation for Kolmogorov density fluctuations.
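A minimal sketch of this fit (numpy/scipy assumed; the binning, noise subtraction, and error treatment used for Figure 10 are omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_arc_index(f_lam, arc_power):
    """Fit the arc power summed along f_lambda as a power law,
    S2(f_lam) ~ A * f_lam**alpha, in log-log space. The fitted
    index maps to the density spectral index via alpha = -(beta+1)/2,
    i.e. beta = -2*alpha - 1 (alpha = -7/3 gives beta = 11/3)."""
    def model(log_f, log_A, alpha):
        return log_A + alpha * log_f
    good = (f_lam > 0) & (arc_power > 0)
    p, cov = curve_fit(model, np.log10(f_lam[good]),
                       np.log10(arc_power[good]))
    log_A, alpha = p
    return alpha, -2.0 * alpha - 1.0, np.sqrt(np.diag(cov))
```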
The brightest arc examined, arc F, deviates from the power-law at low \(f_{\lambda}\). B0950+08 yields a best-fit power-law index \(\alpha=-2.8\pm 0.1\) for arc B, which corresponds to \(\beta>4\) and may be indicative of refraction (Goodman & Narayan, 1985). For B0355+54, we find \(\alpha=-2.5\pm 0.1\) for both arcs C and D, although arc C has discrete clumps that deviate significantly from a uniform power-law intensity distribution. Both arcs C and D for B0355+54 also show roll-offs at low \(f_{\lambda}\), similar to arc F for B1929+10. While sampling near the origin of the secondary spectrum is more limited, it is possible that these roll-offs are related to multi-scale structure in the scattered image. Interestingly, the arc intensity distributions for B0355+54 are consistent to within \(2\sigma\) with a Kolmogorov power-law at large \(f_{\lambda}\). We also investigate the power distribution for J1643\(-\)1224 and find \(\alpha=-1.6\), a large departure from the Kolmogorov expectation that could partially be due to our limited resolution of the arc's full extent in \(f_{\lambda}\) (see Section 5.4).

One possible interpretation of these arc intensity distributions is that they have been modified from a Kolmogorov form by some combination of astrophysical and instrumental effects. Here we consider the potential relevance of three main effects, following Section 5.2 of Cordes et al. (2006):

1. _Inner scale:_ Arcs are truncated when the diffraction spatial scale \(l_{\rm d}\) becomes comparable to the inner scale \(l_{\rm i}\) of the density wavenumber spectrum. For a 1D scattering angle \(\theta_{\rm d}\), the diffraction scale is \(l_{\rm d}\sim(\theta_{\rm d}k)^{-1}\) for a wavenumber \(k\). For a scattering time \(\tau_{\rm d}\sim\theta_{\rm d}^{2}/c\), the diffraction scale is then

\[l_{\rm d}\approx\frac{1}{2\pi\nu}\left[\frac{c}{\tau_{\rm d}}\left(\frac{d_{\rm so}d_{\rm lo}}{d_{\rm sl}}\right)\right]^{1/2} \tag{11}\]

\[\approx\frac{1.5\times 10^{4}\ {\rm km}}{\nu_{\rm GHz}}\left[\frac{1}{\tau_{\rm d,\mu s}}\left(\frac{d_{\rm so}d_{\rm lo}}{d_{\rm sl}}\right)_{\rm kpc}\right]^{1/2}, \tag{12}\]

where \(d_{\rm so}\), \(d_{\rm sl}\), and \(d_{\rm lo}\) are the source-observer, source-lens, and lens-observer distances. For a single screen, the maximum arc extent in \(f_{t}\) due to this effect is approximately \(f_{t,\rm inner}\sim(V_{\perp}/\lambda s)q_{\rm i}l_{\rm d}\), where \(q_{\rm i}=2\pi/l_{\rm i}\). For B0950+08, \(s\approx 0.05\) and \(\tau_{\rm d}<1\ \mu\)s, implying \(l_{\rm d}\gtrsim 2\times 10^{4}\ {\rm km}\). We thus find that \(l_{\rm i}\) would have to be implausibly large, given typical inferred values \(<1000\ {\rm km}\) (Spangler & Gwinn, 1990; Armstrong et al., 1995; Bhat et al., 2004; Rickett et al., 2009), to explain the steep drop-off in arc power for B0950+08. For B1929+10, taking nominal screen parameters for arc F (\(s\approx 0.4\)) implies \(l_{\rm d}\approx 3000\ {\rm km}\), which places an upper limit on the inner scale that is consistent with other inferred values.

2. _Finite source size and multiple screens:_ Arc extent depends on the angular scale of coherent radiation incident on the scattering screen, which we denote \(\theta_{\rm scr}\). This angular scale is determined by both the finite size of the pulsar emission region and any scattering through additional screens.
Scintillations will be quenched when the coherence length of radiation incident on the scattering screen, \(l_{\rm c}\approx\lambda/2\pi\theta_{\rm scr}\), is of order the size of the scattering cone at the screen, \(l_{\rm cone}\approx\theta_{\rm obs}d_{\rm lo}\), where \(\theta_{\rm obs}=s\theta_{\rm scr}\). In the simplest (single screen) case, arcs will be suppressed beyond \(f_{t,\rm sou}=V_{\perp}[D(1-s)\theta_{\rm obs}]^{-1}\) (Cordes et al., 2006). A screen close to the source could have larger \(\theta_{\rm scr}\), yielding smaller \(f_{t,\rm sou}\), if additional screens are not present. On the other hand, scattering through one screen can reduce \(l_{\rm c}\) for a subsequent screen, which could in principle lead to weaker, more truncated arcs for larger screen distances \(s\). Of the pulsars considered here, this effect is most likely relevant to B0355+54, as the pulsar is in strong scintillation and hence more likely to have significant scattered radiation incident on each of the four screens along the LOS. First, we consider the possibility that the shallowest arc corresponds to a screen close to the pulsar, and examine whether the finite size of the pulsar emission region could affect the arc extent (ignoring, for now, the presence of additional screens). For an emission region size \(\sim 100\) km and a screen \(\lesssim 1\) pc from the pulsar, \(f_{t,\rm sou}\sim 1\) Hz, orders of magnitude greater than the observed extent of the arc. To explain the observed arc extent, the screen would need to be \(\lesssim 100\) au from the pulsar, far smaller than the estimated bow shock stand-off radius of \(\sim 8000\) au. Next, we consider the possibility that scattering through multiple screens modifies the intensity distributions of the brightest arcs for B0355+54, arcs C and D (Figure 10). While both arcs are broadly consistent with the same power-law, \(\alpha=-2.5\pm 0.1\), arc C shows a stronger deviation at lower \(f_{\lambda}\). Unfortunately, the twofold ambiguity in screen location means that it is unclear in which order the screens are encountered; i.e., arc C could be produced prior to arc D, or after. However, both arcs extend across the full Nyquist range in \(f_{\lambda}\) and have similar amplitudes of intensity, suggesting that neither arc is significantly suppressed by the presence of a preceding screen.

Figure 10: Arc intensity as a function of \(f_{\lambda}\) for B1929+10 (left), B0950+08 (middle), and B0355+54 (right). For B1929+10, arcs C (blue), F (orange), and G (green) are shown, and arcs F and G are purposely offset from the mean off-arc noise (grey dashed line). Arcs C (blue) and D (orange) are shown for B0355+54, with an arbitrary vertical offset for arc C. The black curves in each panel show power-law models for the arc intensity, with black solid curves corresponding to \(\alpha=-(\beta+1)/2=-7/3\), where \(\beta=11/3\) is the expectation for Kolmogorov density fluctuations. For B1929+10, the best-fit spectral indices for all three arcs are consistent to within \(1\sigma\) with \(\alpha=-7/3\). For B0950+08 and B0355+54, the black dashed curves show the best-fit power laws with spectral indices \(\alpha\) shown in the legends. Neither arc for B0355+54 is consistent with a single power-law intensity distribution, and the power laws shown were fit using \(f_{\lambda}>10^{3}\ {\rm m}^{-1}\).
3. _Sensitivity limitations_: If an arc is low intensity and/or poorly resolved in the secondary spectrum, then it can appear to be truncated because its power-law drop-off makes it indistinguishable from the noise at smaller (\(f_{t}\), \(f_{\lambda}\)) than for a higher-intensity arc. This effect is likely most relevant to the shallowest arcs detected for B0355+54, B0919+06, B0950+08, and B1929+10.

These findings suggest that while B1929+10 has scintillation arc intensities consistent with diffractive scintillation produced by a turbulent density fluctuation cascade, B0950+08 is likely affected by additional refraction. Similarly, the discrete clumps of power in arc C for B0355+54, coupled with the significant roll-off in arc intensity at small \(f_{\lambda}\), suggest non-uniform, multi-scale structure in the scattered image that is also produced by refraction. Overall, these features can be interpreted as resulting from a superposition of refracting plasma structures (blobs or sheets) and the nascent density fluctuations associated with interstellar turbulence.

### Near-Pulsar Screens & Candidate Bow Shocks

Three pulsars in our sample have low-curvature arcs that could arise within the pulsars' local environments: B0355+54, B0950+08, and B1929+10. B0950+08 does not have a directly imaged PWN or bow shock, although recently Ruan et al. (2020) have argued that off-pulse radio emission detected up to \(\sim 100^{\prime\prime}\) from the pulsar location is consistent with arising from a PWN. Both B0355+54 and B1929+10 have ram pressure confined PWNe identified in X-ray and radio, and are likely to have bow shocks (Wang et al., 1993; Becker et al., 2006; McGowan et al., 2006).

In the case of B0355+54, we have acquired optical H\(\alpha\) imaging data from Kitt Peak National Observatory (KPNO) with the Nicholas U. Mayall 4-meter Telescope on 25 Oct 2017, using the Mosaic-3 detector. The observations were part of a larger campaign to search for H\(\alpha\) bow shocks, which will be presented in a separate publication in preparation. The target list included B0355+54, observed for 600 s, providing deeper data than available from the INT/WFC Photometric H-Alpha Survey (Barentsen et al., 2014). On the same night, we observed the Guitar Nebula for the same amount of time at the same detector location. No bow-shock structure was observed for B0355+54. By fractionally adding the Guitar Nebula image to the sky background nearby until it became visible, and then decreasing the fraction by increments of 0.05 until the known bow shock faded into the background, we estimate a non-detection limit of about 15% of the Guitar Nebula apex flux. In units of H\(\alpha\) photons, any bow shock from B0355+54 would therefore have an apex surface brightness of \(\lesssim 5.4\times 10^{-4}\ \gamma\ {\rm cm^{-2}\ s^{-1}}\), using the known flux of the Guitar Nebula from Brownsberger & Romani (2014).

We now consider the range of screen conditions (\(\psi\), \(V_{\rm scr\perp}\)) that would be needed for the low-curvature arcs to be associated with these pulsars' bow shocks, if the bow shocks exist. For B0355+54, the lowest arc curvature is \(\eta=2.7\pm 1.2\) m\({}^{-1}\) mHz\({}^{-2}\). For the pulsar's measured spin-down luminosity and transverse velocity (Table 1), we estimate a bow shock stand-off radius \(R_{0}\approx 7900-24000\) au for electron densities \(\sim 0.1-0.01\) cm\({}^{-3}\) (Equation 10).
Given the parallax distance of \(D=1.09^{+0.23}_{-0.16}\) kpc (Chatterjee et al., 2004), we thus estimate a fractional screen distance \(s\approx R_{0}/D\sim 9\times 10^{-5}\) for \(R_{0}\approx 24000\) au and \(D=(1.09-0.16)\) kpc. Given that the bow shock nose is likely inclined relative to the LOS, \(s\) could be even larger. The measured arc curvature can accommodate \(s\sim 9\times 10^{-5}\) for \(\psi\approx 50^{\circ}\) with \(V_{\rm scr\perp}\) small, or alternatively, for small \(\psi\) (\(<45^{\circ}\)) and \(V_{\rm scr\perp}\gtrsim 20\) km/s. Previous studies have inferred screen velocities ranging up to tens of km/s and similarly wide ranges of screen angles (e.g. Reardon et al., 2020; McKee et al., 2022). We thus conclude that the lowest curvature arc for B0355+54 could be consistent with a scattering screen at the bow shock, but more observations are needed to determine whether the screen is indeed close to the pulsar or close to the observer. For B0950+08, the lowest arc curvature is \(\eta=17\pm 1\) m\({}^{-1}\) mHz\({}^{-2}\), and the nominal stand-off radius ranges from \(R_{0}\approx 1480-4440\) au for \(n_{e}\approx 0.1-0.01\) cm\({}^{-3}\). Following a similar line of reasoning as for B0355+54, we find that the lowest curvature arc can be consistent with a scattering screen at the bow shock if \(\psi\gtrsim 70^{\circ}\) and \(V_{\rm scr\perp}\sim 20\) km/s. These constraints can be relaxed somewhat if the ISM density is even lower and/or if the shock widens significantly where it is intersected by the pulsar LOS. For B1929+10, the lowest arc curvature is \(\eta=4.0\pm 0.7\) m\({}^{-1}\) mHz\({}^{-2}\) and the nominal stand-off radius ranges from \(R_{0}\approx 805-2400\) au for \(n_{e}\approx 0.1-0.01\) cm\({}^{-3}\), or \(s\sim 10^{-5}\). In this case, we find that even if the shock is widened significantly at the pulsar LOS, large values of \(\psi\) and \(V_{\rm scr\perp}\) are still needed to bring the screen distance into agreement with the measured arc curvature (see Figure 9). E.g., assuming the shock corresponds to a screen distance \(s\sim 10^{-4}\) (equivalent to a distance of about 7500 au from the pulsar), the arc curvature would imply \(\psi>75^{\circ}\) and \(V_{\rm scr\perp}>100\) km/s. While all three pulsars could have arcs associated with their putative bow shocks, both B0950+08 and B1929+10 require more restricted ranges of \(\psi\) and \(V_{\rm scr\perp}\) in order for the screen distance to be broadly consistent with the bow shock. Nonetheless, there is a considerable range of ISM densities and shock inclination angles that are possible. If future observations are able to constrain the arc curvatures over time and determine that these arcs are from the pulsars' sub-parsec environments, then the resulting screen constraints could be used to infer the inclination angles of the bow shocks, the radial velocity components of the pulsars, and a more restricted range of local ISM densities.

### Associations with Foreground Structures

A search for associations between each pulsar LOS and foreground continuum sources catalogued in the Simbad database recovered known associations for several pulsars, including the HII region Sh \(2-27\) for J1643\(-\)1224 (Harvey-Smith et al., 2011) and the HII region Sh \(2-205\) for B0355+54 (Mitra et al., 2003). As shown in Figure 11, B0355+54 intersects the edge of Sh \(2-205\), which is approximately 1 kpc away and 24 pc in diameter (Romero & Cappa, 2008). 
While there is twofold ambiguity in the screen distances inferred for B0355+54, one of the near-pulsar screen solutions does coincide with the HII region. We also find a new potential association for the LOS to B1957+20, which passes within \(1.4^{\prime\prime}\) of a star in Gaia DR3 (ID:1823773960079217024) that has a parallax of \(0.7\pm 0.5\) mas (Gaia Collaboration, 2020), which is not the white dwarf companion of the pulsar (Gaia ID: 1823773960079216896). Given the large uncertainties on the foreground star's parallax, it is unclear whether the star is intersected by the pulsar LOS; however, the nominal screen distance inferred from the arc curvature for B1957+20 is 1.5 kpc, somewhat similar to the nominal distance of the star, 1.4 kpc. No other novel associations were found in Simbad for the pulsars in the sample. The boundary of the Local Bubble has long been attributed a role in pulsar scattering (e.g. Bhat et al., 1998). Recent studies have leveraged Gaia to map dust extinction and molecular clouds demarcating the edge of the Bubble in exquisite detail (Lallement et al., 2019; Pelgrims et al., 2020; Zucker et al., 2022), in addition to revealing large-scale structures such as the Radcliffe Wave (Alves et al., 2020), the "Split" (Lallement et al., 2019), and the Per-Tau Shell (Bialy et al., 2021). In Figure 12 we compare the pulsar LOSs in our sample to modeled foreground structures, including the inner surface of the Local Bubble (Pelgrims et al., 2020), the superbubble GSH 238+00+09 (Heiles, 1998; Lallement et al., 2014), the Per-Tau Shell (Bialy et al., 2021), and several HII regions confirmed to intersect pulsar LOSs (Mitra et al., 2003; Harvey-Smith et al., 2011; Ocker et al., 2020; Mall et al., 2022). We have also included all of the local molecular clouds catalogued by Zucker et al. (2020), which trace the large-scale structure of the Radcliffe Wave and the Split. While the molecular clouds themselves are not expected to induce scattering, electron density enhancements in the partially ionized gas surrounding these clouds are, in theory, potential locations of enhanced scattering. The spatial parameters used to model each ISM feature are explained in Appendix B. Figure 12 shows the locations of scattering screens inferred from scintillation arcs. For simplicity, the screen locations are shown as point estimates for \(\psi=0^{\circ}\) and only the near-pulsar solutions where relevant; these screen distance estimates thus have substantial uncertainties and are only notional. Formally, the uncertainties on the screen distance estimates are often dominated by the uncertainties of the arc curvatures, as all of the pulsars (barring J1740+1000, which has no parallax) have fractional distance and transverse velocity uncertainties \(\lesssim 20\%\). For pulsars with large transverse velocities (B0919+06, J1740+1000, B1929+10, and B1957+20), the screen distance estimates shown in Figure 12 correspond to lower limits, and any of these screens could be closer to the pulsar for larger \(\psi\) or \(V_{\rm scr\perp}\). For pulsars with low transverse velocities (B0355+54, B0950+08, J1643\(-\)1224, and J1713+0747), the screen distance estimates shown in Figure 12 are even less constrained, as there are both near-pulsar and near-observer solutions, each with unknown \(\psi\) and \(V_{\rm scr\perp}\). Examples of the screen distance uncertainties for B0355+54 and B1929+10 are shown in Figure 13. 
Despite these uncertainties, we are able to make some initial comparisons to known ISM features below, which highlight LOSs of interest for future study. More precise screen locations are also shown in Figure 12 for seven additional pulsars with scintillation arcs that are well-characterized in previous works: J0437\(-\)4715 (Reardon et al., 2020), J0538\(+\)2817 (Yao et al., 2021), J0613\(-\)0200 (Main et al., 2023), B0834+06 (Brisken et al., 2010), B1133+16 (McKee et al., 2022), B1508+55 (Sprenger et al., 2022), and J1909\(-\)3744 (Askew et al., 2023). These pulsars were selected from the literature because their scintillation properties were characterized to high precision using either arc curvature variations or VLBI scintillometry, but in future work we will expand our analysis to a broader sample. Readers are strongly encouraged to view a 3D interactive version of the figure that has been optimized for the complexity of the data.2

Footnote 2: [https://stella-ocker.github.io/scattering_ism3d_ocker2023](https://stella-ocker.github.io/scattering_ism3d_ocker2023)

Several of the pulsars shown in Figure 12 have scattering screens well within the dust boundary of the Local Bubble, including J0437\(-\)4715, B1133+16, J1643\(-\)1224, B1508+55, and B1929+10. Of these, B1133+16 and B1929+10 both have screens within 30 pc of the Sun, which could lie near or within local interstellar clouds (Frisch et al., 2011; Linsky et al., 2022). B0355+54, B0950+08, and J1713+0747 could also have screens associated with the local interstellar clouds, if follow-up observations resolve their twofold screen location ambiguities. Pulsars B0919+06, B0834+06, and J0613\(-\)0200 all have LOSs near the superbubble GSH 238+00+09, with J0613\(-\)0200 actually intersecting the bubble over as much as 500 pc. This superbubble may extend farther above the Galactic plane (higher \(Z\)) than the rough representation in the 3D version of Figure 12 (Ocker et al., 2020). One pulsar LOS in Figure 12 directly intersects a cluster of local molecular clouds, but shows no evidence of scattering accrued by the intersection: J1909\(-\)3744 passes through Corona Australis at about 150 pc from the observer, but shows evidence for only one dominant scattering screen at a distance of about 600 pc (Askew et al., 2023). It remains difficult to associate any of the scattering screens presented here with the boundary of the Bubble, due not only to uncertainties in the scattering screen distances but also the modeled Bubble surface. The Local Bubble surface shown in Figure 12 represents the inner surface of the Bubble (not the peak extinction), which could be offset from any related ionized scattering structure by as much as 25 pc or more. The exact offset expected between the inner surface of the Bubble traced by dust and any plasma boundary relevant to radio scattering is difficult to estimate, as it depends on the 3D distribution of stars, their parallax uncertainties, the uncertainties on individual extinction to the stars, and the specifics of the inversion algorithms used to infer the dust extinction boundary. Recently, Liu et al. (2023) argued that scattering screens for J0613\(-\)0200 and J0636+5128 are associated with the edge of the Local Bubble, based on the same dust extinction maps that informed the model used here (Lallement et al., 2019; Pelgrims et al., 2020). 
Given that the Bubble is such a large-scale feature, one would expect there to be evidence of scattering screens at the edge of the Bubble for many more pulsar LOSs, and it remains possible that follow-up observations of the pulsars studied here will reveal additional evidence connecting pulsar scintillation arcs to the Bubble's boundary. However, making such connections will require ruling out the possible chance coincidence of many small scattering structures, as our observations of B1929+10 indicate that scintillation arcs can evidently be produced in large numbers far from the Bubble surface.

Figure 11: H\(\alpha\) emission observed in a \(10^{\circ}\) by \(10^{\circ}\) area around the LOS to B0355+54 (from the all-sky map provided by Finkbeiner, 2003). The pulsar LOS intersects the edge of an HII region included in the Sharpless catalog, Sh \(2-205\) (Sharpless, 1959), which may plausibly account for one of the scintillation arcs observed from this pulsar.

## 7 Discussion

### Key Results

In this study we have conducted sensitive observations of scintillation arcs for eight pulsars using FAST. Scintillation arcs were detected from all pulsars in the study, tracing a broad distribution of scattering structures in the local ISM. Several pulsars in our sample show low-curvature, truncated arcs. For B0355+54, B0950+08, and B1929+10 these arcs could be associated with their putative bow shocks for a plausible range of screen configurations and ISM densities. Comparison of scattering screen constraints to local ISM structures observed in multi-wavelength continuum maps also suggests that one of the scattering screens for B0355+54 could coincide with the HII region Sh 2\(-\)205. Follow-up observations are needed to confirm or refute these associations. At least nine arcs are observed toward B1929+10, which is just \(361\pm 9\) pc away (Chatterjee et al., 2004). This finding demonstrates that with sufficient sensitivity, weakly scintillating, nearby pulsars can reveal a remarkably high concentration of scattering screens. B1929+10 is also one of only a few pulsars that shows evidence of TSAS detected via time-variable HI absorption of the pulsar spectrum (Stanimirovic et al., 2010; Stanimirovic and Zweibel, 2018). The possible prevalence of arc "forests" (as seen for another pulsar by D. Reardon et al., submitted) illustrates a strong need for scintillation arc theory that can accommodate \(\gg 2\) screens. A high number density of arcs and screens for nearby pulsars may support a picture in which more distant, strongly scintillating pulsars trace an extended medium made up of many screens (e.g. Stinebring et al., 2022). However, it remains possible that highly specific conditions are needed to observe many arcs at once (e.g., some combination of observing conditions including radio frequency and sensitivity, and astrophysical conditions including screen strength and alignment). One possibility is that packed distributions of screens only occur in certain ISM conditions. For example, B0950+08, the other nearby, weakly scintillating pulsar in our sample, shows only two arcs and has an overall deficit of scattering compared to other pulsars at comparable distances, suggesting that its LOS may be largely dominated by the hot ionized gas thought to pervade the Local Bubble. These mixed findings imply a clear need for a uniform, deep census of scintillation arcs towards pulsars within 500 pc of the Sun, ideally through a commensal study of both arcs and TSAS to elucidate the relationship between small-scale structure in both ionized and atomic phases of the ISM.

Figure 12: Locations of pulsar LOSs, scattering screens inferred from scintillation arcs, and simple models of discrete ISM features based on continuum maps, in heliocentric Galactic Cartesian coordinates looking down onto the Galactic plane. An inset shows a close-up of the region \(\pm\)300 pc around the origin. A total of 14 pulsars are shown, eight from this work (O23) and seven from previous studies noted in the legend. Pulsar names in the legend are ordered clockwise starting from the LOS to B1508+55, which is located nearly parallel to the \(Y\)-axis at \(X=0\) pc. Scattering screens shown for pulsars from this work correspond to the near-pulsar solutions for \(\psi=0^{\circ}\) and \(V_{\rm scr\perp}\ll V_{\rm obs\perp},\,V_{\rm psr\perp}\). The near-pulsar solutions are favored for pulsars with large transverse velocities (B0919+06, J1740+1000, B1929+10, and B1957+20), whereas for pulsars with lower transverse velocities (B0355+54, B0950+08, J1643\(-\)1224, and J1713+0747) the twofold screen-location ambiguity cannot be formally broken by our observations. These screen distances are thus notional and have formal uncertainties largely dominated by the uncertainties on the arc curvature (see Figure 13 for examples). The screens shown from other pulsar studies are more precisely determined from either arc curvature variations or VLBI scintillometry. Models for discrete ISM features include the Local Bubble, based on a spherical harmonic decomposition of dust extinction boundaries (here we show the decomposition mode \(l=6\) from Pelgrims et al., 2020), the superbubble GSH 238+00+09 (Heiles, 1998; Lallement et al., 2014), and the Per-Tau Shell (Bialy et al., 2021). Local molecular clouds are also shown (Zucker et al., 2020). Three HII regions (Sh 2\(-\)7, 2\(-\)27, and 2\(-\)205) and one supernova remnant (S147, associated with pulsar J0538+2817) are shown. The spatial parameters used to model each ISM feature are explained in Appendix B. A 3D interactive version of this figure is available at [https://stella-ocker.github.io/scattering_ism3d_ocker2023](https://stella-ocker.github.io/scattering_ism3d_ocker2023). The interactive version can be zoomed, rotated, and modified to only show specific legend entries.

### Origins of Scattering Screens

One of the core questions at the heart of scintillation arc studies is to what extent arcs are produced by scattering through nascent density fluctuations associated with extended ISM turbulence, or through non-turbulent density fluctuations associated with discrete structures. Both of these processes can produce arcs, albeit of different forms. The variety of arc properties seen even within our sample of just eight pulsars broadly affirms a picture in which pulsar scattering is produced through a mixture of turbulence and refractive structures whose relevance depends on LOS, and likely also observing frequency. Of the pulsars shown in Figure 12, there are few direct and unambiguous connections between their scattering screens and larger-scale ISM features, even for those pulsars with precise scattering screen distances. To some degree this lack of association is to be expected, as scintillation traces ISM phenomena at much smaller spatial scales than typical telescope resolutions. In future work we will expand upon the local ISM features shown in Figure 12 to examine a larger census of potential scattering media (e.g., the Gum Nebula, known HII regions, etc.). 
The ISM contains a zoo of structures that are not always readily visible in imaging surveys and may not appear except in targeted searches. One example is stellar bow shocks, which can sustain turbulent wakes and emissive nebulae up to 1000s of au in scale, such as those seen for the H\(\alpha\)-emitting bow shock of B2224+65 (Cordes et al., 1993) and the X-ray PWN of B1929+10 (Kim et al., 2020). An updated Gaia census of stars within the solar neighborhood suggests a mean number density of stars \(\sim 0.06-0.08\) pc\({}^{-3}\), depending on the stellar types included (Reyle et al., 2022). Of these, only a fraction will be moving faster than the local magnetosonic speed and hence generate bow shocks (e.g. Shull and Kulkarni (2023) assume a mean number density \(\sim 0.01\) pc\({}^{-3}\) for stars with bow shocks). Bow shock nebulae \(\sim 1000\) au in size will have a volume filling factor \(f_{V}\approx N_{\rm bs}(R_{\rm bs}/R_{\rm ISM})^{3}\sim 0.01(1000\ {\rm au}/1\ {\rm pc})^{3}\sim 10^{-9}\) for a number of bow shocks \(N_{\rm bs}\) with spatial extents \(R_{\rm bs}\) within an ISM volume of radius \(R_{\rm ISM}\). The equivalent mean free path is \(\sim 1\) Mpc. Allowing for larger bow shock sizes could bring the mean free path down to \(\sim\) kpc. Regardless, this rough estimation suggests that bow shock nebulae could only comprise a very small fraction of scattering media along pulsar LOSs.

High-resolution magnetohydrodynamic simulations of thermally unstable turbulent gas suggest that dense, elongated plasmoids may be a ubiquitous feature of both the cold and warm phases of the ISM (Fielding et al., 2023). These plasmoids have been simulated down to spatial scales \(\sim 10^{3}\) au and can result in density deviations \(\sim 10^{3}\times\) nominal values, in addition to changes in magnetic field direction across their current sheets. It thus appears possible that the extended ISM spontaneously produces some scattering structures through plasmoid instabilities, in addition to turbulence and deterministic processes involving stars and nebulae. Future work should compare the rate at which these plasmoids form, their lifetimes, and volume filling factor to the distribution of known scattering screens.

Pulsar scintillation remains one of the few astrophysical probes of sub-au to au-scale structures in the ISM. While the ubiquity of scintillation arcs is now well-established for many LOSs (Stinebring et al., 2022; Wu et al., 2022; Main et al., 2023b), high-resolution studies of pulsar scattered images using scintillometry have only been applied to a limited number of pulsars.
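As a quick numeric check of the bow-shock filling-factor estimate above, the short sketch below reproduces both \(f_{V}\) and the \(\sim 1\) Mpc mean free path; taking \(\lambda_{\rm mfp}=(n_{\rm bs}\pi R_{\rm bs}^{2})^{-1}\) is our assumed reading of the "equivalent mean free path".

```python
import numpy as np

PC_PER_AU = 1.0 / 206265.0

n_bs = 0.01                  # bow shock number density [pc^-3] (Shull & Kulkarni 2023)
r_bs = 1000.0 * PC_PER_AU    # nebula radius: 1000 au expressed in pc

f_v = n_bs * r_bs ** 3                    # filling factor per pc^3 of ISM -> ~1e-9
mfp = 1.0 / (n_bs * np.pi * r_bs ** 2)    # mean free path [pc] -> ~1.4e6 pc ~ 1 Mpc

print(f"f_V ~ {f_v:.1e}, mean free path ~ {mfp / 1e6:.1f} Mpc")
```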
Figure 13: Screen distance estimates for B0355+54 (top) and B1929+10 (bottom). The pulsar locations at \(1.09^{+0.23}_{-0.16}\) kpc (B0355+54) and \(361\pm 9\) pc (B1929+10) are shown by the black dashed lines and shaded grey error regions. Screen locations are shown with an arbitrary vertical offset for visualization purposes. The errors shown on the screen distances include the uncertainties in pulsar distance, transverse velocity, and scintillation arc curvature, but do not account for the unconstrained \(\psi\) and \(V_{\rm scr\perp}\). In all of these cases, the error bars shown are largely dominated by the uncertainty in arc curvature. For B0355+54 there is degeneracy between the near-pulsar solutions (green points) for screen distance and the near-observer solutions (brown points). The near-pulsar screen distances should be regarded as lower limits, whereas the near-observer distances are upper limits. B1929+10 has a large enough transverse velocity to yield single distance estimates for each screen, and for this pulsar only screens for Arcs A–I are shown; these screen distances are lower limits. The blue dashed lines indicate where each LOS crosses the inner surface of the Local Bubble, and the shaded blue regions indicate \(\pm 25\) pc around the intersection point. For B0355+54, the adopted location and width of the HII region Sh \(2-205\) are shown in yellow.

While inferences of ISM structure at the spatial scales probed by scintillation will benefit greatly from the application of scintillometry to a broader sample of LOSs, our study demonstrates that single-dish observations of scintillation arcs continue to provide insight, particularly as increasing telescope sensitivity and spectral resolution appears to reveal more arcs than previously identified for some pulsars.

## Acknowledgements

SKO, JMC, and SC are supported in part by the National Aeronautics and Space Administration (NASA 80NSSC20K0784). SKO is supported by the Brinson Foundation through the Brinson Prize Fellowship Program. TD is supported by an NSF Astronomy and Astrophysics Grant (AAG) award number 2009468. VP acknowledges funding from a Marie Curie Action of the European Union (grant agreement No. 101107047). SKO, JMC, SC, DS, and TD are members of the NANOGrav Physics Frontiers Center, which is supported by NSF award PHY-2020265. The authors acknowledge the support staff at FAST for managing the observations used in this work, and David Pawelczyk and Bez Thomas at Cornell for their technical contributions to data transport and delivery. This work is based in part on observations at Kitt Peak National Observatory at NSF's NOIRLab (NOIRLab Prop. ID 17B-0333; PI: T. Dolch), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. The authors are honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham. TD and CG thank the Hillsdale College LAUREATES program and the Douglas R. Eisenstein Student Research Gift for research and travel support. This work also benefited from the input of Thankful Cromartie, Ross Jennings, Robert Main, Joseph Lazio, Joris Verbiest, Henry Lennington, Joseph Petullo, Parker Reed, and Nathan Sibert. This work made use of Astropy ([http://www.astropy.org](http://www.astropy.org)), a community-developed core Python package and an ecosystem of tools and resources for astronomy (Astropy Collaboration et al., 2013, 2018, 2022).

## Data Availability

Data is available upon request to the corresponding author (SKO), and unprocessed observations in filterbank format are available through the FAST Data Center by contacting [email protected]. The Python program and input data for creating the 3D version of Figure 12 are available at [https://github.com/stella-ocker/ism-viz](https://github.com/stella-ocker/ism-viz). The Local Bubble model is available on Harvard Dataverse: [https://doi.org/10.7910/DVN/RHPVNC](https://doi.org/10.7910/DVN/RHPVNC). KPNO data are available on the NOIRLab Astro Data Archive: [https://astroarchive.noirlab.edu/](https://astroarchive.noirlab.edu/). The molecular cloud distance catalog is available on Harvard Dataverse: [https://doi.org/10.7910/DVN/07L7YZ](https://doi.org/10.7910/DVN/07L7YZ).
2309.06592
Mobile Object Tracking in Panoramic Video and LiDAR for Radiological Source-Object Attribution and Improved Source Detection
The addition of contextual sensors to mobile radiation sensors provides valuable information about radiological source encounters that can assist in adjudication of alarms. This study explores how computer-vision based object detection and tracking analyses can be used to augment radiological data from a mobile detector system. We study how contextual information (streaming video and LiDAR) can be used to associate dynamic pedestrians or vehicles with radiological alarms to enhance both situational awareness and detection sensitivity. Possible source encounters were staged in a mock urban environment where participants included pedestrians and vehicles moving in the vicinity of an intersection. Data was collected with a vehicle equipped with 6 NaI(Tl) 2 in. × 4 in. × 16 in. detectors in a hexagonal arrangement and multiple cameras, LiDARs, and an IMU. Physics-based models that describe the expected count rates from tracked objects are used to correlate vehicle and/or pedestrian trajectories to measured count-rate data through the use of Poisson maximum likelihood estimation and to discern between source-carrying and non-source-carrying objects. In this work, we demonstrate the capabilities of our source-object attribution approach as applied to a mobile detection system in the presence of moving sources to improve both detection sensitivity and situational awareness in a mock urban environment.
M. R. Marshall, R. J. Cooper, J. C. Curtis, D. Hellfeld, T. H. Y. Joshi, M. Salathe, K. Vetter
2023-09-12T20:38:23Z
http://arxiv.org/abs/2309.06592v1
Mobile Object Tracking in Panoramic Video and LiDAR for Radiological Source-Object Attribution and Improved Source Detection

###### Abstract

The addition of contextual sensors to mobile radiation sensors provides valuable information about radiological source encounters that can assist in adjudication of alarms. This study explores how computer-vision based object detection and tracking analyses can be used to augment radiological data from a mobile detector system. We study how contextual information (streaming video and LiDAR) can be used to associate dynamic pedestrians or vehicles with radiological alarms to enhance both situational awareness and detection sensitivity. Possible source encounters were staged in a mock urban environment where participants included pedestrians and vehicles moving in the vicinity of an intersection. Data was collected with a vehicle equipped with 6 NaI(Tl) \(2~{}\mathrm{in.}\times 4~{}\mathrm{in.}\times 16~{}\mathrm{in.}\) detectors in a hexagonal arrangement and multiple cameras, LiDARs, and an IMU. Physics-based models that describe the expected count rates from tracked objects are used to correlate vehicle and/or pedestrian trajectories to measured count-rate data through the use of Poisson maximum likelihood estimation and to discern between source-carrying and non-source-carrying objects. In this work, we demonstrate the capabilities of our source-object attribution approach as applied to a mobile detection system in the presence of moving sources to improve both detection sensitivity and situational awareness in a mock urban environment.

Source attribution, object detection, radiological search, mobile object tracking

## I Introduction

Radiological surveillance for gamma-ray emitting material in large-scale urban environments, such as city blocks, is an important mission in homeland security. This involves searching for often weakly emitting gamma-ray sources in environments that are cluttered with pedestrians and vehicles, which makes the detection and localization of these sources extremely difficult. In addition, when a source is detected and a radiological alarm occurs, alarm adjudication needs to be performed quickly due to the motion of the mobile detector system relative to objects in the scene. This can be challenging given the cluttered and dynamic nature of urban environments. The addition of contextual sensors (e.g., streaming video and LiDAR) to a mobile detector system can provide valuable information about radiological source encounters that can assist in adjudication of alarms by providing associations between objects and the radiological data. In this paper, we explore the concept of fusing contextual information from streaming video or LiDAR to augment radiation detectors on a mobile detector system. Previous methods have demonstrated the ability of simple contextual information (e.g. GPS and a camera) on a mobile detector system to improve situational awareness by overlaying a reconstructed 2D gamma-ray image onto a camera image [1]. More recent works have explored and demonstrated 3D gamma-ray imaging with free-moving handheld devices [2, 3, 4] by leveraging advances in sensor and computational technology. These methods utilize a set of contextual sensors, such as LiDAR and/or streaming video, in conjunction with radiation sensors and algorithms that produce pose (position and orientation) estimates of the free-moving device in a consistent reference frame. 
All of the contextual sensor information, radiological data, and pose estimates are processed in real-time to produce 3D visualizations of both the scene and gamma-ray image as the device moves through the environment. While these methods can improve situational awareness by enabling 2D or 3D reconstructions of gamma-ray sources, they focus on static sources in stationary environments and currently are not well suited for dynamic environments with moving sources. Thus, alarm adjudication by an operator in a cluttered environment with dynamic objects would still be difficult to perform quickly and efficiently. Recent approaches have used contextual-radiological data fusion to correlate trajectories from tracked objects with radiological data to better improve localization capabilities for moving sources compared to conventional reconstruction approaches. Several works have used a LiDAR point cloud projected onto an XY-plane [5, 6] in a constrained environment to correlate trajectories from 2D tracked objects with radiological data to attribute radiological sources to objects. Previous work by our group used advances in computer-vision-based object detection to detect physical objects (i.e., pedestrians and vehicles) using LiDAR and video independently in an environment with minimal constraints in extent [7]. We then demonstrated the ability to reliably track the detected objects in 3D and to discern between source-carrying and non-source-carrying objects in a scene. The findings from this work also suggested that contextual information could be used to improve detection sensitivity. This work was performed using a static contextual sensor system with a co-located NaI(Tl) detector. Here we build upon this previous work by applying this source-object attribution analysis concept to a mobile detector system equipped with video and LiDAR as well as six \(2~{}in.\times 4~{}in.\times 16~{}in.\) NaI(Tl) detectors in a hexagonal arrangement. Similar to the methods presented in our previous work [7], object-detecting convolutional neural networks [8, 9] are applied to detect objects in image frames or LiDAR scans in real-time (\(\sim\)15 Hz), and a Kalman filter-based tracking algorithm with parameters specific to pedestrians and vehicles is used to track detected objects between video frames or LiDAR scans. With our mobile system, it has been observed that the Kalman filter-based tracking algorithm performs more consistently if the detections are transformed into a static, consistent reference frame. In this work, two different methods for computing pose information for the mobile system in a consistent reference frame are used, and we explore the impact these two methods have on tracking and attribution performance. Under the hypothesis that the object is associated with the source, we then generate models for a tracked object that describe the expected count rate from the tracked object in each detector within the detector array. Subsequently, the models for each detector are simultaneously fit to the observed count-rate data in each respective detector within the array. Finally, to identify the trajectories that are most (and least) likely to be associated with the radiological data, Poisson deviance is used as a goodness-of-fit metric [10]. The source-object attribution analysis approach is independently evaluated using both video and LiDAR. Additionally, in our previous work, we introduced a track-informed integration window analysis to maximize the signal-to-noise ratio (SNR) for a tracked object. 
This is extended here to a detector array. We identify time segments across the detector array that, when combined, will enable improved detection sensitivity compared to a summed response of the detector array (i.e., treating the detector array as a monolithic detector) or different fixed integration windows. We hereafter refer to identifying time segments across the detector array as an optimal configuration of detectors. The article is structured as follows: the object detection, tracking, and source-object attribution analysis pipeline is discussed, as well as the mobile system it operates on, in Section II. In Section III and Section IV, we demonstrate our source-object attribution analysis on a mobile system and improve detection sensitivity using tracking information in a mock urban environment with pedestrians and vehicles, respectively. Finally, a summary and future work are presented in Section V.

## II Methods

For a more detailed discussion of the object detection, tracking, and analysis pipeline, refer to our previous work focusing on a static detection system [7].

### _LEMURS_

In this work, we used the LiDAR Enhanced Mobile Urban Radiation Search (LEMURS) vehicle [11] (Fig. 1). LEMURS consists of two 16-beam LiDARs mounted on the roof of the vehicle, multiple IMU and INS devices, a 360\({}^{\circ}\) panoramic Occam Omni 60 camera, and six \(2~{}in.\times 4~{}in.\times 16~{}in.\) NaI(Tl) detectors arranged in a hexagonal array. Each detector is equipped with an Ortec DigiBASE [12] multichannel analyzer and configured to publish list-mode gamma-ray interaction data packets at 20 Hz with sub-millisecond synchronization between detectors. The contextual sensor data and gamma-ray data are acquired within the Robot Operating System (ROS) [13] across multiple single-board computers with clocks synchronized by the network time protocol (NTP) [14].

Fig. 1: The LEMURS system, which consists of a panoramic Occam Omni 60 camera, two LiDARs, multiple IMU and INS devices, and six \(2~{}in.\times 4~{}in.\times 16~{}in.\) NaI(Tl) detectors.

### _Object Detection and Tracking_

A lightweight, open-source object detector algorithm called you only look once (YOLO) was utilized to detect objects in the field of view (FOV) of the Occam camera [8], along with a ROS implementation, YOLO ROS [15], which was modified to use a pre-trained YOLOv4-tiny object detection neural network [8]. From YOLO ROS, 2-D object detection bounding boxes are returned in the image coordinate system along with the object label and confidence score. Subsequently, the bounding boxes are converted to 3-D bounding boxes by inferring the distance of the object in the camera image. This is done by using both the camera intrinsic parameters and the object detection label. For detected pedestrians and vehicles, nominal height information is assumed. A detected object with the label person has an assumed nominal height of 1.75 meters (m) [16], and detected cars, trucks, motorcycles, and buses have nominal heights of 1.43 m, 1.80 m, 0.80 m, and 2.5 m, respectively. A separate object detection process is run for each Occam camera. The camera frames are synchronized, and the object detections from all the cameras are collated. This is done to avoid double-counting detections that take place in regions where the camera FOVs overlap.
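The exact projection used for the nominal-height distance inference described above is not spelled out; a standard pinhole-camera reading would look like the following sketch, where \(d=fH/h\) relates the focal length in pixels \(f\) (from the camera intrinsic matrix), the assumed nominal height \(H\), and the bounding-box height in pixels \(h\). The function name and `focal_px` parameter are our own illustrative choices.

```python
NOMINAL_HEIGHT_M = {"person": 1.75, "car": 1.43, "truck": 1.80,
                    "motorcycle": 0.80, "bus": 2.5}

def distance_from_bbox(box_height_px, label, focal_px):
    """Pinhole-model range estimate: d = f * H / h, where H is the nominal
    physical height for the detected label and h is the bounding-box height."""
    return focal_px * NOMINAL_HEIGHT_M[label] / box_height_px

# e.g., a pedestrian spanning 120 px with a 1000 px focal length -> ~14.6 m
print(distance_from_bbox(120.0, "person", 1000.0))
```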
To detect objects in LiDAR-generated point clouds, SECOND [9] with the PointPillars fast feature encoding [17] was used. Two 360\({}^{\circ}\) scans from each LiDAR are concatenated to produce sufficiently dense point clouds for inference using SECOND. To remove motion blur in the resulting point clouds, LEMURS LiDAR scans are transformed into a world-fixed frame before being aggregated. The transformation to the world-fixed frame was found using two different methods. In the first method, the pose estimates of the LEMURS vehicle in a world-fixed frame are calculated with Google Cartographer SLAM [18] using the IMU and LiDAR data. The other method involved using a GPS stabilized by an INS [19] to track the pose estimates of the LEMURS vehicle in a world-fixed frame. After aggregation, the point cloud is transformed back into the reference frame of LEMURS before inference because SECOND assumes the sensor is in the center of the frame. The timestamp used to map the point cloud back to the LEMURS reference frame is the average of all the timestamps from the point cloud ROS messages that were used to generate the aggregated point cloud. The message format for the LiDAR detected objects follows the same format as YOLO ROS. Additionally, it should be noted that SECOND provides 3-D bounding boxes because the depth of an object is directly measured with LiDAR and does not need to be inferred.

Detected objects from both video and LiDAR are tracked using the modified Kalman-filter-based Simple Online Realtime Tracking (SORT) algorithm discussed in [7]. In the modified algorithm, 3-D bounding boxes are represented as multivariate normals (MVN) and the Hellinger Distance (HD) is used for data association, which is a measure of similarity between two MVNs and is scaled between 0 and 1 [20]. MVNs and HD are used for tracking over the traditional method of intersection-over-union (IOU) for two reasons. First, Kalman filters use MVNs to represent state variables, which we decided to leverage for tracking; second, MVNs enable tracking across multiple cameras by allowing for uncertainty in object extent. The bounding boxes are represented as MVNs by making the center of each bounding box the mean of the MVN, with the diagonal elements of the MVN's covariance matrix set to scaled versions of the bounding box dimensions. Different scale factors are used for LiDAR and video because video has more uncertainty in an object's distance from the system, since that information is inferred. Additionally, SECOND provides the heading (yaw) of bounding boxes, and this information was used to rotate the covariance matrix, populating its off-diagonal elements. Two sets of state variables, position (\(x\), \(y\), and \(z\)) and velocity (\(v_{x}\), \(v_{y}\), and \(v_{z}\)), are tracked with our modified Kalman filter. The velocity uncertainties in the covariance matrix are adapted according to the detected object's label to enable simultaneous tracking of both pedestrians and vehicles; we use 4.44 and 0.28 \(m^{2}/s^{2}\) for vehicles and pedestrians, respectively. The velocity uncertainties and HD threshold were found by running an optimization on scenes with either only vehicles or only pedestrians present, seeking the lowest number of tracked objects produced while ensuring a limited number of incorrect associations by the data association algorithm. 
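For concreteness, the closed-form Hellinger distance between two MVNs used for data association can be sketched as follows (a minimal implementation via the Bhattacharyya coefficient; any numerical safeguards used in the actual tracker are not specified in the text):

```python
import numpy as np

def hellinger_mvn(mu1, cov1, mu2, cov2):
    """Hellinger distance between N(mu1, cov1) and N(mu2, cov2).
    Returns a value in [0, 1]: 0 for identical, 1 for non-overlapping MVNs."""
    cov_bar = 0.5 * (cov1 + cov2)
    # Bhattacharyya coefficient for two Gaussians
    bc = (np.linalg.det(cov1) ** 0.25 * np.linalg.det(cov2) ** 0.25
          / np.sqrt(np.linalg.det(cov_bar)))
    diff = mu1 - mu2
    bc *= np.exp(-0.125 * diff @ np.linalg.solve(cov_bar, diff))
    return np.sqrt(max(0.0, 1.0 - bc))

# Two 3-D boxes ~1.5 m apart with ~1 m^2 position variance -> HD ~ 0.5
mu_a, mu_b = np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
print(hellinger_mvn(mu_a, np.eye(3), mu_b, np.eye(3)))
```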
To perform data association between detections and their most likely corresponding tracks, the HD is calculated for each possible detection and track pair, creating a matrix of dimensions \(D\times T\), where \(D\) is the total number of detections present and \(T\) is the total number of available tracks. Linear assignment (also known as the Hungarian Method) is then applied to the matrix [21]. If a detection and a track have a calculated HD of less than 0.8, they are consolidated to a single track. In addition, if two detections have a calculated HD of less than 0.6 when they are transformed into the world-fixed frame, the two detections are consolidated to a single detection. This is to prevent tracking the same object more than once. With a mobile system, objects in the scene should be invariant to the motion of LEMURS, which is not the case in a body-fixed frame. To generate pose estimates of LEMURS in a world-fixed frame, INS or SLAM information is processed, and pose information is produced at a rate of 10 Hz. It is thought that a navigational system that relies on GPS to produce pose estimates of the system's location in a global frame would have degraded tracking and attribution performance in an urban environment compared to applying SLAM, because urban environments are cluttered with buildings that can occlude or reflect signals from GPS satellites to the LEMURS system. To test this concept, the tracking and source-object attribution performance of using SLAM or a GPS stabilized by an INS (hereafter referred to as INS) to produce pose estimates of the LEMURS system in a world-fixed frame are compared.
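A minimal sketch of this association step, applying `scipy`'s Hungarian solver to the \(D\times T\) matrix of Hellinger distances (e.g., built with `hellinger_mvn` above) and the 0.8 gate described in the text; in SORT-style trackers, unmatched detections would then seed new tracks:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cost, gate=0.8):
    """Match detections (rows) to tracks (columns) by minimizing total HD,
    then keep only pairs passing the gating threshold. Returns the accepted
    (detection, track) pairs and the indices of unmatched detections."""
    rows, cols = linear_sum_assignment(cost)
    matches = [(d, t) for d, t in zip(rows, cols) if cost[d, t] < gate]
    unmatched = sorted(set(range(cost.shape[0])) - {d for d, _ in matches})
    return matches, unmatched

# Example: 2 detections vs. 3 existing tracks
cost = np.array([[0.15, 0.90, 0.85],
                 [0.95, 0.40, 0.88]])
print(associate(cost))  # -> ([(0, 0), (1, 1)], [])
```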
### _Modeling and Fitting Trajectories to Radiological Data_

When a radiological alarm occurs, the attribution analysis is triggered. In this case, a non-negative matrix factorization (NMF) based spectroscopic anomaly detection algorithm, described in [22], is used to determine the presence of an anomaly and perform source identification. The NMF-based anomaly detection algorithm is run independently on each detector within the detector array, and if a radiological alarm is triggered for any detector, the attribution analysis is triggered for all detectors. The attribution analysis is performed on all trajectories that are within 3 seconds of the start and stop of the radiological alarm. The goal of the attribution analysis is to identify the trajectories that are most (and least) likely to have been associated with an alarm. This is done by assuming each track is responsible for the radiological alarm and modeling the expected count rate in each detector from a given track. For a given discrete time step, \(i\), the expected number of detected events, \(c_{i}\), within a spectral region of interest (ROI) centered at \(E\), from a radioactive source with gamma-ray flux \(\alpha\) in the presence of a constant background \(b\) can be described by

\[c_{i}(E)=\frac{\epsilon(\hat{\Omega},E)\,\alpha\,e^{-\mu(E)r_{i}}}{4\pi r_{i}^{2}}\cdot\Delta t_{i}+b\,, \tag{1}\]

where \(\epsilon\) is the effective area of the detector, \(r_{i}\) is the distance from the detector to the source, \(\Delta t_{i}\) is a given integration time, and \(\mu\) is an energy- and medium-dependent linear attenuation coefficient. The effective area is a function of energy and the direction between the tracked object's position and the detector, \(\hat{\Omega}\). The spectral ROI is defined using the isotope ID provided by the NMF alarming algorithm. The isotope ID, together with the direction between the object and the detector \(\hat{\Omega}\) at any given time, is used to extract the appropriate \(\epsilon\) from a look-up table of pre-computed response matrices. A global best-fit model for each trajectory is found by simultaneously maximizing the Poisson likelihood between Eq. 1 for each detector and the observed count-rate data in each respective detector with a maximum likelihood estimation algorithm [23], where \(\alpha\) and \(b\) are free parameters. Details of accounting for the total attenuation coefficient, \(\mu\), from tracked objects are described in more detail in [7]. However, here we extend the calculation of \(\mu\) to also include anisotropic attenuation from tracked objects. Depending on the location of a source within a tracked object, the amount of attenuation imposed by occluding material and shielding within the object can change throughout an alarm encounter as the source carrier and LEMURS drive past each other. This has the potential to reduce the ability of the source-object attribution analysis to properly model the expected count rate, which could limit attribution performance as well as affect the track-informed integration window analysis described in Section II-E. To better handle anisotropic shielding from tracked objects, assumptions are made for both pedestrians and vehicles. With pedestrians, the radiological source is assumed to be in a backpack behind the object; whereas, with a vehicle, the source is assumed to be inside the vehicle. In order to model how attenuation changes as a function of angle relative to LEMURS, simulations were run using the Monte Carlo simulator MEGAlib [24]. A pedestrian was modeled using the composition of a human [25], and a vehicle was modeled using a 1 m thickness of Aluminum (Al) for the engine block of the vehicle and 5 cm thickness of Al for the car doors and vehicle frame. A source was placed either behind the pedestrian or centered in the vehicle's trunk. A NaI(Tl) detector was moved in \(5^{\circ}\) azimuthal increments around either object at a constant elevation in line with the source. The amount of attenuation present at each azimuth was calculated and applied to Eq. 1 for a given angle relative to LEMURS. Fig. 2 displays an example of how the estimated amount of attenuation from a tracked object during an alarm encounter is determined. It should be noted that the heading of each object is necessary for this calculation, and thus it can only be performed with LiDAR-detected objects.

Fig. 2: A top-down LiDAR image demonstrating how the estimated amount of attenuation present at a given angle \(\Theta\) from a tracked object is determined. The tracking bounding boxes have been omitted for simplicity. The white dots and grid lines represent LiDAR returns and 1 m\({}^{2}\) areas, respectively, and the orientation of the two LiDARs and detector array on LEMURS are indicated by the red (x-axis), green (y-axis), and blue (z-axis) axes.

The energy- and direction-dependent response function was generated using MEGAlib [24]. The simulation included a detailed model of the detector array and a simplified model for the vehicle, operators, electronics, and mechanical supports [26]. The gamma-ray response function was generated by modeling a radiological point-source located 10 m from the detector array center and 1.3 m off the ground, which corresponds to the elevation at the center of the detector crystals. Separate simulations were performed with the source moved at 10\({}^{\circ}\) increments in azimuth around LEMURS for a total of 36 positions. At each source position (i.e., direction), 2.7 \(\times\) 10\({}^{12}\) particles were simulated from an energy-dependent emission distribution normalized to source activity [27]. For each detector, the effective area (per source activity) was computed using:

\[\epsilon=\frac{4\pi\cdot R_{\text{sim}}^{2}\cdot X_{\text{cnts}}}{N_{\text{SimParticles}}}, \tag{2}\]

where \(N_{\text{SimParticles}}\) is the number of particles emitted into 4\(\pi\), \(R_{\text{sim}}\) is the source distance from the detector, and \(X_{\text{cnts}}\) is the number of counts within the relevant peak-energy ROI. Finally, to generate a 4\(\pi\) response function, the response function was cosine modulated with elevation. In Eq. 2, it should be noted explicitly that \(\epsilon=\epsilon(\hat{\Omega},E)\).
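A minimal sketch of the simultaneous Poisson maximum-likelihood fit across the detector array, assuming the geometric factors \(\epsilon(\hat{\Omega},E)e^{-\mu(E)r_{i}}\Delta t_{i}/(4\pi r_{i}^{2})\) have already been evaluated along the candidate trajectory for every detector, and treating \(b\) as a single counts-per-bin background shared across detectors (whether \(b\) is shared or per-detector is not specified above):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_like(params, counts, geom):
    """Poisson negative log-likelihood of Eq. 1 summed over all detectors.
    counts, geom: arrays of shape (n_detectors, n_timesteps); geom holds the
    precomputed eps*exp(-mu*r)*dt/(4*pi*r^2) factors for one candidate track."""
    alpha, b = params
    lam = np.clip(alpha * geom + b, 1e-12, None)  # expected counts per bin
    return np.sum(lam - counts * np.log(lam) + gammaln(counts + 1.0))

def fit_track(counts, geom):
    """Return the best-fit (alpha, b) and the minimized NLL for one trajectory."""
    res = minimize(neg_log_like, x0=np.array([1.0, max(counts.mean(), 1e-3)]),
                   args=(counts, geom), method="L-BFGS-B",
                   bounds=[(0.0, None), (1e-6, None)])
    return res.x, res.fun
```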
### _Attributing Trajectories to Radiological Data_

Similar to [7], here we use the Poisson deviance to compute a p-value between the best-fit model for each trajectory and the count-rate data. Subsequently, an S-value (\(-\log_{2}(p)\)) is calculated from the p-value and used to reject trajectories that are inconsistent with the data and are unlikely to be associated with the radiological alarm. The Kalman filter's localization prediction is based on the center of an object's bounding box, so the source is assumed to be at the center of an object for the attribution calculation. To account for any position-source offsets that might exist, the modeled trajectory is calculated multiple times over a 0.5 second window while shifting the model trajectory by 0.1 seconds. The lowest S-value in this interval is used as the best-fit model for the trajectory. This approach does not account for potential offsets that exist in other dimensions (elevation and standoff). To reduce false positives associated with simply fitting background (i.e., incorrectly attributing the source to an object that is not responsible for the radiological alarm) in the attribution analysis, an additional metric is applied to reject trajectories. This is done by calculating Eq. 1 with and without the inclusion of \(\alpha\) in the best-fit. Then, using the Bayesian Information Criterion (BIC) [28], it is determined whether a background-only model (\(\alpha\) set to 0 in Eq. 1) or a source-plus-background model better describes the observed data. If a background-only model better describes the data, the trajectory is most likely not responsible for the radiological alarm. This method is described in more detail in [7].
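The rejection machinery can be sketched as follows; we assume the p-value comes from a \(\chi^{2}\) approximation to the Poisson deviance (the degrees-of-freedom convention is not stated above) and use the standard form \(\mathrm{BIC}=k\ln n+2\,\mathrm{NLL}\):

```python
import numpy as np
from scipy.special import xlogy
from scipy.stats import chi2

def poisson_deviance(x, lam):
    """Poisson deviance between observed counts x and model means lam."""
    return 2.0 * np.sum(xlogy(x, x / lam) - (x - lam))

def s_value(x, lam, n_free=2):
    """Goodness of fit as -log2(p), with p from a chi-square approximation."""
    p = chi2.sf(poisson_deviance(x, lam), df=x.size - n_free)
    return -np.log2(max(p, 1e-300))

def background_preferred(nll_src, nll_bkg, n_bins):
    """BIC comparison: source+background has 2 free parameters (alpha, b),
    background-only has 1 (b). True -> reject the trajectory."""
    bic_src = 2 * np.log(n_bins) + 2.0 * nll_src
    bic_bkg = 1 * np.log(n_bins) + 2.0 * nll_bkg
    return bic_bkg < bic_src
```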
### _Improving Detection Sensitivity_

In our previous work [7], we demonstrated that detection sensitivity could be increased by using track-informed integration windows that optimize SNR. Here we expand upon this approach to include multiple detectors, and within the detector array we identify an optimal configuration of detectors that will maximize SNR. In addition, uncertainty in an object's extent is accounted for in the track-informed integration window formulation. The time segments that, when combined, will maximize the expected SNR over a detector array for a given trajectory are found through the following analysis. Eq. 3 describes the expected SNR accumulated over \(N\) discrete time windows for a tracked object, where \(\Delta t_{i}\) is the duration of the window at time step \(i\):

\[\mathrm{SNR}=\Bigg{(}\sum_{i}^{N}s_{i}\Delta t_{i}\Bigg{)}\Bigg{(}\sum_{i}^{N}b_{i}\Delta t_{i}\Bigg{)}^{-1/2} \tag{3}\]

where \(s_{i}=\epsilon(\hat{\Omega},E)\alpha e^{-\mu(E)r_{i}}/(4\pi r_{i}^{2})\) is the photopeak count-rate, and \(b_{i}\) is the mean background rate within \(\Delta t_{i}\). Under the reasonable assumptions of constant source strength and background rate, the factor \(\alpha/\sqrt{b}\) can be factored out of Eq. 3. The sensitivity (\(\$_{T}\)), which is proportional to \(\mathrm{SNR}_{T}\) with this factor omitted, is found by

\[\mathrm{SNR}_{T}\propto\$_{T}=\frac{\sum_{i\in T}\frac{\epsilon(\hat{\Omega},E)\Delta t_{i}}{4\pi r_{i}^{2}}e^{-\mu(E)r_{i}}}{\sqrt{\sum_{i\in T}\Delta t_{i}}}, \tag{4}\]

where \(T\subseteq\{1,\ldots,N\}\) is the subset of time segments that, when combined, will maximize SNR for a given trajectory. In order to identify the optimal configuration of detectors within the array for a given alarm encounter, the models from each detector are concatenated together. The subset of measurements \(T\) that maximizes \(\$\) is calculated using the concatenated data, and the spectra from each detector's respective calculated optimal window are summed together to produce an optimal spectrum. To account for the position uncertainties of each track in the optimal integration window formulation, a Markov Chain Monte Carlo (MCMC) approach was applied [29] to the data to appropriately sample from the position uncertainties and better determine the optimal integration window. MCMC is a method that draws samples directly from the posterior probability density function (PDF) [30]. MCMC does this by creating \(M\) walkers that explore the parameter space and generate models of the data at each position. The walker vector is defined by

\[\theta_{i}=\begin{pmatrix}x_{i}\\ y_{i}\\ z_{i}\end{pmatrix}, \tag{5}\]

where \(\theta_{i}\) is the estimated position at a discrete time step \(i\) for a given trajectory. The priors for each walker are the position uncertainties in \(x_{i}\), \(y_{i}\), and \(z_{i}\) around each respective mean value. The \(\theta_{i}\) parameters are modeled to the count-rate data using Eq. 1. The source activity \(\alpha\) and background are extracted from the best-fit model for the track, and the object position is varied to minimize the negative log-likelihood. The negative log-likelihood is determined by

\[\ell(\mathbf{x}|\boldsymbol{\lambda})=[\boldsymbol{\lambda}-\mathbf{x}\odot\log\boldsymbol{\lambda}+\log[\Gamma(\mathbf{x}+1)]]^{T}\cdot\boldsymbol{1}, \tag{6}\]

where \(\odot\) denotes element-wise multiplication and \(\Gamma(\cdot)\) is the gamma function. Also, \(\boldsymbol{\lambda}\) is the best-fit model for a trajectory and \(\mathbf{x}\) is the count-rate over the span of the trajectory. The initial guess for \(\theta_{i}\) is the best-fit model for the track. A total of 400 iterations with 600 walkers were run. However, the first 100 iterations were discarded because the walkers start close to the initial guess before fully exploring the parameter space. This resulted in 180,000 samples, and a random subset of samples is chosen from the 180,000 samples to decrease the computational burden. The total time duration of each optimal integration window is summed together across each detector for a given model. The model that produces the largest time duration for the integration window across all six detectors but the lowest negative log-likelihood within the subset is chosen. It should be noted that once a model is chosen, each detector's respective optimal integration window is used, not the summed window, to identify the subset of time segments that, when combined, will maximize SNR. Additionally, the number of detectors that contribute to the optimal configuration of detectors for a given source encounter can vary. In the case of a single detector, the model that produces the largest time duration for the integration window but the lowest negative log-likelihood within the subset is chosen as the optimal window.
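The subset search itself is not spelled out above. For (near-)uniform segment durations, the subset maximizing Eq. 4 is a prefix of the segments ranked by modeled signal rate, so a prefix scan suffices; the sketch below makes that assumption:

```python
import numpy as np

def optimal_window(w, dt):
    """Select the time segments maximizing the Eq. 4 sensitivity proxy $_T.
    w[i]: modeled signal factor eps*exp(-mu*r_i)/(4*pi*r_i**2); dt[i]: duration.
    With equal-length segments, the best subset is a prefix of the segments
    sorted by decreasing w, so we evaluate the proxy on every prefix."""
    order = np.argsort(w)[::-1]
    signal = np.cumsum(w[order] * dt[order])
    time = np.cumsum(dt[order])
    proxy = signal / np.sqrt(time)
    k = int(np.argmax(proxy))
    return np.sort(order[:k + 1]), proxy[k]

# Toy example: one close approach flanked by distant, low-signal samples
w = np.array([0.1, 0.5, 2.0, 0.6, 0.1])
dt = np.full(5, 0.05)  # 20 Hz list-mode binning
print(optimal_window(w, dt))  # keeps only the closest-approach segment here
```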
A spectroscopic analysis was applied to the track-informed optimal configuration of detectors and compared to either fixed integration windows (1.0, 2.0, 3.0, and 4.0 seconds) or a track-informed optimal integration window calculated using the summed response of the detector array. For this analysis, the anomaly value (the Poisson deviance between the observed data and a mean background spectrum scaled to match the observed counts) computed by the NMF-based anomaly detection algorithm was used as a proxy for detection sensitivity, where a larger anomaly value suggests improved detection sensitivity through this track-informed analysis.

## III Source-Object Attribution in a Mock Urban Environment

The performance of the source-object attribution analysis in the presence of a mobile source was tested in a mock urban environment at Richmond Field Station (RFS). A 1.87 mCi \({}^{137}\)Cs source inside 2 cm of lead shielding was placed in the trunk of a vehicle and driven around. LEMURS and the source carrier performed straight-line drive-bys going either 10 or 20 mph past each other. Fig. 3 depicts the location of the objects present and their direction of motion overlaid on a top-down view of the intersection [31]. Both LEMURS and the source carrier drove straight for 15 m before passing in the middle of an intersection that had pedestrians on either side walking parallel to LEMURS and the source carrier. Two stationary cars were on both sides of the intersection and were perpendicular to the direction of motion of LEMURS. Additionally, the source carrier was followed by a car traveling 10 or 20 mph depending on the scenario. These scenarios were repeated at least 18 times for both speeds, and the lowest exclusion metric for a given alarm encounter is used to quantify attribution performance. Attribution was performed using the photopeak ROI (600 keV - 725 keV) for \({}^{137}\)Cs.

Fig. 3: A top-down view of the mock urban environment intersection along with the objects present and their direction of motion. The orange right-facing triangles, blue triangles, red star, or green square indicate vehicles, pedestrians, the source carrier, or LEMURS, respectively. The arrows indicate the direction of motion for each object during the alarm encounters. If an arrow is not associated with an object, the object was stationary.

### _10 mph Scenario_

Fig. 4 shows the output of the tracking analysis from one alarm encounter using video (Fig. 4a) and LiDAR-based trajectories (Fig. 4b), for a LEMURS and source carrier speed of 10 mph and SLAM used to create pose estimates of LEMURS in a world-fixed frame. In Fig. 4a (top pane), the progression of the alarm encounter as the source carrier (Track 4 - white vehicle) drives past LEMURS using video data is shown, along with the trajectory for each object (bottom pane). Fig. 4b shows the same alarm encounter and moment in time as the image in the top pane of Fig. 4a but using LiDAR trajectories. The color-coding of each object (but not the labels) has been kept consistent with Fig. 4a. For both video and LiDAR, the source carrier and surrounding objects are continuously tracked throughout the alarm encounter. 
One can see that the inherent depth information extracted from LiDAR enables more reliable position estimation compared to the inferred distance estimation for video, which adds noise to the trajectories. In the bottom pane of Fig. 4a, both Track 6 and Track 2 were stationary vehicles throughout the encounter, but due to frame-to-frame uncertainty in the distance estimate, the Kalman filter pose estimation for both objects varies; whereas the variability of Track 11 and Track 13 (Track 2 and Track 3 from the bottom pane of Fig. 4a) in Fig. 4b is more concentrated around each object's respective position. Additionally, in Fig. 4a (bottom pane), the detector orientation of LEMURS is seen. In this alarm encounter, Detectors 4-6 are the detectors closest to the source carrier during the time of closest approach. The results of the alarm encounter using video and LiDAR-based trajectories are shown in Fig. 5a and Fig. 5b, respectively. The orientation of the detector panes in both figures matches the detector orientation of LEMURS shown in the bottom pane of Fig. 4. In Fig. 5a, the 1/r\({}^{2}\) profile in the count rate from LEMURS and the source carrier driving past each other is seen, along with the different best-fit models for all the objects present during the alarm encounter. The best-fit model for Track 4 (white vehicle) clearly follows the radiological data, and this is the correct attribution, as the white vehicle was responsible for the radiological alarm. Track 5 has some correlation with Detectors 1-3, but given the angular response of the detector array, the trajectory does not follow the count-rate data in Detectors 4-6 and can be excluded. The remaining trajectories can also all be excluded from the analysis. A similar result is seen using LiDAR data. All of the trajectories can be excluded from the analysis except for Track 17, which correlates with the radiological data and is the correct attribution. Additionally, for video and LiDAR-based trajectories, the calculated time offset between the estimated Kalman filter position and source location was 0.2 seconds (1.0 m) or 0.25 seconds (1.3 m), respectively, which corresponds to the trunk of the source carrier's vehicle. Thus, both the source-carrying object and the source location within the object were correctly identified using our source-object attribution analysis with either video or LiDAR data.

Fig. 3: A top-down view of the mock urban environment intersection along with the objects present and their direction of motion. The orange right-facing triangles, blue triangles, red star, or green square indicate vehicles, pedestrians, the source carrier, or LEMURS, respectively. The arrows indicate the direction of motion for each object during the alarm encounters. If an arrow is not associated with an object, the object was stationary.

With LiDAR, there are more trajectories than objects present in the scene. The LiDAR detection CNN has a higher number of false positive detections (i.e., detecting an object when in fact no object is present at the given location) compared to Yolov4-tiny. These artifacts could be limited by increasing the minimum confidence score needed to track an object. However, increasing the minimum confidence score could decrease tracking performance for certain objects. The false positive detections would need to be discarded by an operator in real time. Additionally, sensor fusion, such as combining LiDAR and video, could be used to discard spurious detections, but sensor fusion is outside the scope of this paper.
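The frame-to-frame variability discussed above is what the Kalman filter smooths; a minimal constant-velocity Kalman filter over 2D position measurements is sketched below for illustration. The noise covariances `q` and `r` are placeholder values, not the tracker settings used in this work.

```python
import numpy as np

def kalman_cv_track(zs, dt=0.1, q=0.5, r=1.0):
    """Constant-velocity Kalman filter over 2D position measurements zs.
    State: [x, y, vx, vy]. Returns the filtered positions."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # measure position only
    Q = q * np.eye(4)                           # process noise
    R = r * np.eye(2)                           # measurement noise
    x = np.array([zs[0][0], zs[0][1], 0.0, 0.0])
    P = np.eye(4)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```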
Fig. 4: Example of object tracking in a mock urban environment using video (a) and LiDAR (b) data with the LEMURS system driving 10 mph past a vehicle carrying a 1.87 mCi \({}^{137}\)Cs source inside of 2 cm Pb shielding and using SLAM to create pose estimates of LEMURS in a world-fixed frame. (a) shows the output of the tracker analysis for one camera image (top) and the trajectories for each object along with the orientation of LEMURS indicated by the red (x-axis) and green (y-axis) axes (bottom). Also, in the bottom pane, the detector orientations are shown. In (b), the same alarm encounter as (a) using LiDAR point clouds is shown. The bounding box colors of the objects (but not the labels) are consistent with (a). In (b), the trajectory of each object to that point is shown. The white grid lines represent \(1\,\mathrm{m}^{2}\), and the orientation of the two LiDARs and detector array on LEMURS are indicated by the red (x-axis), green (y-axis), and blue (z-axis) axes.

Fig. 5: The result of the alarm encounter depicted in Fig. 4 using source-object attribution with video (a) and LiDAR-based trajectories (b) in a mock urban environment. In (a) and (b), the results of the alarm encounter for video and LiDAR are shown, respectively. The arrangement of the count-rate data from each detector matches the arrangement of the detectors in the LEMURS array. The diamond indicates the best-fit model was better described by a background-only model, and the dagger indicates more than 95% of the trajectory was outside of the radiological alarm. Also, the dashed lines in (a) and (b) correspond with the moment in time depicted in Fig. 4a (top pane) and Fig. 4b, respectively.

In Fig. 6, all of the alarm encounters at 10 mph are shown using SLAM for tracking. For a given alarm encounter, the exclusion metric for each object present along with the object's label is displayed. For example, in Fig. 6a, Alarm Encounter 2 has 10 objects present during the encounter. Five of the objects have lower exclusion metrics than the source carrier; however, these objects either have best-fit models where BIC preferred a background-only model, or more than 95% of the track is outside of the radiological alarm, so these tracks can be rejected from the analysis. Thus, the source carrier has the lowest exclusion metric among relevant best-fit models. Across all of the trials, the source is correctly attributed to the source carrier in 18 out of 19 trials (Fig. 6a) using video, and for LiDAR-based trajectories (Fig. 6b), the source carrier is correctly identified in 19 out of 19 trials. The effective attribution for both video and LiDAR throughout all the alarm encounters is enabled by effective tracking of objects. In the one trial where the source carrier did not have the lowest exclusion metric for video, an obvious correlation between the best-fit model and radiological data existed. An operator monitoring in real time would be able to correctly attribute the radiological alarm to the source carrier. Additionally, for both video and LiDAR, the average time offset applied to the estimated source position was 0.35 seconds (1.5 m), locating the source to the rear of the source carrier's vehicle. These results demonstrate how the source-object attribution analysis can bring situational awareness to a mobile detector system. The results of tracking using pose estimates produced with the INS for all the alarm encounters for video and LiDAR are shown in Fig. 6c and Fig. 6d, respectively.
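The background-only rejection above rests on a BIC comparison between competing models; a minimal sketch is given below, assuming the maximized log-likelihoods of the background-only and source-plus-background fits (`logL0`, `logL1`) have already been computed for a track. The variable names are illustrative.

```python
import numpy as np

def bic(log_likelihood, n_params, n_data):
    """Bayesian information criterion; lower values are preferred."""
    return n_params * np.log(n_data) - 2.0 * log_likelihood

def prefers_background_only(logL0, logL1, k0, k1, n_data):
    """True if BIC prefers the background-only model (k0 parameters)
    over source-plus-background (k1 parameters); such tracks are
    rejected from the attribution analysis."""
    return bic(logL0, k0, n_data) <= bic(logL1, k1, n_data)
```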
The results are similar to tracking and performing source-object attribution using SLAM. In this case, the source carrier has the lowest exclusion metric in 19 out of 19 trials using both LiDAR and video-based trajectories. Also, similar to the SLAM results, the source was localized to the trunk using both video and LiDAR. The results from this analysis do not match the hypothesis that INS should produce degraded pose estimates in an urban environment, which would adversely impact the source-object attribution analysis. In a typical urban environment, buildings will reflect and block satellite signals, causing signal interference that reduces the position accuracy of the pose estimate. With navigational systems, this loss of position accuracy from obstructions is expressed as dilution of precision (DOP). The fewer satellites available for the pose estimate, the higher the DOP value. In the mock urban environment considered here, the vertical DOP (VDOP) and horizontal DOP (HDOP) values range from 1 to 20, where values ranging from 5 to 20 indicate moderate to low confidence levels in the pose estimates due to high environmental interference. The larger DOP values are due to a high number of tall trees in the mock environment that occlude the satellite signal. While the INS does filter between GPS coordinates at \(\sim\)1 Hz to improve GPS accuracy and reduce jitter in the pose estimates, in these environments the position accuracy is still reduced. When the INS pose information for the alarm encounters is overlaid onto a map of the area, there is an obvious drift over time of the pose estimates (i.e., the pose estimates for LEMURS do not correspond with the road). However, for a given alarm encounter, all the objects in the scene share this drift, since the objects are transformed into the world-fixed frame. Thus, in the 10 mph scenario, tracking and attribution can still be performed effectively and INS performance is similar to using SLAM, but we expect performance with an INS to worsen in an environment subject to degraded GPS performance (e.g., an urban canyon). Overall, using our source-object attribution analysis in these alarm encounters, an apparent connection between the radioactive source origin and the detected signal existed, and we were able both to correctly localize the source to the object responsible for the radiological alarm and to correctly localize the position of the source within the object. This was demonstrated using either video or LiDAR data and either INS or SLAM to generate pose estimates of LEMURS in a world-fixed frame. In all of these alarm encounters, an operator would be able to quickly and effectively perform alarm adjudication.

### _20 mph Scenario_

The same scenario presented in Section III-A was repeated for a LEMURS and source carrier speed of 20 mph. These trials were performed to better understand tracking and attribution performance at higher vehicle speeds that are more relevant for urban environments. In this case, the scenario was repeated 23 times. The results from all the alarm encounters for the 20 mph scenario using SLAM are shown in Fig. 7a and Fig. 7b for video and LiDAR, respectively. Using video (LiDAR), the source carrier was correctly attributed to the radiological data in 20 (23) out of 23 trials, while the remaining trajectories could be rejected. For both video and LiDAR, the average source offset was 0.15 seconds (1.3 m), which correctly located the source in the rear of the source carrier's vehicle.
Using video, in one of the trials, the source carrier was inconsistently detected throughout the alarm encounter and no attribution between the object responsible for the radiological alarm and the count-rate data was made. For the remaining two trials with video, inconsistent tracking of non-source-carrying objects during the period of time in which the radiological alarm was triggered resulted in lower exclusion metrics for those objects than for the source carrier; nonetheless, a clear correlation existed between the count-rate data and the best-fit model for the source carrier, and an operator monitoring in real time would be able to identify the object responsible for the radiological data. The results from these encounters demonstrate the flexibility of a Kalman filter and the advantage of using MVN tracking. In these alarm encounters, objects were only detected and tracked for short periods of time; however, for both video and LiDAR, all the objects were continuously tracked throughout a majority of the transient alarm encounter, which enabled more effective alarm attribution in these cases. Tracking using only INS data, the source carrier has the lowest exclusion metric in 10 out of 23 alarm encounters with video-based trajectories (Fig. 7c) and 22 out of 23 alarm encounters with LiDAR-based trajectories (Fig. 7d). With video, there were 13 alarm encounters where alarm attribution could not be performed due to inconsistent tracking. This result is in contrast with the video SLAM results from Fig. 7a, where the source carrier had the lowest exclusion metric in 20 out of 23 alarm encounters. Since the uncertainty in a detected object's distance is consistent using either INS or SLAM with video data, the discrepancy between the INS and SLAM results in the 20 mph scenario appears to be driven by increased noise in the INS pose information at greater speeds. This increased noise, combined with the inferred distance information, is sufficient to degrade tracking and attribution performance for video. The LiDAR results using INS are comparable to the SLAM results due to more consistent depth information.

## IV Improved Detection Sensitivity with Track-informed Optimized Integration Windows

With tracking information, optimal integration times to maximize SNR can be found using the formulation discussed in Section II-E. Furthermore, for a given alarm encounter, certain detectors will be closer to the source carrier and will experience higher SNR than the detectors farther from the source carrier. Thus, an optimal configuration of detectors should exist that maximizes SNR. The following two sections (Section IV-A and Section IV-B) investigate optimizing integration windows to maximize SNR using the experimental alarm encounters discussed in Section III-A and Section III-B. Optimal integration windows are found either by using the 6 NaI detectors independently or by summing the response of the 6 NaI detectors. In both cases, the position uncertainty of the object is accounted for with MCMC. Additionally, the analyses were only performed using the trajectories generated from tracking with SLAM.

### _10 mph Scenario_

The top-left and top-right images of Fig. 8 show the results from all of the alarm encounters when LEMURS was traveling 10 mph for both video and LiDAR, respectively. For example, in Fig.
8 (top-left), Alarm Encounter 16 shows the results of applying a spectroscopic analysis using either the track-informed optimal configuration of detectors or the track-informed optimal integration window calculated from the summed response of the detector array, compared to different fixed integration windows. In this alarm encounter, the optimal integration window using the 6 NaI detectors independently produced the highest relative anomaly value compared to both the optimal integration window found using the summed response of the 6 NaI detectors and the different fixed integration windows; whereas the optimal integration window found using the summed response of the 6 NaI detectors has a larger relative anomaly value compared to the 1, 3, and 4 second fixed integration windows. Altogether, with video trajectories (Fig. 8 (top-left)), using the 6 NaI detectors individually (summed response) and MCMC to produce the track-informed optimal integration window yielded an optimal window in 8 (2) of the 19 trials. For LiDAR trajectories (Fig. 8 (top-right)), the results show that the track-informed integration window is the optimal window in 6 (1) of the 19 trials for the optimal configuration of detectors (summed response). These results from this scenario for video and LiDAR align with the hypothesis that for a given alarm encounter there exists an optimal configuration of detectors that will maximize SNR compared to summing the response of all the detectors together. In addition, the findings suggest the optimal configuration of detectors can improve detection sensitivity on a mobile system relative to fixed integration windows using video and LiDAR.

Fig. 6: Source-object attribution for all alarm encounters when LEMURS drove 10 mph past a vehicle traveling 10 mph and carrying a 1.87 mCi \({}^{137}\)Cs source inside of 2 cm Pb shielding in a mock urban environment using video (a, c) and LiDAR (b, d). In both (a) and (b), SLAM was used to generate a consistent reference frame, and in (c) and (d), INS data was used. In (a) – (d), the faded (enlarged, outlined) points indicate best-fit models that were better described by a background-only (source plus background) model. The black x's indicate tracks that are 95% or more outside of the radiological alarm.

In the alarm encounters where the track-informed integration window did not produce the maximum anomaly value, it is thought that the anisotropic attenuation from both LEMURS and the source carrier is sufficient to cause the assumptions of the analysis to fail. For example, Alarm Encounter 17 in the top-left image of Fig. 8, which corresponds with the alarm encounter displayed in Fig. 5a, shows that a fixed integration window of 2 seconds produces the maximum relative anomaly value. In Fig. 5a, the effect of anisotropic shielding from both LEMURS and the source carrier is seen in the count-rate data at the beginning and end of the radiological alarm. In this case, the optimal integration windows found using either the 6 NaI detectors independently or the summed response of the 6 NaI detectors both capture the period of time from about 16 to 17 seconds during the time of closest approach; however, both of these optimal windows fail to capture the period of time from about 17 to 17.5 seconds, which still has an elevated count rate compared to the background rate due to anisotropic shielding (figure not shown).
A 2 second fixed integration window is better able to capture this effect, which leads to a larger relative anomaly value. With LiDAR-based trajectories, since orientation information is available for tracked objects, certain assumptions about the amount of anisotropic attenuation present from the objects were made to mitigate this effect. However, the effects of anisotropic attenuation are not accounted for in the model generation for video trajectories, yet video track-informed integration windows produced larger anomaly values in more alarm encounters compared to LiDAR-based trajectories. This is mainly due to the higher uncertainty of video-based trajectories compared to LiDAR-based trajectories, which gives MCMC a wider parameter space to explore when determining the optimal integration window. These results imply that the anisotropic attenuation modeling does not capture all of the anisotropic attenuation present from tracked objects during the alarm encounter. Higher-fidelity modeling of the vehicle, along with the vehicle intrinsics, has the potential to improve the attenuation modeling. However, there is large uncertainty in the source location within the vehicle, and the make and model of vehicles vary greatly. Both of these factors will affect the attenuation imposed by occluding material(s) in the vehicle as LEMURS and a source carrier pass by each other. In addition to anisotropic attenuation modeling, since neither the video nor the LiDAR track-informed integration windows produced the maximum anomaly value in a majority of the alarm encounters, these findings also suggest that the generated directional response matrix (\(\epsilon\)) does not capture all of the anisotropic attenuation from LEMURS itself, which could be improved with a more detailed model of LEMURS. Overall, with a better understanding of how to account for different uncertainties associated with anisotropic attenuation from tracked objects, as well as a more detailed LEMURS model, detection sensitivity should be improved, but a detailed characterization of both anisotropic attenuation from tracked objects and LEMURS is beyond the scope of this publication.

Fig. 7: Results from performing source-object attribution on all radiological alarm encounters when both LEMURS and the source carrier were traveling 20 mph relative to each other in the mock urban environment using video (a, c) and LiDAR (b, d). SLAM [INS] was used in (a) and (b) [(c) and (d)] to generate pose estimates in a world-fixed frame. In (a) – (d), the faded (enlarged, outlined) points indicate best-fit models that were better described by a background-only (source plus background) model. The green circles indicate tracks that are 95% or more outside of the radiological alarm.

### _20 mph Scenario_

The spectroscopic analysis was also applied to the alarm encounters where both the source carrier and LEMURS were traveling at 20 mph relative to each other to further explore the optimal integration window analysis in more transient alarm encounters. In the bottom-left and bottom-right images of Fig. 8, the results of applying this analysis to all 23 alarm encounters for video and LiDAR are shown, respectively. In Fig. 8 (bottom-left), the track-informed integration window produces the maximum anomaly value in 11 (10) out of 23 trials compared to different fixed integration windows using an optimal configuration of detectors (using the summed response of the detector array).
There was inconsistent tracking in Alarm Encounters 1 and 3, causing degraded detection performance. The LiDAR-based track-informed integration window produces the largest anomaly value in 5 (4) out of 23 alarm encounters using an optimal configuration of detectors (using the summed response of the detector array), as shown in Fig. 8 (bottom-right). In this scenario, the time of closest approach between LEMURS and the source carrier is shorter, and we are using an integration time of 0.25 seconds, which results in a 1/r\({}^{2}\) profile of the radiological data that is less distributed in time compared to the 10 mph case. This reduces the contribution of background when a spectroscopic analysis is performed. As a result, the anomaly values using the summed response of the detector array are more comparable to the optimal configuration of detectors. However, the optimal configuration of detectors still produces a larger anomaly value compared to the summed response in 15 (18) of the 23 alarm encounters in Fig. 8 (bottom-left) (Fig. 8 (bottom-right)). Thus, the results in this faster scenario further highlight the advantage of identifying an optimal configuration of detectors for an alarm encounter over summing the detector array to improve detection sensitivity. Even though the track-informed integration windows did not always produce the maximum anomaly value, the results from this analysis demonstrate that our track-informed integration approach can better inform integration times by adapting to the dynamics of the scene and the relative motion of objects, for both video and LiDAR, to improve detection sensitivity. In the top-left and top-right images in Fig. 8, the maximum anomaly value in a majority of the alarm encounters was generated using a 2 second fixed integration window; whereas in the 20 mph scenario, a 2 second window only produced the maximum anomaly value in 1 alarm encounter. In this scenario, a 1 second fixed integration window generated the maximum anomaly value in the majority of cases. Multiple fixed integration windows can be run in tandem in an effort to maximize detection sensitivity. However, it is not possible to account for the countless changes that can occur in a scene that could impact detection sensitivity with a fixed integration window. For example, if a source-carrying vehicle is stopped at a light near LEMURS for more than 10 seconds, a 1 or 2 second fixed integration window will give lower anomaly values compared to a longer integration window. This is the advantage of using the track-informed integration window, which adapts to the scene and the motion of objects.

## V Conclusion

On mobile detection systems, alarm encounters are transient and localization has to be performed quickly and efficiently. Source-object attribution enables new capabilities on a mobile platform by providing automatic associations between objects in the scene and the radiological data. This work demonstrates that situational awareness can be improved in a mock urban environment on a mobile detection system in the presence of dynamic sources using SLAM and video or LiDAR-based trajectories. The findings show that video and LiDAR offer similar tracking performance, which enables effective rejection of tracks that are inconsistent with the radiological data. This performance is seen for various LEMURS and source carrier vehicle speeds. This work also explored performing the source-object attribution analysis using a navigational system that relies on GPS.
The findings from this study demonstrate that if there is a strong GPS signal and the LEMURS vehicle speed is low (\(\sim\)10 mph), then using INS pose estimates to perform tracking and attribution offers similar capabilities to using SLAM with video and LiDAR-based trajectories. With faster LEMURS speeds (\(\sim\)20 mph), SLAM might be needed instead of an INS to generate more accurate pose information in situations where the distance information of the sensor is not directly known and must be inferred. Additionally, the findings from this work suggest that an optimal configuration of detectors does exist to improve the detection sensitivity of a detector array compared to either summing the response of the detector array or different fixed integration windows. The track-informed integration windows from video and LiDAR trajectories are able to adapt to the dynamics of the scene and improve the anomaly value, a proxy for detection sensitivity, in two different transient alarm encounter scenarios. Overall, the conclusions from this analysis demonstrate that source-object attribution does improve situational awareness on a mobile platform and suggest that detection sensitivity can be improved as well.

Fig. 8: Maximum relative anomaly values for the 10 mph scenarios from Fig. 6a (top-left) and Fig. 6b (top-right) using either track-optimized time windows or fixed integration windows for both video and LiDAR, respectively. The maximum relative anomaly values for the 20 mph scenarios from Fig. 7a and Fig. 7b are seen in the bottom-left and bottom-right images for video and LiDAR, respectively. In all of the images, the purple bars (red bars) indicate track-optimized windows using the 6 NaI detectors independently (summing the response of the detectors). The time duration for the summed-response optimal window is provided in the parentheses. The grey, blue, black, and orange bars indicate a fixed integration window of 1.0, 2.0, 3.0, or 4.0 seconds, respectively. The blue (green) diamond, dagger, triangle, or filled-in circles indicate the encounters where the optimal integration window yielded a higher anomaly value than a 1, 2, 3, or 4 second integration window, respectively, using the 6 NaI detectors independently (summing the response of the detectors).

Future work is needed to fully explore using an INS in urban environments to produce pose estimates in a world-fixed frame and the effect this would have on tracking and attribution. While the pose estimates generated using the INS did not impact attribution performance using LiDAR, it is thought that in a true urban environment with buildings there will be potential for degraded tracking and attribution performance. Urban environments are full of buildings, which will interfere with satellite signals more than the trees in the mock urban environment considered in this work. Additionally, since source-object attribution enables a new paradigm for source localization, future work will explore how source-object attribution might be considered in the design of detector arrays, or most effectively generalized to be robust to unknown detector configurations. Finally, future work will investigate scenarios with different source carriers, such as pedestrians, and scenarios where attenuating objects are present between the source object and detectors.

## VI Acknowledgements

The authors would like to acknowledge Adam Glick, Ivan Cho, Raymon Cheng, Kyle Bilton, Mark Bandstra, and Victor Negut for contributing to this work.
2309.07854
A Differentiable Model of the Evolution of Dark Matter Halo Concentration
We introduce a new model of the evolution of the concentration of dark matter halos, c(t). For individual halos, our model approximates c(t) as a power law with a time-dependent index, such that at early times, concentration has a nearly constant value of c=3-4, and as cosmic time progresses, c(t) smoothly increases. Using large samples of halo merger trees taken from the Bolshoi-P and MDPL2 cosmological simulations, we demonstrate that our 3-parameter model can approximate the evolution of the concentration of individual halos with a typical accuracy of 0.1 dex for t>2 Gyr for all Bolshoi-P and MDPL2 halos of present-day mass greater than 10^11.5 Msun. We additionally present a new model of the evolution of the concentration of halo populations, which we show faithfully reproduces both average concentration growth, as well as the diversity of smooth trajectories of c(t), including capturing correlations with halo mass and halo assembly history. Our publicly available source code, Diffprof, can be used to generate Monte Carlo realizations of the concentration histories of cosmologically representative halo populations; Diffprof is differentiable due to its implementation in the JAX autodiff library, which facilitates the incorporation of our model into existing analytical halo model frameworks.
Dash Stevanovich, Andrew P. Hearin, Daisuke Nagai
2023-09-14T16:59:33Z
http://arxiv.org/abs/2309.07854v2
# A Differentiable Model of the Evolution of Dark Matter Halo Concentration

###### Abstract

We introduce a new model of the evolution of the concentration of dark matter halos, \(\mathrm{c}(t)\). For individual halos, our model approximates \(\mathrm{c}(t)\) as a power law with a time-dependent index, such that at early times, concentration has a nearly constant value of \(\mathrm{c}\approx 3-4\), and as cosmic time progresses, \(\mathrm{c}(t)\) smoothly increases. Using large samples of halo merger trees taken from the Bolshoi-P and MDPL2 cosmological simulations, we demonstrate that our 3-parameter model can approximate the evolution of the concentration of individual halos with a typical accuracy of 0.1 dex for \(t\gtrsim 2\,\mathrm{Gyr}\) for all Bolshoi-P and MDPL2 halos of present-day peak mass \(M_{0}\gtrsim 10^{11.5}M_{\odot}\). We additionally present a new model of the evolution of the concentration of halo populations, which we show faithfully reproduces both _average_ concentration growth, as well as the _diversity_ of smooth trajectories of \(\mathrm{c}(t)\), including capturing correlations with halo mass and halo assembly history. Our publicly available source code, Diffprof, can be used to generate Monte Carlo realizations of the concentration histories of cosmologically representative halo populations; Diffprof is differentiable due to its implementation in the JAX autodiff library, which facilitates the incorporation of our model into existing analytical halo model frameworks.

keywords: Cosmology: large-scale structure of Universe; software: simulations

## 1 Introduction

In the standard model of cosmology, the fundamental building blocks of structure formation are gravitationally self-bound halos of cold dark matter (CDM). These halos first form through the gravitational collapse of overdense patches of the initial density field, and then build up their mass over time via a combination of mergers and smooth accretion (see Mo et al., 2010, for a modern review). Observed galaxies are embedded in the centers of halos (White and Rees, 1978; Blumenthal et al., 1984), and so the evolution of the internal structure of dark matter halos plays an essential role in our theoretical understanding of the CDM framework for cosmological structure formation. The radial density profile of dark matter halos is well-described by a double power law known as the NFW profile (Navarro et al., 1997), which is defined by an outer boundary, \(R_{\mathrm{halo}}\), and by a single parameter encoding the internal structure, \(c\), the concentration of the halo, which exhibits a well-studied dependence upon total mass (e.g., Avila-Reese et al., 1999; Dolag et al., 2004; Diemer and Kravtsov, 2015; Child et al., 2018). It has been known for many years that the concentration of a halo is fundamentally linked to its mass assembly history (Bullock et al., 2001; Wechsler et al., 2002; van den Bosch, 2002; Zhao et al., 2003). These early works established a relatively simple picture of dark matter halo growth that is comprised of two distinct phases. At early times, halo mass increases rapidly while concentration remains nearly constant with a value of \(c\approx 3-4\). At later times, the rate of mass growth slows down, and halos tend to pile up mass onto their outskirts: during the slow-accretion phase, the central density of the halo remains relatively constant, while the concentration grows due to the increase of the halo boundary.
This basic picture of halo structure growth has recently been confirmed in a model-independent fashion through an interpretable deep learning framework (Lucie-Smith et al., 2023). Even though mergers trigger large transient fluctuations in concentration (Wang et al., 2020; Lucie-Smith et al., 2022), the internal structure of a halo has a remarkably durable memory, and even one-to-one mergers ultimately leave the central density of the halo undisturbed (Kazantzidis et al., 2006; Vass et al., 2009; Drakos et al., 2019). The connection between concentration and halo mass assembly plays an especially important role in efforts to derive cosmological constraints from astronomical measurements of massive halos. In the era of multi-wavelength cluster surveys, from microwave (Bocquet et al., 2019; Aiola et al., 2020), optical (Abbott et al., 2020), to X-ray (Vikhlinin et al., 2009) bands, one must rely upon an observational proxy that scales with halo mass, generically referred to as the "mass-observable" relation (see Pratt et al., 2019, for a recent review). In order to realize the statistical power of upcoming surveys, each effort requires a detailed characterization of the associated mass-observable relation. A successful model for the mass-observable relation needs to capture not only average trends, but also scatter that may exhibit physically important residual correlations (Stanek et al., 2010). There are many indications that variations in the assembly history of massive halos are a major contributor to scatter in the ICM profiles (Lau et al., 2015), cluster shapes (Chen et al., 2019; Machado Poletti Valle et al., 2021), the Sunyaev-Zel'dovich (SZ) Effect (Yu et al., 2015; Green et al., 2020), and hydrostatic mass bias (Nelson et al., 2012, 2014; Shi et al., 2015, 2016). On theoretical grounds, hydrodynamical simulations generically predict that scatter between different mass-observable relations should be correlated due to mutual covariance with cluster assembly (Wu et al., 2015; Farahi et al., 2019). There is also observational support for the notion that scatter in the mass-observable relation is driven by variations in internal structure and assembly history, such as a correlation between the scatter in the stellar mass of Brightest Cluster Galaxies (BCGs) and the halo mass and concentration of optical clusters (Zu et al., 2021; Huang et al., 2022). Mass assembly correlations may also be responsible for the reduction in scatter in optical estimations of cluster mass that can be achieved by leveraging the magnitude gap as an observational proxy for cluster formation history (Hearin et al., 2013; Farahi et al., 2020). Numerous authors have capitalized upon the simplicity of the connection between halo growth and internal structure to build highly effective models for the evolution of halo concentration. In Zhao et al. (2009), the authors developed a model in which the behavior of \(c(t)\) is determined by \(t_{4\%}\), the time the halo first reached 4% of its present-day mass. A remarkably successful simplification was first identified in Ludlow et al.
(2013), in which it was found that the mean density of a halo interior to its scale radius, \(\rho_{-2}\equiv\rho_{\rm{NFW}}(r_{-2})\), is directly proportional to \(\rho_{\rm{crit}}(t_{-2})\), the critical density of the Universe evaluated at the time \(t_{-2}\), where \(t_{-2}\) is defined by \(M_{\rm{halo}}(t_{-2})=M_{r<r_{-2}}(t_{0})\), so that \(t_{-2}\) is the time when the mass of the halo is equal to the amount of present-day mass enclosed within \(r_{-2}\). The simple picture suggested by this result is that halo concentration is roughly set by \(\rho_{\rm{crit}}\) at the formation time of the halo, \(t_{\rm{form}}\), from which it follows naturally that halo concentration and \(t_{\rm{form}}\) will be tightly correlated. This simplification was further leveraged in Correa et al. (2015), in which the authors used techniques from extended Press-Schechter theory (e.g., Zentner, 2007; Jiang & van den Bosch, 2014) to build a model that faithfully captures median concentration, \(\langle c|M_{\rm{halo}},t\rangle_{\rm{med}}\), over a broad range of mass, time, and cosmological parameters. In this paper, we develop a new model of the evolution of the concentration of individual and populations of dark matter halos. The basis of our approach is to parametrize _smooth trajectories_ of \(c(t)\), and then to characterize the statistical distribution of these trajectories across halo mass and redshift. We build upon the differentiable population modeling framework described in Hearin et al. (2021), which allows us to design our model to capture both the _average_ growth in halo concentration, as well as the _diversity_ observed in the concentration histories of simulated halos. Using merger trees from the Bolshoi-P and MDPL2 N-body simulations described in §2, we approximate \(c(t)\) for each simulated halo using the fitting function described in §3. In §4, we describe our model of how the probability distribution of \(c(t)\) depends upon \(M_{\rm{halo}}\) and \(t_{\rm{form}}\); throughout the paper, we will use the present-day _peak_ halo mass, \(M_{0}\), to characterize the mass-dependence of \(c(t)\), as this choice anticipates our future applications of incorporating our results into a larger forward modeling pipeline. Here we demonstrate that for all Bolshoi-P and MDPL2 halos of present-day peak mass \(M_{0}\gtrsim 10^{11.5}M_{\odot}\), our model can accurately capture the _average_ concentration growth, \(\langle c(t)|M_{0}\rangle\), the _variance_ in \(P(c(t)|M_{0})\), as well as correlations between \(c(t)\) and \(t_{\rm{form}}\). We discuss our results and outline future applications in §5, and conclude in §6 with a summary of our principal findings. Mathematical and computational details underlying our results can be found in the appendices.

## 2 Simulations and merger trees

To construct our model for concentration histories, we use two gravity-only simulations: the Bolshoi-Planck simulation (Bolshoi-P, Klypin et al., 2011), and the MultiDark Planck 2 simulation (MDPL2, Klypin et al., 2016). The Bolshoi-P simulation evolves \(2048^{3}\) dark matter particles with mass \(m_{\rm{p}}=1.55\times 10^{8}M_{\odot}\) in a periodic box of width 250 Mpc using the ART code (Kravtsov et al., 1997) with a force resolution of 1 kpc.
The MDPL2 simulation evolves \(3840^{3}\) dark matter particles with mass \(m_{\rm{p}}=1.51\times 10^{9}M_{\odot}\) in a box of width 1000 Mpc using the L-GADGET-2 code (Springel, 2005) with a force resolution of 5 kpc at low redshift and 13 kpc at high redshift. Cosmological parameters for these simulations are closely aligned with Planck Collaboration et al. (2014), and for both simulations we used the publicly available merger trees as identified by Rockstar and Consistent-Trees (Behroozi et al., 2013, 2014; Rodriguez-Puebla et al., 2016). In these catalogs, the concentration of each halo was computed by dividing halo particles into up to 50 radial bins of equal mass (subject to the constraint of having at least 15 particles per radial bin), directly fitting the resulting radial density profile with an NFW functional form (Navarro et al., 1997), and selecting the maximum-likelihood fit. Further details about these simulations can also be found at the CosmoSim database (Riebe et al., 2013). All results in the paper pertain to the concentration histories of present-day host halos, defined by Rockstar to be halos at the final snapshot with an upid column equal to -1. The boundary of dark matter halos in this catalog was defined according to the virial radius definition, \(R_{\rm{vir}}\) (Bryan & Norman, 1998), and for the value of halo mass we will use \(M_{\rm{peak}}\), the peak historical mass of the main progenitor branch of the halo, as this choice will facilitate our efforts in future work to unify our results with the Diffmah model (see §5 for further discussion). In particular, we will characterize the concentration histories of halo populations in terms of \(M_{0}\), the peak historical halo mass evaluated at redshift zero. For some of the results in the paper, we calculate cross-correlation functions between simulated halos and the density field, using random downsampling of particles from the appropriate snapshot. For Bolshoi-P we use a random downsampling of dark matter particles provided by halotools (Hearin et al., 2017). Throughout the paper, including the present section, values of mass and distance are quoted assuming \(h=1\). For example, when writing \(M_{0}=10^{12}M_{\odot}\), we suppress the \(M_{\odot}/h\) notation and write the units as \(M_{\odot}\).

## 3 Concentration histories of individual halos

The concentration of a dark matter halo is defined in terms of \(\rho_{\rm{NFW}}\), the NFW model of the radial density profile, \[\rho_{\rm{NFW}}(r)\equiv\frac{\rho_{s}}{(r/r_{\rm{s}})(1+r/r_{\rm{s}})^{2}}, \tag{1}\] where \(r\) is the physical distance from the halo center, \(r_{\rm{s}}\) is the scale radius, and \(\rho_{\rm{s}}\) is the normalization (Navarro et al., 1997). The concentration of the halo is defined as \(c\equiv R_{\rm{halo}}/r_{\rm{s}}\), where \(R_{\rm{halo}}\) is the halo boundary; as described in §2, the simulated halo catalogs used throughout this paper are defined with \(R_{\rm{halo}}=R_{\rm{vir}}\). Our model approximates \(c(t)\) as a power law with a time-dependent index, \(\beta(t)\), and we model \(\beta(t)\) using a sigmoid function, \(\mathcal{S}(x)\); we use sigmoid functions repeatedly throughout the paper, and so a general definition appears below: \[\mathcal{S}(x)=y_{\rm min}+\frac{y_{\rm max}-y_{\rm min}}{1+\exp[-k(x-x_{0})]}. \tag{3}\] In modeling \(\beta(t)\) with a sigmoid, our independent variable \(x=\log_{10}t\), so that the concentration of an individual dark matter halo evolves according to the following equation: \[\log_{10}c(t)=\log_{10}c_{\rm min}+\frac{\beta_{\rm late}-\log_{10}c_{\rm min }}{1+\exp[-k(\log_{10}t-\log_{10}\tau_{\rm c})]}.
\tag{4}\] In Equation 4, the variable \(\beta_{\rm late}\) controls the asymptotic behavior at late times; \(\tau_{\rm c}\) is the transition time from early- to late-time behavior; and \(k\) regulates the rate of the transition between the two regimes. For all results in the paper, we have held \(k\) fixed to a constant value of 4; furthermore, we only explore regions of parameter space for which \(\beta_{\rm late}>\log_{10}c_{\rm min}\), so that \(c(t)\) can only increase with time. Hereafter, we will refer to this constrained functional form as the Diffprof model for the evolution of the NFW concentration of individual halos.

Figure 1 shows three examples of the concentration histories of dark matter halos, each with a present-day peak mass as indicated by the panel's title. The solid blue curves show the concentration history of the main progenitor halo as taken directly from the simulated merger tree, and the dashed orange curves show the approximation of the fitting function defined in Eq. 4. We give a detailed description of how we find the best-fitting parameters to each halo in Appendix A.

Figure 1: **Example fits to concentration histories of individual halos.** The blue curve in each panel shows the concentration history of a dark matter halo taken directly from the merger trees of the Bolshoi-P simulation; the orange curve shows the approximate history based on the Diffprof model defined in Eq. 4. Example fits to halos of different masses are shown in each panel.

Figure 2: **Residuals of fits to individual halos.** Using a large collection of fits to the concentration histories of simulated halos, the y-axis shows the logarithmic difference between the simulated and best-fit values of \(c(t)\); results for halos of different mass are shown in different panels according to the indicated title. The blue curve shows the average residual difference, and the shaded band shows the \(1\sigma\) scatter. The figure demonstrates that the Diffprof model defined by Eq. 4 gives an unbiased approximation to halo concentration history for \(t\gtrsim 2\) Gyr, with a typical error of \(\sim 0.1\) dex.

The concentration histories displayed in Figure 1 are typical in several respects. At early times, halo concentration fluctuates rather wildly about a constant value of \(c\approx 3-4\), and eventually begins to steadily increase towards a present-day value of \(c\approx 5-15\), with lower-mass halos tending to have larger present-day values (see, e.g., Zhao et al., 2003, for early work noting these characteristic evolutionary trends). Although the evolution of the simulated halo becomes smoother as cosmic time progresses, fluctuations about the broad evolution remain present throughout the history of the halos. While the large, early-time fluctuations are at least in part a marker of numerical resolution effects, it is by now well-established that significant excursions from the smooth evolution are caused by minor mergers, and so should be expected even in well-resolved halos (for a recent analysis of this phenomenon, see Wang et al., 2020). Our differentiable approximation to concentration growth faithfully reproduces the smooth component of halo evolution, but misses the transient features associated with mergers. The remainder of the results in this section illustrates the fidelity with which our model is able to capture the concentration histories of the large statistical samples of the simulated halos described in §2.
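To make Eq. 4 concrete, a minimal JAX sketch of the model, together with a gradient-based fit to a single simulated history, is given below. This is an illustrative reimplementation rather than the Diffprof source code: the plain mean-square loss and gradient-descent loop stand in for the bounded (barred) variables and optimizer described in Appendix A.

```python
import jax
import jax.numpy as jnp

K = 4.0  # transition speed k, held fixed in the text

def log10_c(log10_t, log10_cmin, beta_late, log10_tau_c):
    """Eq. 4: sigmoid evolution of log10(c) as a function of log10(t)."""
    return log10_cmin + (beta_late - log10_cmin) / (
        1.0 + jnp.exp(-K * (log10_t - log10_tau_c)))

def loss(params, log10_t, log10_c_sim):
    """Mean-square logarithmic residual between model and simulation."""
    pred = log10_c(log10_t, *params)
    return jnp.mean((pred - log10_c_sim) ** 2)

grad_loss = jax.jit(jax.grad(loss))

def fit_history(log10_t, log10_c_sim, n_steps=2000, lr=0.05):
    """Illustrative unconstrained fit; the actual Diffprof fits enforce
    beta_late > log10(c_min) via the transformed variables of Appendix A."""
    params = jnp.array([jnp.log10(4.0), 1.0, jnp.log10(4.0)])  # initial guess
    for _ in range(n_steps):
        params = params - lr * grad_loss(params, log10_t, log10_c_sim)
    return params  # (log10_cmin, beta_late, log10_tau_c)
```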
Using the optimization techniques detailed in Appendix A, we have identified a set of best-fitting parameters for every halo in the samples described in §2. On the vertical axis of Figure 2, we show the logarithmic difference between the simulated and best-fitting concentration history of halos, with results for halos of different masses being shown in different panels. The solid blue curve in each panel shows the average residual difference, and the shaded band shows the \(1\sigma\) scatter. For most of cosmic time, the Diffprof model gives an unbiased fit to the concentration history of Bolshoi-P and MDPL2 halos, with a typical scatter of \(\sim 0.1\) dex. The model presents the same level of accuracy and precision for the full range of halo masses we consider, \(10^{11.5}M_{\odot}\leqslant M_{\rm peak}\leqslant 10^{15}M_{\odot}\).

Figure 2 shows that the concentration histories of Bolshoi-P and MDPL2 halos are well approximated by our model, albeit with considerable scatter about the smooth evolution due to transient fluctuations associated with minor mergers. In general, the large-scale clustering of halos exhibits a dependence upon NFW concentration at fixed mass, a phenomenon referred to as _secondary halo bias_ (Gao et al., 2004)1. Since the incidence of mergers is correlated with both concentration and with the large-scale density field, a natural question that arises is the extent to which our model can retain the correlation between halo concentration and halo clustering.

Footnote 1: Note that "halo assembly bias" is an alternative term for this phenomenon, although in the recent literature this has come to specifically refer to the case where the secondary halo property quantifies mass assembly history. In this paper, we will use the general term "secondary bias" to refer to the specific case of concentration as the secondary parameter; even though this term encompasses the more general case of any arbitrary secondary property, there should be no cause for confusion since in this work we are principally concerned with halo concentration. See, e.g., Salcedo et al. (2018); Mao et al. (2018), for further discussion.

We address this question in Figure 3. We begin by selecting Bolshoi-P halos at \(z=0\) in a narrow mass bin of \(M_{\rm peak}=10^{12}M_{\odot}\), as the phenomenon of concentration-based secondary bias is particularly strong for halos in this mass range. For the halos in this sample, we calculate \(\xi_{\rm hm}(r)\), the two-point cross-correlation between halos and dark matter particles at \(z=0\). In each panel of Figure 3, the red (blue) curve shows results for halos with above-average (below-average) concentration for their mass. On the vertical axis in the top panel, we directly plot \(r\xi_{\rm hm}(r)\) for the two samples of halos; on the vertical axis in the bottom panel, we plot the fractional difference between the clustering of large- and small-concentration halos relative to the clustering of _all_ halos in the mass bin; thus the _separation_ between the red and blue curves in the top panel is quantified by the vertical axis of the bottom panel. Solid curves show the case where the halo sample has been split in half according to the value of concentration at \(z=0\) taken directly from the simulated halo catalog; dashed curves show calculations where the halos are split according to the Diffprof fit.
On small scales, in the "1-halo term", where \(r\lesssim R_{\rm vir}\approx 0.2\) Mpc, the difference between the red and blue curves in Figure 3 is a reflection of the dependence of the NFW profile upon concentration. On very small scales, high-concentration halos have larger density relative to low-concentration halos of the same mass, and so the red curve lies above the blue; this difference reverses on spatial scales in the 1-halo term that are larger than the scale radius, \(r_{\rm s}\lesssim r\lesssim R_{\rm vir}\), where low-concentration halos are denser than high-concentration halos. Finally, the differences between red and blue curves in the 2-halo term on large scales reflect the phenomenon of secondary bias; for halos of this mass, large-scale clustering exhibits a positive correlation with concentration (Wechsler et al., 2006; Mansfield & Kravtsov, 2020), and presents a scale-dependent signature at \(r\approx 1-2\) Mpc (Sunayama et al., 2016). The dashed curves in Figure 3 are within \(\sim 5\%\) of the solid curves across the full range of spatial scales, indicating that residual errors in the Diffprof approximation to \(c(t)\) are largely uncorrelated with the large-scale density field. In Appendix A, we demonstrate that most of this difference is driven by extreme outliers in the concentration-mass relation that are likely to be "splashback halos".

Figure 3: **Concentration-dependence of halo clustering.** In this figure, we focus on a sample of host halos in the Bolshoi-P simulation with \(M_{\rm peak}=10^{12}M_{\odot}\), a mass range where the well-known phenomenon of _halo assembly bias_ is particularly strong. We divide the sample in half according to the median concentration value for this mass, and for each subsample we compute \(\xi_{\rm hm}(r)\), the cross-correlation between halos and dark matter particles. In each panel, solid curves show results for the case where concentration is taken directly from each halo's simulated merger tree; red curves show \(\xi_{\rm hm}(r)\) for high-concentration halos, and blue curves show results for low-concentration halos. In the _top panel_, we directly plot \(\xi_{\rm hm}(r)\), and in the _bottom panel_ we show the fractional difference between the red (blue) curve vs. \(\xi_{\rm hm}(r)\) for all halos in the sample. Dashed curves show results for the case where concentration is defined by the Diffprof approximation. The figure demonstrates that the correlation between halo concentration and the density field is retained when simulated concentration histories are approximated with Diffprof.

## 4 Concentration histories of halo populations

In the previous section, we presented a model for the evolution of the NFW concentration of individual dark matter halos. In our model, the evolutionary history of the concentration of a halo is described by three parameters, \(c_{\rm min}\), \(\beta_{\rm late}\), and \(\tau_{\rm c}\), with behavior defined by Eq. 4. In this section, we present a model for the probability distribution of \(c(t)\) for cosmological populations of halos. Our goal in this section is to develop a model that captures \(P(c(t)|M_{0},t_{\rm form})\), the PDF of concentration history across time, and its joint dependence upon \(M_{0}\) and halo assembly time, \(t_{\rm form}\).
Having shown in §3 that the concentration history of individual halos in Bolshoi-P and MDPL2 can be accurately approximated by our parametric fitting function, the approach we take here is to construct a model for the statistical distribution of \(c_{\rm min}\), \(\beta_{\rm late}\), and \(\tau_{\rm c}\). In §4.1, we will motivate the functional forms of our model by examining a few basic scaling relations between \(c(t)\), \(M_{0}\), and \(t_{\rm form}\), and in §4.2 we will present our model and assess the accuracy with which it can capture both the _average_ concentration history of halo populations, as well as the _diversity_ of evolutionary histories. In the main body of the paper, we will focus primarily on the formulation of our model and the principal demonstrations of its accuracy; a full account of our optimization procedure and attendant details can be found in the appendices together with our publicly available code.

### Basic trends of halo populations

In this section, we motivate the formulation of our model for the concentration histories of halo populations. We will characterize the \(t_{\rm form}\)-dependence of concentration history in terms of \(t_{\rm form}\equiv t_{50\%}\), the half-mass time at which the main progenitor mass of the halo first exceeds \(M_{0}/2\). Because the average value of \(t_{50\%}\) itself depends upon \(M_{0}\), we find it useful to quantify halo assembly history in terms of \(\mathrm{p}_{50\%}\equiv P(<t_{50\%}|M_{0})\), the mass-conditioned cumulative distribution of \(t_{50\%}\). Thus by definition, we have \(0<\mathrm{p}_{50\%}<1\), with smaller values of \(\mathrm{p}_{50\%}\) corresponding to halos with early formation times for their mass.

In Figure 4, we show how the average history of concentration depends upon \(M_{0}\) and \(\mathrm{p}_{50\%}\). Each panel shows results for a sample of halos of different mass, as indicated by the in-panel annotation. Red curves in each panel show results for \(\mathrm{p}_{50\%}\approx 0\), and blue curves show results for \(\mathrm{p}_{50\%}\approx 1\). In this figure and throughout the paper, we use the Bolshoi-P simulation for halos with \(M_{0}<10^{13.5}M_{\odot}\), and the MDPL2 simulation for results based on halos at higher mass.

Figure 4: **Average concentration history across time.** Each curve in the figure shows the average concentration history of a sample of halos. Results for halos of different present-day peak masses are shown in different panels. Different colored curves show results for halos with different \(\mathrm{p}_{50\%}\), which quantifies the _percentile_ of halo formation time, \(t_{50\%}\), conditioned upon \(M_{0}\); the reddest curves show results for \(\mathrm{p}_{50\%}=0.1\), the bluest curves show results for \(\mathrm{p}_{50\%}=0.9\), and halos with intermediate formation times for their mass are shown with the lighter curves in between the red and blue. For halos of all mass, \(c\approx 3-4\) at early times, with concentration increasing at later times. Relative to massive halos, the concentration of lower-mass halos is larger and exhibits a tighter connection to mass assembly history.

Figure 4 illustrates many of the key features of the statistical distribution of concentration histories that we wish to capture with our model for halo populations. For halos of all mass and assembly history, at high redshift we see that \(c(t)\approx 3-4\), and that \(c(t)\) tends to increase over time.
At late times, lower-mass halos have higher concentrations relative to higher-mass halos. For halos of all mass, earlier-forming halos have higher concentrations than later-forming ones. By comparing the top-left to the bottom-right panels, we can also see that the concentrations of lower-mass halos present a much stronger dependence upon halo assembly history relative to massive halos; this is sensible, since lower-mass halos are highly susceptible to environmental influence (e.g., Mansfield and Kravtsov, 2020), whereas cluster-mass halos dominate the tidal field of their environment. In Figure 5, we show how the scatter in halo concentration evolves across time, with results for halos of different mass shown with different colors as indicated in the legend. The figure shows that for halos of all mass, scatter in concentration tends to increase with time. This is consistent with the results displayed in Figure 4, as well as with the physical picture of concentration evolution reviewed in §1. At early times, halo assembly is firmly in the fast-accretion regime, and concentration takes on a nearly constant value of \(c\approx 3-4\); as cosmic time progresses, halo growth slows down, and concentration tends to increase in a manner that is subject to significant environmental influence, leading to larger variance at later times. This environment-correlated variance is more pronounced in lower-mass halos, which is reflected by the bluer curves lying above the redder curves in Figure 5.

Figures 4-5 display the basic trends that we wish to capture with our model for the concentration histories of halo populations, quantified by \(P(c(t)|M_{0},\mathrm{p}_{50\%})\). We will formulate our model in terms of \(P(c_{\mathrm{min}},\beta_{\mathrm{late}},\tau_{\mathrm{c}}|M_{0},\mathrm{p}_{50\%})\), the statistical distribution of the best-fitting parameters appearing in Eq. 4 that describe the concentration histories of _individual_ halos. In Figure 6, we motivate the formulation for our model of \(P(c_{\mathrm{min}},\beta_{\mathrm{late}},\tau_{\mathrm{c}}|M_{0},\mathrm{p}_{50\%})\) by illustrating two different cross-sections of the statistical distribution of \(c_{\mathrm{min}},\beta_{\mathrm{late}}\), and \(\tau_{\mathrm{c}}\), focusing on a sample of cluster-mass halos in the MDPL2 simulation. As described in Appendix A, when fitting individual concentration histories with our model, we first apply a nonlinear transformation to each of the variables appearing in Eq. 4, as this enforces physical constraints in the approximation to each halo's concentration history. Each of the transformations we use is monotonic, so that larger values of each variable, \(x\), correspond to larger values of \(\bar{x}\). The variables \(\bar{c}_{\mathrm{min}},\bar{\beta}_{\mathrm{late}}\), and \(\bar{\tau}_{\mathrm{c}}\) are the actual quantities that we programmatically vary when seeking best-fitting approximations to the concentration histories of individual halos, and these are the variables appearing on the axes in Figure 6. In the top panel of Figure 6, we show two histograms that display the behavior of \(P(\bar{c}_{\mathrm{min}}|M_{0},\mathrm{p}_{50\%})\). The red (blue) histogram shows results for halos with formation times in the bottom (top) quartile of \(\mathrm{p}_{50\%}\). From the top panel, we can see that \(P(\bar{c}_{\mathrm{min}}|M_{0},\mathrm{p}_{50\%})\) has an approximately Gaussian shape that is essentially independent of \(\mathrm{p}_{50\%}\).
In the bottom panel of Figure 6, we illustrate the two-dimensional distribution of \(P(\bar{\beta}_{\mathrm{late}},\bar{\tau}_{\mathrm{c}}|M_{0},\mathrm{p}_{50\%})\), where we have color-coded the scattered points according to \(\mathrm{p}_{50\%}\). Referring back to Equation 4, larger values of \(\bar{\beta}_{\mathrm{late}}\) correspond to halos with higher concentrations today, and so in the bottom panel we can see a manifestation of the result that at fixed halo mass, earlier-forming halos tend to have higher concentrations relative to later-forming halos of the same mass. In the next section, we will build a model for the statistical distributions shown in Figure 6, and show that our model is able to capture the key evolutionary trends illustrated in Figures 4-5.

Figure 5: **Scatter in concentration history across time.** Each curve in the figure shows the level of scatter in halo concentration across time. Different colored curves show results for halos of different present-day peak masses, using a color gradient that varies logarithmically from the low-mass end in blue to the high-mass end in red. For halos of all mass, scatter in concentration increases with time. At all cosmic times, lower-mass halos have a larger scatter in concentration relative to massive halos.

Figure 6: **Distribution of parameters for concentration evolution.** The figure shows the distribution of best-fitting parameters of our model for the evolution of concentration of individual halos, \(c_{\mathrm{min}},\beta_{\mathrm{late}},\) and \(\tau_{\mathrm{c}}\) (see Eq. 4), focusing on a sample of host halos in the MDPL2 simulation with \(M_{0}\approx 10^{14}M_{\odot}\). As described in the text, for each parameter, \(x\), the figure shows \(\bar{x}\), which has a nonlinear but monotonic relationship to the parameter. In the top panel, we show a histogram of \(\bar{c}_{\mathrm{min}}\), which has an approximately Gaussian shape with minimal dependence upon assembly time. The bottom panel illustrates the joint distribution of \(\bar{\beta}_{\mathrm{late}}\) and \(\bar{\tau}_{\mathrm{c}}\), which exhibits a strong correlation with halo assembly. The distributions shown here motivate the model for the concentrations of halo populations presented in §4.2.

### Halo population model

In this section, we present our model for the distribution of concentrations of populations of dark matter halos, \(P(c(t)|M_{0},\mathrm{p}_{50\%})\). As described in §4.1, we approach this problem through a parametrized description of the probability distribution of \(c_{\mathrm{min}},\beta_{\mathrm{late}}\), and \(\tau_{\mathrm{c}}\). Thus, the fundamental unit of our model is a smooth trajectory in halo concentration defined by Eq. 4, and our model characterizes the PDF of these trajectories, including correlations with halo mass and assembly. As mentioned in §4.1 and discussed in detail in Appendix A, we will define our model in terms of \(\bar{c}_{\mathrm{min}},\bar{\beta}_{\mathrm{late}}\), and \(\bar{\tau}_{\mathrm{c}}\), each of which is a transformed variable of the quantities appearing in Eq. 4. The purpose of these transformations is to enforce physical constraints on the fits to individual concentration histories, in particular that the Diffprof smooth approximations of \(c(t)\) are bounded and non-decreasing. We find little to no correlation between \(\bar{c}_{\mathrm{min}}\) and either \(M_{0}\) or \(\mathrm{p}_{50\%}\), and so we model \(\bar{c}_{\mathrm{min}}\) as a Gaussian with uncorrelated scatter.
We model \(\bar{\beta}_{\mathrm{late}}\) and \(\bar{\tau}_{\mathrm{c}}\) jointly using a two-dimensional Gaussian with a mean and covariance matrix that depend upon both \(M_{0}\) and \(\mathrm{p}_{50\%}\). We give a full description of our functional forms in Appendix C.

Figure 7 displays our model's ability to capture the distribution of concentration histories, \(P(c(t)|M_{0},\mathrm{p}_{50\%})\). The top row of panels shows \(\langle c(t)|M_{0},\mathrm{p}_{50\%}\rangle\), the evolution of the average concentration of halos as a function of mass and the rank-order percentile of \(t_{50\%}\). Each of the three columns of panels shows results for halos of different mass. Within each panel on the top row, there are four colored curves, each showing the average histories of halos with different values of \(\mathrm{p}_{50\%}=(0.2,0.4,0.6,0.8)\); redder (bluer) colors pertain to halos with earlier (later) assembly times for their mass. Solid curves show the concentration histories of simulated halos, and dashed curves show predictions for \(P(c(t)|M_{0},\mathrm{p}_{50\%})\) deriving from our parametrized approximation. Each curve showing simulated halo histories was computed using tens of thousands of halos. The DiffprofPop model describes the evolutionary trends of \(c(t)\) reasonably well for halos of all mass that we consider, including the \(\mathrm{p}_{50\%}\)-dependence. Our model is comparatively less accurate for very early-forming halos of lower mass, although it is unclear to what extent this shortcoming is due to splashback subhalos that should be excluded from the simulation data; we refer the reader to §5.2.4 for further discussion of this issue.

Figure 7: **Concentration histories of halo populations.** The figure shows \(P(c(t)|M_{0},\mathrm{p}_{50\%})\), the statistical distribution of \(c(t)\) for populations of halos, shown as a function of halo mass, \(M_{0}\), and the _percentile_ of halo formation time, \(\mathrm{p}_{50\%}\equiv P(<t_{50\%}|M_{0})\). Each of the three columns of panels shows results for halos of different present-day peak mass, \(M_{0}\). In each panel, solid curves show concentration histories of simulated halos, and dashed curves show the corresponding predictions of our best-fitting model. In the top row of panels, we show the evolution of the average concentration of halos as a function of mass and assembly time; the reddest curves show results for \(\mathrm{p}_{50\%}=0\), the bluest curves show results for \(\mathrm{p}_{50\%}=1\), and halos with median formation times are shown with the lighter curves in between. In the bottom row of panels, we average over the \(t_{50\%}\)-dependence and show how the mean concentration evolves with time, and additionally show the scatter in concentration at fixed mass, \(\sigma(c(t)|M_{0})\). The figure shows that our model for halo populations can capture both the _average_ evolution of concentration, as well as the _diversity_ of concentration trajectories across time, including dependence upon both halo mass and assembly history.

In the bottom row of panels of Figure 7, we plot \(\langle c(t)|M_{0}\rangle\), and additionally show the scatter in concentration at fixed mass, \(\sigma(c(t)|M_{0})\). For simulated halos, the \(M_{0}\)-averaged concentration is shown with the solid black curve, and the gray band shows the \(1\sigma\) scatter; the corresponding predictions of our model are shown with dashed curves.
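To make the structure of this population model concrete, the following is a minimal sketch of how trajectory parameters could be drawn for a halo sample of fixed mass and formation-time percentile. The functional dependence of the mean on \(M_{0}\) and \(\mathrm{p}_{50\%}\), and every numerical coefficient below, are illustrative placeholders rather than the calibrated forms of Appendix C.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_trajectory_params(log10_m0, p50, n_halos):
    # Schematic draw of the transformed Diffprof parameters. Mirroring the
    # model described above: c_min_bar is an independent Gaussian, while
    # (beta_late_bar, tau_c_bar) follow a correlated 2d Gaussian whose mean
    # shifts with M0 and p50.
    c_min_bar = rng.normal(0.0, 1.0, size=n_halos)

    # Placeholder trend: earlier-forming halos (small p50) receive larger
    # beta_late_bar, i.e. higher present-day concentration, with a stronger
    # p50-dependence at lower mass.
    strength = 0.5 * (15.0 - log10_m0)
    mean = np.array([strength * (0.5 - p50), 0.0])
    cov = np.array([[0.50, 0.15],
                    [0.15, 0.40]])  # off-diagonal term: the correlation
                                    # visible in the bottom panel of Fig. 6
    beta_tau = rng.multivariate_normal(mean, cov, size=n_halos)
    return c_min_bar, beta_tau[:, 0], beta_tau[:, 1]

# Draw 10,000 early-forming Milky Way-mass halos; mapping each draw back
# through Eq. 4 yields a Monte Carlo set of c(t) trajectories.
c_min_bar, beta_bar, tau_bar = sample_trajectory_params(12.0, 0.1, 10_000)
```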
Taken together, the panels of Figure 7 show that our model for the concentration of halo populations can capture both the _average_ evolution of concentration, as well as the _diversity_ of concentration trajectories across time, including dependence upon both halo mass and assembly history.

## 5 Discussion

We have presented a new model for the evolutionary history of dark matter halo concentration, \(c(t)\). In our model, the concentration of an individual dark matter halo evolves as a power-law function of cosmic time with a time-dependent index, \(c(t)=c_{\rm min}10^{\beta(t)}\), where \(\beta(t)\) is a sigmoid function that transitions halo concentration growth from constant, early-time behavior, to smoothly increasing late-time behavior (see Equations 2-4). The results in §3 demonstrate that for host halos identified with Rockstar in the Bolshoi-P and MDPL2 simulations over a wide range of present-day peak mass, \(10^{11.5}M_{\odot}\lesssim M_{0}\lesssim 10^{15}M_{\odot}\), our model accurately approximates the concentration of individual halos for \(t\gtrsim 2\) Gyr. We have additionally presented a new model for the concentration history of halo populations; our model captures both _average_ trends between concentration and halo mass and assembly, as well as the _diversity_ of concentration histories of simulated halos. In §5.1, we describe the broader context of our modeling approach, discussing how Diffprof is a specific example of a framework for differentiable modeling of populations of galaxies and halos, and how this framework relates to current approaches in the literature. We discuss the limitations of our model in §5.2, and future extensions in §5.3.

### Differentiable Population Modeling Approach

As described in detail in the appendices, our characterization of the diversity in halo concentration is based on the differentiable population modeling technique described in Hearin et al. (2021). When using this framework to model some galaxy or halo property, \(X\), the core modeling ingredient is a family of fitting functions, \(\mathcal{F}_{\theta_{X}}(t)\), parametrized by \(\theta_{X}\), which approximates the time-evolution \(X(t)\) of an _individual_ object. For the cosmological distribution of \(X\), a separate model parametrized by \(\Psi_{X}\) characterizes \(P_{\Psi_{X}}(\theta_{X})\), the statistical distribution of \(\theta_{X}\), which together with \(\mathcal{F}_{\theta_{X}}(t)\) provides a model for the probability distribution of \(X\) across time. For models formulated in this fashion, predictions for \(X(t)\) of individual objects are deterministically specified by \(\theta_{X}\), and predictions for populations of \(X(t)\) are deterministically specified by \(\Psi_{X}\); thus by implementing these two modeling ingredients in a library for automatic differentiation such as JAX (Bradbury et al., 2018), exact gradient information becomes available when optimizing the parameters. In Hearin et al.
(2021), this differentiable framework was used to model the evolution of the mass of individual dark matter halos; the quantity \(X(t)=M_{\rm peak}(t)\) was approximated with the Diffmah functional form, \(\mathcal{F}_{\theta_{\rm MAH}}(t)\), whose behavior is specified by three parameters: \(\theta_{\rm MAH}=\{\alpha_{\rm early},\alpha_{\rm late},\tau_{\rm h}\}\); the distribution of halo mass across time is described by DiffmahPop, a family of fitting functions parametrized by \(\Psi_{\rm MAH}\) that approximates the statistical distribution \(P(\alpha_{\rm early},\alpha_{\rm late},\tau_{\rm h}|M_{0})\). The parameters \(\Psi_{\rm MAH}\) were calibrated using merger trees in high-resolution cosmological N-body simulations, so that DiffmahPop accurately approximates \(P(M_{\rm peak}(t)|M_{0})\). In Alarcon et al. (2023), the same framework was used to model the star formation history (SFH) of individual galaxies. The quantity \(X(t)=M_{\star}(t)\) was approximated with the Diffstar functional form, whose behavior is specified by eight parameters, \(\theta_{\rm SFH}\); the Diffstar model was shown to accurately approximate the SFHs of individual galaxies in the UniverseMachine (Behroozi et al., 2019) and TNG (Pillepich et al., 2018) simulations. In ongoing work, we are building DiffstarPop: a parametrized model for \(P(\theta_{\rm SFH}|\theta_{\rm MAH})\), which will enable differentiable predictions for the statistical distribution of SFH across time for cosmological samples of galaxies.

In the present work, we study \(X(t)=c(t)\), the time-evolution of the NFW concentration of individual dark matter halos; we approximate \(c(t)\) using the Diffprof model defined in Eq. 4, which is controlled by three parameters: \(\theta_{\rm NFW}=\{c_{\rm min},\beta_{\rm late},\tau_{\rm c}\}\). We then characterize the cosmological abundance of halo concentration using DiffprofPop, a model for the probability distribution \(P(\theta_{\rm NFW})\) that includes dependence upon both halo mass and assembly history:

\[P(c(t)|M_{0},t_{\rm form})=P(c_{\rm min},\beta_{\rm late},\tau_{\rm c}|M_{0},t_{\rm form}), \tag{5}\]

where \(M_{0}\) is the present-day peak halo mass, and \(t_{\rm form}=t_{50\%}\).

In typical approaches to modeling halo concentration, diversity in \(c\) at fixed \(M_{\rm halo}\) arises in one of two ways. First, in models that directly parameterize the \(c\)-\(M_{\rm halo}\) relation (e.g., Bhattacharya et al., 2013; Dutton and Macciò, 2014), scatter at fixed mass and redshift is simply considered to be a random variable distributed as a log-normal; this class of models can accurately approximate the distribution of concentration as a function of mass and redshift, but is not equipped to characterize the time-evolution of the concentration of individual halos. There is a second class of models that instead parameterizes the evolution of halo concentration (e.g., Zhao et al., 2009; Ludlow et al., 2013; Correa et al., 2015); these models _do_ have the predictive power to describe the evolution of individual halo concentration (as well as variations in cosmology, see below), and in each of these models, scatter at fixed mass and redshift arises exclusively due to variations in some particular summary statistic of halo assembly history, such as \(t_{\rm form}=t_{4\%}\), or \(t_{\rm form}=t_{-2}\). Diversity in halo concentration at fixed mass arises in our DiffprofPop model through two separate channels.
First, there is a great diversity of trajectories in time by which halos attain the same final mass \(M_{0}\). In DiffprofPop, the statistical distribution \(P(\theta_{\rm NFW})\) presents a strong dependence upon halo formation time, \(t_{\rm form}\), which gives rise to the bulk of the scatter in the \(c\)-\(M_{\rm halo}\) relationship. Second, in contrast to the models discussed above, DiffprofPop captures a _distribution_ of halo concentrations at fixed values of halo mass and formation time.

The Diffprof model approximates the _smooth_ evolutionary history of \(c(t)\), but not transient fluctuations associated with the merging of substructure, and so our models do not capture the contribution of such fluctuations to the variance in \(c(t)\) at fixed halo mass. Transient merging events have been shown to significantly influence the value of concentration at a particular time (Wang et al., 2020; Lucie-Smith et al., 2022), and smooth approximations to the full assembly history only contain a portion of the available information (Mendoza et al., 2023); an additional modeling ingredient beyond what we introduce here would be required to capture these fluctuations. Encouragingly, in approximating \(c(t)\) of simulated halos with the smoothly-evolving Diffprof, the residual errors appear to be largely uncorrelated with the large-scale density field (see Figures 3 & 11). This indicates that even though our model is a simplification of the full merger tree, theoretical predictions for large-scale structure may not suffer from appreciable biases associated with using a smoothly-evolving approximation to \(c(t)\). We will explore this and other related topics in the future work outlined in §5.3, and in the next section, §5.2, we discuss other shortcomings of our model and caveats to our conclusions.

### Limitations and Caveats

#### 5.2.1 Cosmology dependence

As our paper is the first attempt at applying the differentiable population modeling technique to halo concentration, we have employed numerous simplifying assumptions that limit the predictive power of Diffprof relative to more mature modeling efforts. For example, the two simulations we used were both run with cosmological parameters similar to Planck Collaboration et al. (2014), whereas more mature modeling efforts have the capability to capture the cosmology-dependence of halo concentration (e.g., Zhao et al., 2009; Correa et al., 2015; Lopez-Cano et al., 2022). One approach to incorporating cosmology dependence would be to adapt techniques utilizing physically-motivated rescalings (e.g., Angulo & White, 2010; Renneby et al., 2018; Aricò et al., 2020), or alternatively, to use a large suite of simulations to develop an emulator-type approach (as in, for example, Heitmann et al., 2016; DeRose et al., 2019; Nishimichi et al., 2019). While the emulator approach would have stringent demands for merger trees and high mass-resolution, Gpc-scale high-resolution simulations with merger trees are becoming increasingly common (Ishiyama et al., 2021; Frontiere et al., 2022), as are suites of simulations that include merger trees (Contreras et al., 2020).

#### 5.2.2 Baryonic effects

A second simplifying assumption used in our analysis is that we have restricted attention to gravity-only N-body simulations, although it is well known that baryonic effects have a significant impact on the internal structure of dark matter halos (Gnedin et al., 2004; Kazantzidis et al., 2004; Jing et al., 2006; Rudd et al., 2008; Duffy et al., 2010).
There has been considerable recent progress in the development of large suites of simulations that span a wide range of baryonic effects (Villaescusa-Navarro et al., 2021), and efforts along these lines have already uncovered a wealth of information about the relationship between feedback and halo internal structure (Chua et al., 2021; Anbajagane et al., 2022). As baryonic effects have an important influence upon the cluster mass-observable relation (e.g., Nagai et al., 2007; Battaglia et al., 2012; Le Brun et al., 2014; Truong et al., 2018), it would be both interesting and well-motivated to extend the Diffprof model to incorporate dependence upon baryonic feedback.

#### 5.2.3 Formation time definition

We note that in principle, we could have instead elected to build a model that depends not just upon a particular definition of \(t_{\rm form}\), but instead with joint dependence upon all three Diffmah parameters, \(\alpha_{\rm early}\), \(\alpha_{\rm late}\), and \(\tau_{\rm h}\). Here, we have opted for a simpler approach that essentially uses a single measure of halo formation time as a one-dimensional approximation to the three-dimensional dependence of \(c(t)\) upon \(\alpha_{\rm early},\alpha_{\rm late}\), and \(\tau_{\rm h}\). Since the Diffmah model accurately characterizes the probability distribution \(P(M_{\rm peak}(t)|M_{0})\), the more complex three-dimensional approach has the potential to capture the dependence of \(c(t)\) upon the full distribution of halo assembly trajectories. On the other hand, in the simpler approach taken here, we have used the _formation time percentile_, \(\mathrm{p}_{50\%}\), as the one-dimensional variable approximating halo assembly, which has the potential advantage of being more robust to modifications from cosmology or baryonic feedback, as these and other effects may only alter the numerical value of \(t_{50\%}\) without changing dependencies upon rank-order. In future work, we will explore this three-dimensional generalization, as well as other methods for quantifying the mutual covariance between concentration evolution and halo assembly.

#### 5.2.4 Halo boundary definition

As pointed out in §5.1, the DiffprofPop model is comparatively less accurate in making predictions for the earliest-forming halos of mass \(M_{0}\lesssim 10^{12}\mathrm{M}_{\odot}\). As shown in Figure 11 of Appendix A, the residual errors of the Diffprof approximation appear to be mildly correlated with the halo-matter cross-correlation function, particularly on scales \(r\approx 1-2\) Mpc. These two outlier populations of halos are heavily overlapping, since the \(c-t_{\rm form}\) connection is fairly tight for halos of this mass. A sizable fraction of such halos are likely to be splashback subhalos that are only temporarily outside the virial radius of their host (Mansfield & Kravtsov, 2020; Diemer, 2021); thus the shortcoming of our model in this regime might be ameliorated by adopting simulation data based on a halo boundary definition that is more physically motivated than the virial radius (e.g., Diemer, 2022; Garcia et al., 2022); otherwise, an additional modeling ingredient would need to be introduced to capture this subpopulation. We leave such an investigation as a task for future work based on a higher-resolution suite of simulations.
#### 5.2.5 Numerical resolution

When calibrating our model for individual concentration histories in §3, as well as our model for the concentrations of halo populations in §4, we have focused on the mass range \(10^{11.5}M_{\odot}\leq M_{\rm peak}\leq 10^{15}M_{\odot}\), as this range of masses contains thousands of halos that are resolved by over 2000 particles in the N-body simulations we use. However, we note that even this restricted mass range pushes the resolution limits of our simulations. For example, Bolshoi-P halos at the low-mass end of this range have only \(\sim 100\) particles at the high-redshift end of our target cosmological epoch. Broad guidelines from previous work have shown that simulated halos must have at least 200 particles _within the scale radius_ in order to have a reasonably well-measured concentration (Klypin et al., 2001). But for science objectives such as precision cosmology that require strict convergence, a large body of previous work has shown that more methodical criteria than this are warranted. Methodical resolution studies typically estimate a convergence radius, \(r_{\rm conv}\), that specifies the radius at which the density profile is subject to numerical effects such as artificial relaxation; the value of \(r_{\rm conv}\) varies from halo to halo, and defines the minimum radius used when fitting each profile (see, e.g., Neto et al., 2007; Ludlow et al., 2019; Brown et al., 2022).

The basic goals of the present paper are more modest. Here we introduce a novel modeling framework for approximating the evolution of halo concentration, and we restrict our scope to exploring the scientific potential of our approach. As discussed in Appendix A, we have restricted our halo mass range, as well as the range of cosmic time to \(t>2\) Gyr, in an effort to have target halo data based on publicly available simulations that \(i\)) has broadly reasonable concentrations, and \(ii\)) spans a sufficiently wide range of mass and time to warrant development of a population model. However, a dedicated convergence study will be required before we are able to transform Diffprof into a precision tool. As indicated in Mansfield & Avestruz (2021), such a convergence study would require an analogous effort to what has been done to calibrate models of the halo mass function (e.g., Jenkins et al., 2001; Tinker et al., 2008; McClintock et al., 2019; Bocquet et al., 2020). This effort would be facilitated by the public availability of recent suites of N-body simulations such as Symphony (Nadler et al., 2022) and MORIA (Diemer, 2020). We consider our results based on Bolshoi-P and MDPL2 to be a promising proof-of-principle that motivates a dedicated convergence study in future work.

### Future Work

In future work on the population modeling approach outlined in §5.1, Diffprof will provide a basic ingredient in joint predictions for the density field traced by the galaxies and gas in dark matter halos. In one such application that relies on high-resolution simulations with data products that include merger trees, Diffprof is used in a pre-processing step in which \(\theta_{\rm NFW}\) provides an approximate description of \(c(t)\) for every halo. This class of application makes no use of DiffprofPop; instead, for each simulated halo, rather than characterizing \(c(t)\) by its tabulation at each of the \(\sim 200\) snapshots, \(c(t)\) is systematically replaced by its 3-dimensional, \(\theta_{\rm NFW}\)-based approximation.
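The sketch below illustrates this compression step under a simplifying assumption: in place of the exact Diffprof form of Eq. 4, it uses a generic power law in cosmic time with a sigmoid rolling index, which shares the qualitative behavior described above (constant \(c\approx c_{\rm min}\) at early times, smooth late-time growth controlled by \(\beta_{\rm late}\) and \(\tau_{\rm c}\)). Because the model is written in JAX, the exact gradients used by the optimizer come for free.

```python
import jax
import jax.numpy as jnp

def c_model(theta, t):
    # Schematic stand-in for Eq. 4 (the calibrated Diffprof form differs in
    # detail): the power-law index rolls from ~0 at early times, so that
    # c -> c_min, up to beta_late after the transition time tau_c. The
    # parameters are exponentiated so the optimizer works in an
    # unconstrained space while positivity is automatic.
    c_min, beta_late, tau_c = jnp.exp(theta)
    x = jnp.log10(t / tau_c)
    rolling_index = beta_late / (1.0 + jnp.exp(-4.0 * x))  # sigmoid roll
    return c_min * 10.0 ** (rolling_index * x)

def loss(theta, t_table, c_table):
    resid = jnp.log10(c_model(theta, t_table) / c_table)
    return jnp.mean(resid ** 2)

grad_loss = jax.jit(jax.grad(loss))  # exact gradients, cf. Sec. 5.1

# Compress a ~200-snapshot tabulated history (here a fake, monotonically
# rising target) into three stored numbers via plain gradient descent.
t_table = jnp.linspace(2.0, 13.8, 200)              # Gyr
c_table = 4.0 + 6.0 * (t_table / 13.8) ** 2         # fake c(t) target
theta = jnp.log(jnp.array([4.0, 1.0, 6.0]))         # initial guess
for _ in range(2000):
    theta = theta - 0.1 * grad_loss(theta, t_table, c_table)
theta_nfw = jnp.exp(theta)  # (c_min, beta_late, tau_c) to store on disk
```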
Forward modeling with survey-scale simulations is extremely memory intensive, and so Diffprof reduces the memory footprint of these predictions. Our formulation in terms of individual halo trajectories of \(c(t)\) may also simplify future modeling of the time-evolution of baryonic effects on halo profiles (e.g., Rudd et al., 2008; Schneider & Teyssier, 2015). In a separate set of applications, DiffprofPop is used to generate a synthetic population of halos with a distribution of \(c(t)\) that is statistically representative of simulated halo histories. Downstream ingredients for observables such as Compton-y are then based on these synthetic histories. In this second set of applications, in which DiffprofPop generates synthetic halo populations, downstream predictions are typically only made for one-point functions such as a mass-observable scaling relation and its scatter. This application of DiffprofPop is directly analogous to the way some semi-analytic models (e.g., Somerville & Primack, 1999; Benson, 2012) predict quantities such as the galaxy luminosity function using merger trees generated with either extended Press-Schechter methods (Bond et al., 1991; Bower, 1991; Lacey & Cole, 1993), or with machine learning (Nguyen et al., 2023). Recent work introducing the TOPSEM model of galaxy disks and bulges has used the DiffmahPop model in this fashion (Boco et al., 2023).

In the present work, we have defined the internal structure of dark matter halos in terms of the NFW approximation of a spherically symmetric radial distribution, but in reality, halos are not perfect spheres: their internal structure is more accurately described by a triaxial ellipsoid characterized by an ellipticity and a prolaticity (Jing & Suto, 2002), both of which vary with halo mass and redshift (e.g., Allgood et al., 2006; Bonamigo et al., 2015; Lau et al., 2021). Just as with concentration, there is a significant scatter in halo ellipticity at fixed halo mass, and this scatter is strongly correlated with halo assembly history (Despali et al., 2014; Chen et al., 2020). In a companion paper to the present work, we will generalize the Diffprof model to characterize the _joint_ evolution of halo concentration and triaxiality, again following a differentiable population modeling approach to capture physically realistic covariance with halo mass and assembly history.

Even when neglecting ellipsoidal deviations from spherical symmetry, the NFW density profile fails to capture the well-known "splashback" feature of the outer halo profile (More et al., 2015; Diemer, 2017; Mansfield et al., 2017; O'Neil et al., 2021). The splashback radius of a halo is tightly connected to its recent mass accretion history (Diemer & Kravtsov, 2014; Diemer et al., 2017), which has motivated numerous efforts to model and measure this signature in observations (Umetsu & Diemer, 2017; Zürcher & More, 2019; Khakaj et al., 2020). We note that the differentiable population modeling approach taken here is naturally extensible to alternative characterizations of the halo profile that capture the splashback signature in the outer profile (such as the fitting function in Diemer & Kravtsov, 2015). This extension would proceed by using simulated merger trees to build a smooth model for the evolution of the splashback profile parameters of individual halos (as in §3), and then by building a model for how the statistical distribution of the best-fitting parameters is connected to \(M_{\rm peak}\) and the Diffmah parameters (as in §4).
A directly analogous effort could similarly be applied to the recent dynamics-based model for halo profiles (Diemer, 2021). One promising application of our model would be to apply its predictions to modeling lensing, X-ray, and SZ effect profiles of galaxy groups and clusters using, e.g., the Baryon Pasting (BP) model. The BP model is a physically motivated, computationally efficient approach for modeling X-ray and CMB skies (Shaw et al., 2010; Flender et al., 2017; Osato & Nagai, 2023), for interpreting cross-correlations of weak lensing and SZ surveys (Osato et al., 2018), and for modeling upcoming multi-wavelength cross-correlations (Shirasaki et al., 2020). For example, it has recently been shown in Green et al. (2020) that scatter in the thermal SZ effect is largely driven by variance in the assembly histories of cluster-mass halos, and the latter is faithfully captured by the Diffmah approximation. Further motivation for this application of our model comes from recent advances that improve the sophistication of the observational measurement of cluster pressure profiles (e.g., Anbajagane et al., 2022; Kéruzoré et al., 2022). These results create a timely opportunity for an effort to develop and calibrate a model for the _joint_ dependence of the mean and scatter of these multi-wavelength cluster observables.

Another potentially interesting future application would be to adapt existing prediction pipelines for the evolution of halo substructure (e.g., Zentner et al., 2005; Jiang et al., 2021) to leverage the Diffmah and Diffprof modeling ingredients. Recent progress in modeling subhalo orbital evolution has demonstrated the importance of accounting for the evolution of the host halo potential (Ogiya et al., 2021), and so our model for \(c(t)\) may be useful due to its capability to capture physically realistic correlations between halo assembly and internal structure (see, e.g., Jiang & van den Bosch, 2017). In future work, our larger aim is to unify theoretical predictions for the evolution of halo mass, internal structure, and substructure using the differentiable population modeling framework employed here.

## 6 Conclusion

Our principal findings are summarized as follows:

1. We have built a new model for the evolution of the NFW concentration of individual dark matter halos across time. In our model, halo concentration evolves as a power-law function of time with a rolling index, and is characterized by three free parameters (see Eq. 4). For halos identified by Rockstar in the Bolshoi-P and MDPL2 simulations with masses between \(10^{11.5}M_{\odot}\) and \(10^{15}M_{\odot}\), we have demonstrated that our model provides an unbiased approximation to the evolution of concentration for cosmic time \(t\gtrsim 2\) Gyr.

2. We have additionally built a model for the concentrations of halo populations, and demonstrated that our model captures both _average_ trends in the evolution of concentration, as well as its _diversity_, including physically realistic correlations with both halo mass and assembly history.

3. Our Python code, Diffprof, is publicly available, can be installed with pip or conda, and includes Jupyter notebooks providing demonstrations of typical use cases. A parallelized script in the Diffprof repository can be used to fit the concentration histories of individual simulated halos. The Diffprof code also includes a convenience function that can be used to generate Monte Carlo realizations of the concentration histories of cosmologically realistic populations of halos.
Precomputed fits for hundreds of thousands of halos in the Bolshoi-P and MDPL2 simulations are available at [https://portal.nersc.gov/project/hacc/aphearin/diffprof_data/](https://portal.nersc.gov/project/hacc/aphearin/diffprof_data/).

## Acknowledgements

We thank Alex Alarcon, Matt Becker, Erwin Lau, Luisa Lucie-Smith, Phil Mansfield, Ismael Mendoza, and Kuan Wang for helpful discussions. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) and the Partnership for Advanced Supercomputing in Europe (PRACE, www.prace-ri.eu) for funding the MultiDark simulation project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (LRZ, www.lrz.de). The Bolshoi simulations have been performed within the Bolshoi project of the University of California High-Performance AstroComputing Center (UC-HiPACC) and were run at the NASA Ames Research Center. We acknowledge use of the Bebop cluster in the Laboratory Computing Resource Center at Argonne National Laboratory. Work done at Argonne was supported under the DOE contract DE-AC02-06CH11357. This work was supported in part by a Yale Summer Experience Award and the DOE contract DE-AC02-06CH11357. DN is supported by NSF (AST-2206055) and NASA (80NSSC22K0821 & TM3-24007X) grants. This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. This work made extensive use of NumPy (Van Der Walt et al., 2011), SciPy (Jones et al., 2016), Jupyter (Ragan-Kelley et al., 2014), IPython (Pérez & Granger, 2007), scikit-learn (Pedregosa et al., 2011), JAX (Bradbury et al., 2018), conda-forge (conda-forge community, 2015), Matplotlib (Hunter, 2007), as well as the Astrophysics Data Service (ADS) and the arXiv preprint repository.

## Data Availability

Software and data underlying this article are publicly available at the Diffprof code repository on GitHub, [https://github.com/BaryonPasters/diffprof](https://github.com/BaryonPasters/diffprof).
2309.16781
Hallucination Reduction in Long Input Text Summarization
Hallucination in text summarization refers to the phenomenon where the model generates information that is not supported by the input source document. Hallucination poses significant obstacles to the accuracy and reliability of the generated summaries. In this paper, we aim to reduce hallucinated outputs or hallucinations in summaries of long-form text documents. We have used the PubMed dataset, which contains long scientific research documents and their abstracts. We have incorporated the techniques of data filtering and joint entity and summary generation (JAENS) in the fine-tuning of the Longformer Encoder-Decoder (LED) model to minimize hallucinations and thereby improve the quality of the generated summary. We have used the following metrics to measure factual consistency at the entity level: precision-source and F1-target. Our experiments show that the fine-tuned LED model performs well in generating the paper abstract. Data filtering techniques based on some preprocessing steps reduce entity-level hallucinations in the generated summaries in terms of some of the factual consistency metrics.
Tohida Rehman, Ronit Mandal, Abhishek Agarwal, Debarshi Kumar Sanyal
2023-09-28T18:22:16Z
http://arxiv.org/abs/2309.16781v1
# Hallucination Reduction in Long Input Text Summarization

###### Abstract

Hallucination in text summarization refers to the phenomenon where the model generates information that is not supported by the input source document. Hallucination poses significant obstacles to the accuracy and reliability of the generated summaries. In this paper, we aim to reduce hallucinated outputs or hallucinations in summaries of long-form text documents. We have used the PubMed dataset, which contains long scientific research documents and their abstracts. We have incorporated the techniques of data filtering and joint entity and summary generation (JAENS) in the fine-tuning of the Longformer Encoder-Decoder (LED) model to minimize hallucinations and thereby improve the quality of the generated summary. We have used the following metrics to measure factual consistency at the entity level: precision-source and F1-target. Our experiments show that the fine-tuned LED model performs well in generating the paper abstract. Data filtering techniques based on some preprocessing steps reduce entity-level hallucinations in the generated summaries in terms of some of the factual consistency metrics.

Hallucination, summary-worthy entities, JAENS, LED, data filtering, text summarization

## 1 Introduction

With the exponential growth of textual data, the need for effective summarization techniques becomes crucial to extracting relevant and concise information from lengthy documents. Text summarization plays a vital role in various domains, including news articles, legal documents, and scientific papers. However, when it comes to handling long input texts, such as research papers or legal documents, the task becomes even more challenging. The input documents of such tasks are often significantly longer than the maximum context lengths of most standard transformer models. This has motivated researchers to explore changes in model architecture and training strategies. For instance, to avoid the quadratic growth in memory consumption of the attention computation in transformers, many memory-efficient transformer variants have been proposed in [1]. To handle long inputs, Beltagy et al. [2] have added a long input pre-training stage to the transformer, while Chalkidis et al. [3] have only fine-tuned the models with long inputs without any pre-adaptation.

In the context of long input text summarization, one common issue is the presence of _hallucinations_, that is, the generated summary includes factual inconsistencies or introduces information not present in the source document. Hallucinations can be categorized as intrinsic and extrinsic hallucinations [4]. Intrinsic hallucinations occur when the model interprets information from the input text incorrectly but uses the terms or concepts that occur in the source document. Extrinsic hallucinations occur when the model generates text that does not match the input text, that is, uses terms and concepts not even present in the source document. Hallucinations can undermine the reliability and accuracy of the summarization process, potentially leading to misinformation or misleading interpretations. These factual contradictions can exist at the entity or phrase level. A model-generated summary may include named-entities that were not included in the source document. This is known as the entity hallucination problem [5].

The main contributions of this paper are:

1. We use the Longformer Encoder-Decoder (LED) model [2] to generate summaries of scientific articles in the PubMed dataset [6].
In addition, we explore two techniques, namely, data filtering and JAENS (Joint sAlient ENtity and Summary generation) [5], to study their effect on the factual consistency of the generated summaries.

2. We analyze the factual consistency of the output summary at the entity level using the following metrics: precision-source and F1-target, introduced by [5]. We also use the traditional metrics, namely, ROUGE [7], METEOR [8], and BERTScore [9], to evaluate the performance of the models. The entity-based data filtering technique improves the precision-source, but the other metrics achieve higher values when fine-tuning with LED is done without the other two techniques.

Our code and results are available on GitHub (see footnote 1).

Footnote 1: [https://github.com/tohidaehman/Hallucination-Reduction-Text-Summarization](https://github.com/tohidaehman/Hallucination-Reduction-Text-Summarization)

## 2 Literature survey

Early research efforts in text summarization predominantly focused on extractive methods, which involve selecting the most significant sentences or phrases from the source document to form the gist. While extractive summarization approaches achieved reasonable success, it was hard to modify these methods to handle information that required rephrasing or merging content from multiple sentences. This limitation led to research in abstractive summarization techniques, which aim to generate summaries by understanding the source text and producing new sentences that capture the essential information. The emergence of recurrent neural networks (RNNs) that are capable of processing and producing text has significantly improved abstractive summarization, but they sometimes exhibit undesirable behavior such as incorrectly reproducing factual details, an inability to deal with out-of-vocabulary (OOV) words, and repeating themselves [10]. The pointer-generator model with a coverage mechanism helps to resolve the problems of out-of-vocabulary (OOV) words and repetitive phrase generation [11; 12; 13; 14; 15].

Large pre-trained transformer models have proven to be exceptionally capable of dealing with natural language tasks [16; 17]. Handling extended textual sequences, on the other hand, remains a considerable issue for these models. The challenging input documents are often substantially longer than the maximal context lengths of typical transformer models, necessitating both specialized model architectural adjustments and unique training regimes to accommodate them. For example, numerous memory-efficient transformer variations have been proposed to prevent the quadratic escalation in memory consumption of the attention estimation in transformers.

Another severe issue is the inability of current abstractive summarization methods to generate faithful results. These systems frequently struggle to verify that the generated summaries only include information extracted from the source document and do not include manufactured or hallucinated statements. These hallucinations can occur for a variety of causes, including biases in the training data, a lack of context perception, or model over-optimization. Cao et al. [18] and Kryscinski et al. [19] reported that approximately 30% of the summaries generated by seq2seq models suffer from the issue of hallucination. As a result, attention in the NLP community has increasingly been drawn to the faithfulness and factual aspects of abstractive summarization [19, 20, 21].
Many recent works study entity-level and relation-level hallucination problems in the generated text. Nan et al. [5] address entity hallucination by applying a filter on the training data and multi-task learning. Goyal and Durrett [20] study relation hallucination, that is, whether the semantic relationships manifested by the individual dependency arcs in a generated sentence are entailed by the source sentence. One notable work by Narayan et al. [22] incorporates entity chain content planning to guide faithful summary generation. There has been growing interest in quantitatively measuring the faithfulness of text generation models. The most widely-adopted evaluation metrics for text generation, such as ROUGE [7] and BERTScore [9], correlate poorly with the human-perceived faithfulness of the generated text [19]. Recent studies explore categorical and content-based analysis for measuring the faithfulness of summaries [20].

## 3 Methodology

To handle long input sequences, we utilized the pre-trained checkpoints of the Longformer Encoder-Decoder (LED) model [2], which incorporates sliding window and dilated sliding window attention mechanisms. It consists of both the encoder and decoder Transformer stacks, but instead of using full self-attention in the encoder, it employs the Longformer's efficient local+global attention pattern. The decoder applies full self-attention to all encoded tokens and previously decoded locations. Because pre-training LED is expensive, the authors of [2] initialized the LED parameters from BART and adhered to BART's exact design in terms of hidden sizes and number of layers. This allows it to effectively process lengthy inputs. We performed fine-tuning of the pre-trained LED model to adapt it specifically for text summarization of scientific documents. To ensure the accuracy of the summaries, we applied scispaCy-based Named Entity Recognition (NER) to the ground truth summaries. We applied the JAENS (Joint sAlient ENtity and Summary generation) approach to prepend salient entities to the abstracts. Training the model to recognize summary-worthy named-entities aims to enhance the precision and recall related to named-entities in the generated summaries. We have performed experiments with three variants of the LED model: (1) the fine-tuned LED model, (2) the fine-tuned LED model with the filtered dataset, and (3) the fine-tuned LED model using the JAENS approach on the filtered dataset.

### Fine-tuning LED

Pre-trained models like LED learn rich language representations from a large corpus. Fine-tuning customizes these models for specific tasks. It initializes the model with pre-trained weights, then fine-tunes it on a task-specific dataset using backpropagation. Fine-tuning leverages the model's language understanding, saves time and resources, and requires less labeled data. This approach enhances text summarization by adapting the model to task-specific data while leveraging its pre-trained knowledge.

### Entity-based data filtering

As demonstrated successfully by [5], the training dataset's quality has a significant impact on the amount of entity-level hallucination present in the generated summary. With that in mind, we applied scispaCy Named Entity Recognition (NER) to the gold summaries of the PubMed dataset. This allows us to identify all the named-entities present in the gold summary. Our objective is to ensure that these named-entities have corresponding \(n\)-gram matches within the source document.
For unigram matching, we avoid matching any stop words. Therefore, if any named-entity of a sentence in the summary cannot be found within the source document, we exclude that sentence from the summary (a minimal code sketch of this procedure is given below). If the summary consists of a single sentence and the filtering technique requires removing it, then the entire article-summary pair is removed from the dataset.

### Joint sAlient ENtity and Summary generation (JAENS)

The JAENS (Joint sAlient ENtity and Summary generation) approach, originally introduced by Nan et al. [5], is an alternative generative approach aimed at enhancing entity-level precision and recall metrics. JAENS trains the LED model to construct a sequence that contains summary-worthy named-entities, a special token, and the summary itself, as opposed to typical summarization approaches. This approach enables the model to simultaneously learn the identification of summary-worthy named-entities while generating summaries, similar to the multitask learning approach. By prioritizing the generation of salient named-entities in the decoder, JAENS ensures that the summaries incorporate and highlight these important entities through decoder self-attention. By incorporating the JAENS approach into our project, we aim to mitigate entity-level summary hallucinations and improve the overall quality of the generated summaries.

## 4 Experimental setup

### Datasets

We used a dataset collected from the scientific repository PubMed (see footnote 2), which was introduced in [6]. We chose scientific papers as our dataset because they are examples of long documents with a standard discourse structure. Furthermore, scientific papers are rich in domain-specific terminology and technical information, which makes them an important source of information for researchers and practitioners alike. PubMed is a biomedical literature database that contains over 30 million citations and abstracts of research articles. The dataset contains almost 19,000 scholarly publications on diabetes from the PubMed database, which are categorized into one of three categories. In our experiments, we use 2000 examples for training, 250 for validation, and 250 for testing. After applying the entity-based filtering procedure, the dataset contained 1798 training examples, 232 validation examples, and 236 test examples. The average number of sentences in a summary before applying the entity-based data filtering technique was 7.33, 7.04, and 7.51 for the training, validation, and test datasets, respectively; after filtering, it is 4.34, 4.11, and 4.58, respectively.

Footnote 2: [https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/)

### Data processing

We eliminated all punctuation, numerals, special characters, mathematical formulas, and citation markers from the documents and lowercased the entire corpus. While going through the documents, we checked that they had an appropriate length and structure. If a document was too long, like a thesis, or too short, like a tutorial announcement, we removed it. We also looked for documents that did not have an abstract or a clear structure. To understand the structure, we used the section headings as clues. Sometimes, documents had figures or tables that did not help us understand the text. We removed those, keeping only the words.
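As a concrete illustration of the entity-based filtering described in §3.2, the sketch below uses the scispaCy en_core_sci_sm pipeline to drop summary sentences containing entities unsupported by the source article. The loading details and the exact token-matching rules are our assumptions and may differ from the implementation in our repository.

```python
import spacy

# The biomedical NER pipeline named in the paper; requires the scispacy
# package and the en_core_sci_sm model to be installed.
nlp = spacy.load("en_core_sci_sm")

def entity_in_source(entity_span, source_vocab):
    # Partial n-gram matching: the entity counts as found in the source if
    # any of its non-stop-word tokens occurs there.
    return any(tok.lower_ in source_vocab
               for tok in entity_span if not tok.is_stop)

def filter_pair(article_text, summary_text):
    # Keep only summary sentences whose named-entities are all supported by
    # the article; return None to signal that the whole article-summary
    # pair should be dropped from the dataset.
    source_vocab = {tok.lower_ for tok in nlp(article_text)}
    kept = [sent.text for sent in nlp(summary_text).sents
            if all(entity_in_source(ent, source_vocab) for ent in sent.ents)]
    return " ".join(kept) if kept else None
```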
In our model, the maximum number of allowed input tokens is 8192, the maximum number of output tokens is 512, and the minimum number of output tokens is 100. In line with the JAENS approach, we used the scispaCy model en_core_sci_sm (see footnote 3) to generate summary-worthy named-entities and prepended the list of comma-separated named-entities to the ground truth summary (abstract) for each sample of the dataset. The sequence of named-entities is followed by a special token, which helps separate the entities from the abstract. This special token is chosen from the model's vocabulary such that it is not commonly occurring and can help the model learn to recognize the named-entities separately from the actual abstract. This helps in training the model, as the model applies special attention to these entities while generating the summary.

Footnote 3: [https://allenai.github.io/scispacy/](https://allenai.github.io/scispacy/)

### Implementation details

We conducted our experiments using Google Colab Pro+, which provided us with an NVIDIA A100 GPU. For all experiments, we used the base variant of the pre-trained LED model, led-base-16384 (see footnote 4), due to resource limitations. Firstly, we fine-tuned the LED model on the original 2000-sample PubMed dataset. Secondly, we utilized a filtered version of this dataset, obtained by removing article-abstract pairs with a \(prec_{s}\) score (to be defined in the next subsection) less than 1 (i.e., we ensure that the abstract - which is the ground-truth summary - contains almost no hallucinations of entities), and performed fine-tuning on this filtered dataset. Finally, we incorporated the JAENS approach into the fine-tuning process by prepending summary-worthy named-entities to the abstract for each example of the filtered train dataset, aiming to enhance entity-level precision, recall, and F1 metrics in the generated summaries and thus reduce entity-level hallucinations. For all the models, we fine-tuned for up to 10 epochs. To evaluate the models, we used the same test dataset that was obtained after the entity-based data filtering technique.

Footnote 4: [https://huggingface.co/allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384)

### Evaluation metrics

We employ a comprehensive set of widely used text summarization evaluation metrics, including ROUGE [7], METEOR [8], and BERTScore [9], to assess the quality and effectiveness of the generated summaries. Unfortunately, these metrics are inadequate to quantify factual consistency [19]. Hence, we have also used three new metrics, introduced by [5], to evaluate the factual consistency of the generated summaries. We define \(\mathcal{N}(t)\) as the count of named-entities in the target (ground truth or gold summary) and \(\mathcal{N}(h)\) as the count of named-entities in the hypothesis (generated summary). To determine the number of entities in the hypothesis that have corresponding matches in the source document, we use \(\mathcal{N}(h\cap s)\). In circumstances where a named-entity in the summary spans many words, we consider it a match if any component of the named-entity can be identified in the original document, permitting partial matching based on \(n\)-grams. **Precision-source**, defined as \(prec_{s}=\mathcal{N}(h\cap s)/\mathcal{N}(h)\), is a metric that is used to determine the intensity of hallucination in relation to the source. Note that \(prec_{s}\) represents the percentage of entities mentioned in the generated summary that can be retrieved from the source.
Low \(prec_{s}\) indicates that hallucination is possibly present in the generated text. However, \(prec_{s}\) does not capture the computed summary's entity-level correctness in relation to the ground-truth summary. Entity-level accuracy of the generated summary is calculated using the **precision-target**, \(prec_{t}=\mathcal{N}(h\cap t)/\mathcal{N}(h)\); the **recall-target**, \(recall_{t}=\mathcal{N}(h\cap t)/\mathcal{N}(t)\); and the **F1 score**, \(F1_{t}=\frac{2\,prec_{t}\,recall_{t}}{prec_{t}+recall_{t}}\). Here, \(\mathcal{N}(h\cap t)\) represents the number of matched named-entities in the generated summary and the ground truth summary. Note that the above precision and recall scores can be calculated in two ways. One is to consider the entity mentions in each document (which may be the source \(s\), target \(t\), or hypothesis \(h\)) as a set, so that multiple occurrences of an entity in a document are equivalent to a single occurrence. The other is to consider the entity mentions in a document as a list; here, if a metric is defined as \(\mu=\mathcal{N}(x\cap y)/\mathcal{N}(x)\), then for each entity mention in \(x\), we check if it occurs in \(y\), and if so, increment the intersection count \(\mathcal{N}(x\cap y)\) by unity. The second approach is followed in [5]. In the first approach, we denote the metrics as \(prec_{s}^{U}\), \(prec_{t}^{U}\), \(recall_{t}^{U}\), and \(F1_{t}^{U}\) (\(U\) indicates that only unique entity mentions are considered). In the second, we represent them as \(prec_{s}^{NU}\), \(prec_{t}^{NU}\), \(recall_{t}^{NU}\), and \(F1_{t}^{NU}\).

## 5 Results

### Comparison of the models

In this sub-section, we report the results of the variations of fine-tuning the LED model on the PubMed dataset. Table 1 shows the F1-scores for ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L), BERTScore, and METEOR metrics along with values of the entity-level factual consistency metrics \(prec_{s}^{U}\), \(prec_{s}^{NU}\), \(F1_{t}^{U}\) and \(F1_{t}^{NU}\) on the filtered test dataset.

| Model Name | R-1 | R-2 | R-L | R-LSum | METEOR | BERTScore | \(prec_{s}^{U}\) | \(prec_{s}^{NU}\) | \(F1_{t}^{U}\) | \(F1_{t}^{NU}\) |
|---|---|---|---|---|---|---|---|---|---|---|
| Fine-tuned LED | **35.12** | **14** | **21.57** | **29.96** | **32.08** | **84.96** | 93.38 | 94.76 | **43.76** | **46.14** |
| Fine-tuned LED + data filtering | 33.18 | 12.04 | 19.93 | 28.48 | 27.43 | 84.74 | **96.04** | **96.83** | 40.15 | 43.27 |
| Fine-tuned LED + data filtering + JAENS | 30.21 | 09.13 | 18.26 | 25.87 | 23.55 | 84.35 | 92.16 | 89.36 | 40.15 | 36.34 |

Table 1: Evaluation of the models: F1-scores for ROUGE, METEOR, and BERTScore, along with the \(prec_{s}^{U}\), \(prec_{s}^{NU}\), \(F1_{t}^{U}\) and \(F1_{t}^{NU}\) scores used for evaluating the factual consistency of the generated summaries for the PubMed dataset. All scores in percentage (%).

The LED model fine-tuned on the filtered dataset achieves the highest \(prec_{s}\) scores. However, when fine-tuning with LED is done without additional techniques like filtering or JAENS, the values of ROUGE, METEOR, BERTScore, and even \(F1_{t}\) are the highest. This shows that not only are \(n\)-gram matches and cosine similarity of embeddings higher for the plain LED model, but the entity-level hallucination is also lower for it. Nan et al. [5] also observed a reduction in ROUGE scores when data filtering and JAENS were applied, and remarked that it could be due to the increased complexity during decoding. Surprisingly, we find that data filtering and JAENS do not improve the \(F1_{t}\) scores. In the future, we intend to conduct a detailed study of this behaviour and try to decipher its reason.
This could be related to the inaccuracy in entity recognition that we observed for the dataset; for example, on manual review, we found that many phrases detected as entities do not appear to be very important, but their match/mismatch between the generated and golden summary does impact the \(F1_{t}\) scores. In contrast, [5] detects standard entity types, which can be recognized with high accuracy. Another difference with [5] is that in our case the dataset is much smaller and the summaries are longer.

### Case study

Figure 1 shows sample outputs generated by fine-tuning the LED model without the filtered dataset, with the filtered dataset, and with both the filtered dataset and JAENS. The entities detected in each summary are also shown. The original abstract consists of 5 sentences, but after applying the entity-based filtering technique it consists of only 2 sentences. In this case study, yellow highlighting represents an incorrect representation of an entity during summary generation, while cyan denotes a correct entity mention that was incorrectly generated by the fine-tuned LED model.

## 6 Conclusion

We applied the Longformer Encoder-Decoder model to scientific research papers to generate summaries and used data filtering along with the JAENS approach to reduce entity hallucinations. We found that the simple fine-tuned LED model performs the best in terms of ROUGE, METEOR, and BERTScore, but entity-based data filtering improves the scores of some of the factual consistency metrics. In the future, we would like to investigate in detail the reason behind the low performance of the JAENS approach. We also noticed that entities are not always identified with high recall and precision in the summary. We would like to analyze this issue in detail and improve the entity recognition module. Finally, we would like to study the reduction in ROUGE, METEOR, and BERTScore values that we observed in all the hallucination-mitigating designs.
2309.14196
Learning Restricted Boltzmann Machines with greedy quantum search
Restricted Boltzmann Machines (RBMs) are widely used probabilistic undirected graphical models with visible and latent nodes, playing an important role in statistics and machine learning. The task of structure learning for RBMs involves inferring the underlying graph by using samples from the visible nodes. Specifically, learning the two-hop neighbors of each visible node allows for the inference of the graph structure. Prior research has addressed the structure learning problem for specific classes of RBMs, namely ferromagnetic and locally consistent RBMs. In this paper, we extend the scope to the quantum computing domain and propose corresponding quantum algorithms for this problem. Our study demonstrates that the proposed quantum algorithms yield a polynomial speedup compared to the classical algorithms for learning the structure of these two classes of RBMs.
Liming Zhao, Aman Agrawal, Patrick Rebentrost
2023-09-25T14:56:30Z
http://arxiv.org/abs/2309.14196v1
# Learning Restricted Boltzmann Machines with greedy quantum search ###### Abstract Restricted Boltzmann Machines (RBMs) are widely used probabilistic undirected graphical models with visible and latent nodes, playing an important role in statistics and machine learning. The task of structure learning for RBMs involves inferring the underlying graph by using samples from the visible nodes. Specifically, learning the two-hop neighbors of each visible node allows for the inference of the graph structure. Prior research has addressed the structure learning problem for specific classes of RBMs, namely ferromagnetic and locally consistent RBMs. In this paper, we extend the scope to the quantum computing domain and propose corresponding quantum algorithms for this problem. Our study demonstrates that the proposed quantum algorithms yield a polynomial speedup compared to the classical algorithms for learning the structure of these two classes of RBMs. ## I Introduction Graphical models are widely used in probability theory and machine learning to describe the dependence structure among random variables. Various algorithms have been proposed for learning graphical models [1; 2; 3]. Models with latent (hidden) variables capture more complex dependencies compared to fully-visible models. Learning latent variable models using samples of visible variables is a major challenge in this field. A Restricted Boltzmann Machine (RBM) is a two-layer network consisting of a visible layer and a hidden layer, where variables within each layer are not connected. It can be represented as a weighted bipartite graph connecting visible nodes and hidden nodes. RBMs find applications in feature extraction and dimensionality reduction across various domains [4; 5]. However, learning general RBMs and related problems have been proven to be difficult [6]. Fortunately, certain classes of RBMs exhibit properties that can be efficiently learned. One such property is the two-hop neighborhood of visible nodes, which refers to the visible nodes connected to a specific node through a single hidden node. Learning this property is an example of _structure learning_, and knowing the two-hop neighborhood helps the learning of the marginal distribution of the visible nodes of the RBM. Bresler _et al._[7] proposed a classical greedy algorithm based on influence maximization for learning the two-hop neighbors of a visible node for ferromagnetic RBMs. In ferromagnetic RBMs, pairwise interactions between nodes and external fields are non-negative. The algorithm, based on the GHS (Griffiths-Hurst-Sherman) inequality, has a nearly quadratic runtime and logarithmic sample complexity with respect to the number of visible nodes. The runtime and sample complexity depend exponentially on the maximum degree, which is nearly optimal. Additionally, Goel [8] extended the results to locally consistent RBMs, where pairwise interactions associated with each latent node have the same sign but arbitrary external fields are allowed. The proposed classical greedy algorithm for learning two-hop neighbors is based on maximizing conditional covariance, relying on the FKG (Fortuin-Kasteleyn-Ginibre) inequality. The runtime and sample complexity with respect to the number of visible nodes are the same as in [7], but the dependency on the upper bound strength is doubly exponential. Quantum algorithms offer potential speed-ups over classical algorithms for certain problems [9; 10; 11]. 
Many quantum machine learning algorithms are based on amplitude amplification and estimation [12], which can achieve quadratic speedups in some parameters while potentially slowing down others. Quantum learning of graphical models such as factor graphs was considered in [13]. Quantum algorithms for learning generalized linear models, Sparsitron, and Ising models have been studied [14]. Quantum structure learning of MRFs has also been explored [15]. Quantum computation holds the promise of more efficient structure learning of RBMs and MRFs, which is of both theoretical and practical interest. In this paper, we present quantum algorithms for structure learning of the underlying graphs of ferromagnetic RBMs and locally consistent RBMs with arbitrary external fields. The quantum algorithms are based on the classical algorithms in [7] and [8] for non-degenerate RBMs with bounded two-hop degrees. We demonstrate that these quantum algorithms provide polynomial speed-ups over the classical counterparts in terms of sample dimensionality. Informally, our results are as follows. **Theorem 1** (Informal version of Theorem 4 and Theorem 6).: _Consider a ferromagnetic RBM (locally-consistent RBM) of \(n\) visible nodes where pairwise interactions and external fields are upper and lower-bounded. There exists a quantum algorithm that learns the two-hop neighborhood for a single visible node with high probability in \(\tilde{\mathcal{O}}(\sqrt{n})\) time and sample complexity close to the theoretical lower bound of \(\Omega(\log n)\)._ The remainder of the paper is organized as follows. In Sec. II, we introduce the notations used in this paper and provide an overview of RBM models. In Sec. III, we briefly review the classical greedy algorithm introduced in [7] and present the quantum version. In Sec. IV, we provide a quantum algorithm for structure learning of locally consistent RBMs. ## II Preliminaries **Notations.** Let \(\mathbb{Z}_{+}\) denote the set of positive integers, \(\mathbb{R}\) denote the set of real numbers, and \([N]=\{1,2,\cdots,N\}\). For all sets \(A\subset[N]\), define the indicator function \(\mathbb{I}_{A}\) as \[\mathbb{I}_{A}(x):=\begin{cases}1&\text{if}\quad x\in A,\\ 0&\text{otherwise}.\end{cases} \tag{1}\] Furthermore, let \(\sigma(x)=1/\left(1+e^{-x}\right)\) denote the sigmoid function for \(x\in\mathbb{R}\). **Restricted Boltzmann Machines.** Restricted Boltzmann Machines are a widely used class of graphical models with latent variables. An RBM consists of two layers, a visible layer and a hidden layer. Nodes within the same layer are not connected. Thus, an RBM can be represented as a bipartite graph. Figure 1 (a) illustrates an example of an RBM with 4 visible nodes and 4 hidden nodes. Given an RBM with \(n\) visible (observed) nodes \(\{X_{i}\}_{i\in[n]}\) and \(m\) latent nodes \(\{Y_{k}\}_{k\in[m]}\), the probability distribution for any configuration \(x\in\{\pm 1\}^{n}\) and \(y\in\{\pm 1\}^{m}\) is defined as \[P(X=x,Y=y)=\frac{1}{Z}\exp(x^{\mathsf{T}}Jy+f^{\mathsf{T}}x+g^{\mathsf{T}}y), \tag{2}\] where \(Z\) is the partition function, \(f\in\mathbb{R}^{n}\) and \(g\in\mathbb{R}^{m}\) are external fields, and \(J\in\mathbb{R}^{n\times m}\) is the interaction matrix. An RBM with fixed external fields \(f\) and \(g\), and interaction matrix \(J\) is denoted as RBM \((J,f,g)\). Learning an RBM involves determining the optimal parameters \(J,f,\) and \(g\). In this paper, our focus is on the underlying structure learning of an RBM.
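For intuition about Eq. (2), note that the bipartite structure makes the conditional distributions of the two layers factorize, so samples of the visible nodes can be generated by block Gibbs sampling. The following is a minimal, illustrative Python sketch of ours (the paper itself assumes the samples are simply given):

```
import numpy as np

def gibbs_sample_rbm(J, f, g, n_samples, burn_in=1000, seed=0):
    """Approximate samples of the visible nodes of RBM(J, f, g), Eq. (2).

    The bipartite structure makes P(Y | X) and P(X | Y) factorize over
    nodes, so we alternate layer-wise (block Gibbs) updates. Spins take
    values in {+1, -1}; successive samples remain correlated.
    """
    rng = np.random.default_rng(seed)
    n, m = J.shape
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))
    x = rng.choice([-1, 1], size=n)
    out = []
    for step in range(burn_in + n_samples):
        # P(Y_k = +1 | x) = sigma(2 ((x^T J)_k + g_k)) for +/-1 spins
        y = np.where(rng.random(m) < sigma(2.0 * (x @ J + g)), 1, -1)
        # P(X_i = +1 | y) = sigma(2 ((J y)_i + f_i))
        x = np.where(rng.random(n) < sigma(2.0 * (J @ y + f)), 1, -1)
        if step >= burn_in:
            out.append(x.copy())
    return np.array(out)  # shape (n_samples, n), entries in {+1, -1}
```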
One approach to learning the structure of an RBM is to learn all two-hop neighbors of each visible node. The definition of two-hop neighbors of a visible node is provided below. **Definition 1** (Two-hop Neighborhood).: _Let \(i\in[n]\) be a visible node for a fixed RBM \((J,f,g)\). The two-hop neighborhood of \(i\), denoted as \(\mathcal{N}_{2}(i)\), is defined as the smallest set of visible nodes \(S\subset[n]\setminus\{i\}\) such that conditioned on \(X_{S}\), \(X_{i}\) is conditionally independent of \(X_{j}\) for all visible nodes \(j\in[n]\setminus(S\cup\{i\})\). The two-hop degree of the RBM is defined as \(d_{2}=\max_{i\in[n]}\{|\mathcal{N}_{2}(i)|\}\)._ As mentioned in [8], \(\mathcal{N}_{2}(i)\) is always a subset of the graph-theoretic two-hop neighborhood, which is the smallest set \(S\) such that vertex \(i\) is separated from the other observable nodes in the graphical structure of the RBM. However, it may be a strict subset. By learning the two-hop neighbors of each visible node, we can reconstruct the underlying graph of the RBM. The underlying graph, denoted as \(G\), represents the visible nodes as vertices, and an edge between two visible nodes \(i\) and \(j\) indicates that they are two-hop neighbors in the corresponding RBM. For example, in Fig. 1, \(\mathbf{b}\) is the underlying graph of the RBM in \(\mathbf{a}\). We observe that visible nodes 1 and 2 are two-hop neighbors in \(\mathbf{a}\), and correspondingly, there is an edge between these two nodes in the underlying graph \(\mathbf{b}\). In order to learn the two-hop structure of an RBM, it is necessary to establish lower and upper bounds on the weights. A lower bound is needed to determine if there is an edge present in the RBM. If the interaction strength is weaker than the lower bound, we treat it as a non-edge. On the other hand, an upper bound is required to ensure that the distribution of any variable is not close to a deterministic value. This is a standard assumption in the literature on learning Ising models [7; 8]. We consider \((\alpha,\beta)\) non-degenerate RBMs defined as follows. **Definition 2**.: _An RBM is said to be \((\alpha,\beta)\)-non-degenerate if it satisfies the following conditions:_ * _for every_ \(i\in[n],j\in[m],\) _if_ \(|J_{ij}|\neq 0\)_, we have_ \(|J_{ij}|\geq\alpha\)_._ * _for every_ \(i\in[n]\)_, we have_ \(\sum_{j}|J_{ij}|+|f_{i}|\leq\beta\)_._ * _for every_ \(j\in[m]\)_, we have_ \(\sum_{i}|J_{ij}|+|g_{j}|\leq\beta\)_._ **Classical and quantum computational model.** We assume standard classical and quantum computational models [16]. In addition, a classical and corresponding quantum arithmetic model is assumed, which allows us to ignore issues arising from the fixed-point representation of real numbers and count the basic arithmetic operations as constant time operations. Figure 1: \(\mathbf{a}\). An example of an RBM with 4 visible nodes (orange circles) and 4 hidden nodes (grey circles). \(\mathbf{b}\). Underlying graph of the RBM in \(\mathbf{a}\). **Quantum data input.** Let us assume that we have quantum access to \(M\) samples from a distribution of an RBM. We define the quantum data input as follows: **Definition 3** (Quantum data input).: _Let \(\{x^{i}\in\{\pm 1\}^{n}\}_{i\in[M]}\) be \(M\) samples of an RBM with \(n\) visible nodes.
We say that we have quantum query access to the samples if for each sample \(i\in[M]\) we have access to a quantum circuit that performs_ \[\ket{j}\ket{\bar{0}}\rightarrow\ket{j}\ket{x_{j}^{i}}, \tag{3}\] _with \(\mathcal{O}\left(\log n\right)\) qubits in constant time, where \(j\in[n]\) and \(x_{j}^{i}\) is the configuration of node \(j\) in the \(i\)-th sample._ Usually, this reversible operation is defined via the bit-wise modular addition, while here we only specify the action on the computational initial state \(\ket{\bar{0}}\). **Quantum minimum finding.** In this paper, we utilize a quantum subroutine known as quantum minimum finding [10]. This subroutine provides an almost quadratic speedup compared to its classical counterpart. **Theorem 2** (Quantum minimum finding [10]).: _Given quantum access to a comparison oracle for elements of a data set of size \(N\), we can find the minimum element with success probability \(1-\rho\) with \(\mathcal{O}\left(\sqrt{N}\log(1/\rho)\right)\) queries and \(\widetilde{\mathcal{O}}(\sqrt{N}\log(1/\rho))\) quantum gates._ The minimum finding algorithm can be used to find the maximum element as well by changing the oracles accordingly. ## III Structure learning for ferromagnetic RBMs In Ref. [7], an efficient greedy algorithm based on influence maximization is described for learning the two-hop neighbors of any visible node in a ferromagnetic RBM. This algorithm allows for the recovery of the two-hop structure of the ferromagnetic RBM. In this section, we first introduce the definition of the influence function. We then review the classical greedy algorithm presented in Ref. [7]. Finally, we propose a quantum version of the classical algorithm and demonstrate its polynomial speedup over the classical counterpart. The definition of a ferromagnetic RBM is provided below. **Definition 4** (Ferromagnetic RBMs).: _A ferromagnetic RBM is an RBM \((J,f,g)\) in which the pairwise interactions and external fields are all non-negative, i.e., \(J_{ij}\geq 0\), \(f_{i}\geq 0\), and \(g_{j}\geq 0\), \(\forall i\in[n],j\in[m]\)._ There are various measures to quantify correlations between variables, and one of them is the expected "magnetization" of a node when certain other nodes are fixed to \(+1\). The formal definition is as follows. **Definition 5** (Discrete Influence Function).: _Given a visible node \(u\in[n]\) and a subset \(S\subset[n]\setminus\{u\}\) of a ferromagnetic RBM, let \(s=|S|\); the discrete influence function is defined as_ \[I_{u}(S):=\mathbb{E}\left[X_{u}|X_{S}=\{1\}^{s}\right]. \tag{4}\] The discrete influence function is a monotone submodular function for any visible node \(u\in[n]\), which has been proven using the Griffiths-Hurst-Sherman inequality in [7]. It has also been shown that any set \(S^{\prime}\) whose influence \(I_{u}(S^{\prime})\) is close enough to the maximum must contain the two-hop neighbors of \(u\). Given \(M\) samples from a ferromagnetic RBM, the empirical discrete influence function is defined as follows \[\widehat{I}_{u}(S):=\widehat{\mathbb{E}}\left[X_{u}|X_{S}=\{1\}^{s}\right], \tag{5}\] where \(\widehat{\mathbb{E}}\) denotes the empirical expectation.
Expanding the above equation yields: \[\widehat{I}_{u}(S) = \sum_{x_{u}\in\{\pm 1\}}x_{u}\widehat{P}(X_{u}=x_{u}|X_{S}=\{1\}^{ s}) \tag{6}\] \[= \frac{2\widehat{P}(X_{S\cup\{u\}}=\{1\}^{s+1})}{\widehat{P}(X_{S} =\{1\}^{s})}-1,\] where the second line is obtained by using Bayes' rule and the fact that \(\sum_{x_{u}\in\{\pm 1\}}\widehat{P}(x_{u}|X_{S}=\{1\}^{s})=1.\) The empirical probability for \(M\) samples can be obtained by \[\widehat{P}(X=x)=\frac{1}{M}\sum_{i=1}^{M}\mathbb{1}_{\{X^{i}=x\}}. \tag{7}\] **Classical greedy algorithm.** Given a number of samples from a ferromagnetic RBM, the two-hop neighbors of each visible node can be found from the maximizer of the empirical influence function [7]. For a visible node \(u\in[n]\), a visible node subset \(S\) (excluding \(u\)), and a visible node \(j\) that is neither \(u\) nor part of \(S\), it has been demonstrated that if \(j\) is not a two-hop neighbor of \(u\), the difference \(I_{u}(S\cup j)-I_{u}(S)\) equals zero. Conversely, if \(j\) is a two-hop neighbor of \(u\), we have \(I_{u}(S\cup j)-I_{u}(S)\geq 2\eta\), where the threshold \(\eta\) will be defined later. Based on this insight, the authors propose a greedy algorithm, as outlined in Algorithm 1, to identify the two-hop neighbors of each visible node. The algorithm's performance, including the required number of samples and the associated time complexity, has been analyzed in Ref. [7]. **Theorem 3** (Theorem 6.1 in Ref. [7]).: _Given \(M\) samples \(\{x^{i}\in\{\pm 1\}^{n}\}_{i\in[M]}\) of the visible nodes of a ferromagnetic RBM which is \((\alpha,\beta)\)-non-degenerate, and has two-hop degree \(d_{2}\). Let \(\eta=\alpha^{2}\sigma(-2\beta)(1-\tanh(\beta))^{2}\), \(k=d_{2}\log(4/\eta)\), for \(\delta>0\), as long as_ \[M\geq 2^{2k+3}(d_{2}/\eta)^{2}(\log(n)+k\log(en/k))\log(4/\delta), \tag{8}\] _then for every visible node \(u\in[n]\), Algorithm 1 returns the set \(\mathcal{N}_{2}(u)\) with probability \(1-\delta\) in a time complexity of \(\mathcal{O}(Mkn)\)._ The total run time for all \(n\) visible nodes is therefore \(\mathcal{O}(Mkn^{2})\). It is important to note that the number of iterations \(k\) depends on the two-hop degree and the upper and lower bounds on the strengths of the RBM. **Quantum greedy algorithm.** We then propose a quantum version of Algorithm 1 to learn the two-hop neighborhood of any visible node in a ferromagnetic RBM, assuming we have quantum access to \(M\) samples of the RBM. The key idea behind the quantum algorithm is to replace step 3 of Algorithm 1 with a quantum subroutine of maximum finding [10]. This quantum subroutine is expected to provide a quadratic speed-up compared to the classical setting in step 3. Firstly, we explore the quantum representation of the empirical discrete influence function. Given quantum data input from Definition 3, we can prepare a quantum state that encodes the value of the empirical discrete influence \(\widehat{I}_{u}(S\cup\{j\})\) as defined in Eq. (5). For the node subset \(S\), we begin by identifying the samples where the configuration \(x_{S}=\{1\}^{s}\) and storing the corresponding sample indices classically.
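Algorithm 1 itself is stated in Ref. [7]; as a classical point of reference for the quantities just defined, the following Python sketch computes the index set \(M_{S}\) (formalized in Lemma 1 below) and the empirical influence of Eq. (6), and runs an illustrative greedy loop. It is our own sketch built from the stated equations; the exact thresholding and pruning steps of Algorithm 1 in [7] may differ:

```
import numpy as np

def index_set(samples, S):
    """M_S of Lemma 1 below: indices of samples whose restriction to S is all +1."""
    if not S:
        return np.arange(len(samples))
    return np.where((samples[:, sorted(S)] == 1).all(axis=1))[0]

def empirical_influence(samples, u, S):
    """I_hat_u(S) of Eqs. (5)/(6): 2 P_hat(X_{S u {u}} = 1) / P_hat(X_S = 1) - 1."""
    M_S = index_set(samples, S)
    if len(M_S) == 0:
        return 0.0  # conditioning event never observed
    M_Su = M_S[samples[M_S, u] == 1]
    return 2.0 * len(M_Su) / len(M_S) - 1.0

def greedy_two_hop(samples, u, eta, k):
    """Greedy influence maximization for node u: add the argmax candidate
    while its marginal gain exceeds the threshold eta."""
    n = samples.shape[1]
    S = set()
    for _ in range(int(k)):
        cand = [j for j in range(n) if j != u and j not in S]
        gains = {j: empirical_influence(samples, u, S | {j})
                    - empirical_influence(samples, u, S) for j in cand}
        j_star = max(gains, key=gains.get)
        if gains[j_star] < eta:
            break
        S.add(j_star)
    return S
```

Each outer iteration scans all \(\mathcal{O}(n)\) candidates over the \(M\) samples, matching the \(\mathcal{O}(Mkn)\) classical cost of Theorem 3; the quantum algorithm below replaces exactly this inner maximization.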
**Lemma 1**.: _Given access to \(M\) samples of a ferromagnetic RBM, for a given set \(S\subset[n]\), we can identify the samples where \(x_{S}=\{1\}^{s}\) and record the corresponding sample indices in a set_ \[M_{S}:=\{i\ |\ i\in[M],x_{S}^{i}=\{1\}^{s}\}, \tag{9}\] _with \(\mathcal{O}\left(Ms\right)\) queries._ Proof.: Querying the \(M\) samples for the set \(S\) requires \(\mathcal{O}\left(Ms\right)\) queries. Hence, the sample indices where \(x_{S}=\{1\}^{s}\) can be found and stored in the set \(M_{S}\) in \(\mathcal{O}\left(Ms\right)\) time. Since Eq. (5) can be expanded as Eq. (6), the empirical discrete influence \(\widehat{I}_{u}(S\cup\{j\})\) can be obtained by using the probabilities \(\widehat{P}(X_{S\cup\{u,j\}}=\{1\}^{s+2})\) and \(\widehat{P}(X_{S\cup\{j\}}=\{1\}^{s+1})\). We now show that the probabilities can be encoded into a quantum state. **Lemma 2**.: _Let us be given quantum query access to \(M\) samples of the observable nodes of a ferromagnetic RBM according to Definition 3. Let \(S\subset[n]\) be a subset with \(|S|=s\leq k\) and let \(j\in[n]\setminus S\). Assuming we have the sample index set \(M_{S}\) as defined in Lemma 1, there exists a quantum circuit that performs_ \[|j\rangle\left|\bar{0}\right\rangle\rightarrow|j\rangle\left| \widehat{P}\left(X_{S\cup\{j\}}=\{1\}^{s+1}\right)\right\rangle, \tag{10}\] _with \(\mathcal{O}\left(M\right)\) quantum queries and \(\mathcal{O}\left(M\right)\) run time._ Proof.: The proof details are provided in Appendix A. Similarly, by Lemma 1, we can find the set \(M_{S\cup\{u\}}\), which stores the sample indices where the configuration of each visible node in \(S\cup\{u\}\) is equal to one. We can then prepare a quantum state representing the probability that the configuration of each node in \(S\cup\{j,u\}\) is equal to one. Assuming we already have the sets \(M_{S}\) and \(M_{S\cup\{u\}}\), we demonstrate that the empirical influence \(\widehat{I}_{u}(S\cup\{j\})\) can be prepared in a quantum state. **Lemma 3**.: _Let us be given quantum query access to \(M\) samples of a ferromagnetic RBM according to Definition 3. Let \(d_{2}\) be the two-hop degree and \(\eta,k\) be as in Theorem 3. Given a node \(u\in[n]\), a subset \(S\subset[n]\setminus\{u\}\) with \(|S|\leq k\), and the sets \(M_{S}\) and \(M_{S\cup\{u\}}\) from Lemma 1, for \(j\in[n]\setminus(S\cup\{u\})\), there exists a unitary operator \(U_{\mathrm{inf}}\) which performs_ \[|j\rangle\left|\bar{0}\right\rangle\rightarrow|j\rangle\left| \widehat{I}_{u}(S\cup\{j\})\right\rangle, \tag{11}\] _where \(\widehat{I}_{u}(S\cup\{j\})\) is the empirical influence function defined in Eq. (5). The circuit requires \(\mathcal{O}\left(M\right)\) quantum queries and has a run time of \(\mathcal{O}\left(M\right)\)._ Proof.: We can prepare a quantum state that encodes the empirical influence \(\widehat{I}_{u}(S\cup\{j\})\) by following these steps. First, using Lemma 2, for a visible node \(j\in[n]\setminus(S\cup\{u\})\), we can prepare the state \[|j\rangle\left|\widehat{P}\left(X_{S\cup\{j\}}=\{1\}^{s+1} \right)\right\rangle\left|\widehat{P}\left(X_{S\cup\{u,j\}}=\{1\}^{s+2} \right)\right\rangle. \tag{12}\] By calculating the influence based on these probabilities using Eq. (6), we can prepare the desired influence \(\widehat{I}_{u}(S\cup\{j\})\) in a quantum state. Finally, we undo the unitary operator used in the state preparation process, resulting in the state given by Eq. (11). Regarding the run time, according to Lemma 2, preparing the state in Eq.
(12) requires \(\mathcal{O}\left(M\right)\) queries and has a run time of \(\mathcal{O}\left(M\right)\). Therefore, the total number of quantum queries required is \(\mathcal{O}\left(M\right)\), and the total run time is \(\mathcal{O}\left(M\right)\). **Main result.** We present the quantum version of the greedy algorithm (Algorithm 2) for identifying the two-hop neighbors of a visible node \(u\) using quantum access to samples from a ferromagnetic RBM. In this algorithm, we leverage the computation of the empirical influence \(\widehat{I}_{u}(S\cup\{j\})\), which is stored as a bit string in a register of qubits. This allows us to operate on the superposition of all visible nodes \(j\in[n]\setminus(S\cup\{u\})\). The key step in the quantum greedy algorithm is to apply the quantum minimum finding algorithm to determine the maximum value of the empirical influence \(\widehat{I}_{u}(S\cup\{j\})\) among all possible nodes \(j\). By using a threshold \(\eta>0\), we can then identify the neighbors of node \(u\) based on whether the empirical influence surpasses this threshold. **Theorem 4**.: _Let us be given quantum access to samples \(\{x^{i}\in\{\pm 1\}^{n}:i\in[M]\}\) from the observable distribution of an \((\alpha,\beta)\) non-degenerate ferromagnetic RBM with two-hop degree \(d_{2}\). Let \(\eta,k\) be as in Theorem 3. If the number of samples satisfies_ \[M\geq 2^{2k+3}(d_{2}/\eta)^{2}\left(\log(n)+k\log(en/k)\right)\log( 8/\delta), \tag{13}\] _then for every visible node \(u\), Algorithm 2 returns \(\mathcal{N}_{2}(u)\) with success probability at least \(1-\delta\) in time \(\mathcal{O}\left(Mk^{2}+Mk\sqrt{n}\log(k/\delta)\right).\)_ Proof.: The number of samples required has been proved in Ref. [7]; note that the number of samples required differs slightly between the classical algorithm and the quantum version, due to the different success probabilities. Here we analyze the query complexity. Step 3 in the for loop is the most time-consuming step. By using Lemma 1, Lemma 3 and Lemma 2, the number of queries required in step 3 is \(\mathcal{O}\left(Mk+M\sqrt{n}\log(k/\delta)\right).\) Thus, the total run-time of the algorithm is \(\mathcal{O}\left(Mk^{2}+Mk\sqrt{n}\log(k/\delta)\right)\) for \(k\) iterations. We now analyze the success probability. In our setting, the success probability for every quantum maximum finding is \(1-\delta/(2k)\). Over the \(k\) iterations of the loop, the success probability is \((1-\delta/(2k))^{k}\geq 1-\delta/2\). Combining this with the success probability of the original classical algorithm, which we set as \(1-\frac{\delta}{2}\), the total success probability is at least \(1-\delta\) by the union bound. The total run time is then \(\mathcal{O}\left(Mkn\left(k+\sqrt{n}\log(k/\delta)\right)\right)\) for finding the two-hop neighbors for all \(n\) visible nodes. With this information, we can obtain the structure of the underlying graph of the RBM. ## IV Structure Learning for Locally Consistent RBM In Ref. [8], a classical greedy algorithm was introduced for learning the two-hop neighbors of any visible node in locally consistent RBMs with arbitrary external fields. The algorithm maximizes the conditional covariance between pairs of visible nodes, based on the observation that the covariance is positive and bounded away from \(0\) for pairs of visible nodes that are two-hop neighbors (connected to a common hidden node). This property also holds for the ferromagnetic Ising model with arbitrary external field [17].
In this section, we will review the classical greedy algorithm, then propose a quantum version of the algorithm, and demonstrate its improved efficiency. We start by defining a locally consistent RBM, which is an RBM where the interaction weights between each hidden node and the visible nodes are either all non-negative or all non-positive. Formally, a locally consistent RBM can be defined as follows: **Definition 6** (Locally Consistent RBMs).: _An RBM \((J,f,g)\) is locally consistent if for each \(j\in[m]\), we have \(J_{ij}\geq 0\) for all \(i\in[n]\), or \(J_{ij}\leq 0\) for all \(i\in[n]\)._ The conditional covariance is defined as follows. **Definition 7** (Conditional covariance).: _The conditional covariance for visible nodes \(u,v\in[n]\) and a subset \(S\subseteq[n]\setminus\{u,v\}\) is defined as_ \[\text{Cov}(u,v|x_{S}):=\mathbb{E}[X_{u}X_{v}|x_{S}]-\mathbb{E}[X _{u}|x_{S}]\mathbb{E}[X_{v}|x_{S}],\] _where \(x_{S}\) is a shorthand notation for \(X_{S}=x_{S}.\) The average conditional covariance is then defined as_ \[\text{Cov}^{\text{avg}}(u,v|S):=\mathbb{E}_{x_{S}}[\text{Cov}(u,v|x_{S})]. \tag{14}\] Given a number of samples of the visible nodes of an RBM, the empirical average conditional covariance is defined as \[\widehat{\text{Cov}}^{\text{avg}}(u,v|S) = \widehat{\mathbb{E}}_{x_{S}}[\widehat{\text{Cov}}(u,v|x_{S})], \tag{15}\] where \(\widehat{\mathbb{E}}_{x_{S}}\) is the empirical expectation and \(\widehat{\text{Cov}}(u,v|x_{S})\) is the empirical conditional covariance. **Classical greedy algorithm.** In Ref. [8], it has been shown that for any visible node \(u\in[n]\) and a visible node subset \(S\subset[n]\) that does not contain \(u\): if a visible node \(v\in[n]\) that is neither equal to \(u\) nor contained in \(S\) is not a two-hop neighbor of \(u\), the conditional covariance \(\text{Cov}(u,v|x_{S})\) is equal to zero. Conversely, if \(v\) is a two-hop neighbor of \(u\), we have \(\text{Cov}(u,v|x_{S})\geq 2\tau\), where \(\tau\) is a function of \(\alpha\) and \(\beta\) that will be specified later. Based on these observations, the author proposed a greedy algorithm (Algorithm 3) to learn the two-hop neighbors of each node by maximizing the empirical average conditional covariance. The following theorem gives the number of samples required and the run time of the algorithm. **Theorem 5** (Theorem 6 in Ref. [8]).: _Given \(H\) samples of visible nodes \(\{x^{i}\in\{\pm 1\}^{n}\}_{i\in[H]}\) of an \((\alpha,\beta)\)-nondegenerate locally consistent RBM, for \(\delta=\frac{1}{2}e^{-2\beta}\), \(\tau=\alpha^{2}\exp(-12\beta)\) and \(T^{*}=\frac{8}{\tau^{2}}\), if_ \[H\geq\Omega\left((\log(1/\zeta)+T^{*}\log n)\,\frac{2^{2T^{*}}}{ \tau^{2}\delta^{2T^{*}}}\right), \tag{16}\] _the two-hop neighbours \(\mathcal{N}_{2}(u)\) of a visible node \(u\) can be obtained in time \(\mathcal{O}(HnT^{*})\) using Algorithm 3 with success probability at least \(1-\zeta\)._ The proof of this theorem can be found in Ref. [8]. **Quantum greedy algorithm.** Based on Algorithm 3, we propose a quantum version of the algorithm to learn the underlying graph of a locally consistent RBM with arbitrary external fields. We assume that we have quantum access to a number of samples from such an RBM, as defined in Definition 3.
The main idea is to utilize quantum techniques to prepare the empirical average conditional covariance \(\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,v|S)\) in a quantum state with a superposition of all nodes \(v\), and then use a quantum minimum finding algorithm to find the values \(i^{*}\) and \(\mathrm{val}^{*}\) in step 2 of Algorithm 3. Our approach still involves a hybrid algorithm, combining classical and quantum computations. For each node \(u\) and a known set \(S\), we first determine the unique configurations of the visible nodes on that subset \(S\) in the \(H\) samples. For a subset \(S\subset[n]\), there are \(2^{s}\) possible configurations of the visible nodes on that subset. Let \(L_{S}\) denote the set consisting of the unique configurations of \(S\) in the \(H\) samples. There are at most \(|L_{S}|\leq\min\{H,2^{s}\}\) unique configurations, where \(s\) is the size of the subset \(S\). Formally, we define the unique configuration set \(L_{S}\) and, for each \(l\in[|L_{S}|]\), the set \(F(x_{S}^{l})\) of sample indices in \([H]\) whose configuration on \(S\) equals the \(l\)-th element \(x_{S}^{l}\) of \(L_{S}\), as follows: \[L_{S} := \mathrm{Unique}\{x_{S}^{i}:i\in[H]\},\] \[F(x_{S}^{l}) := \{i\mid i\in[H],\ x_{S}^{i}=x_{S}^{l}\}, \tag{17}\] where Unique returns the set of unique elements. For example, Unique\(\{a,b,c,d,d,d,a\}=\{a,b,c,d\}\). **Lemma 4**.: _Given quantum access to \(H\) samples of an RBM as in Definition 3, for a subset \(S\subset[n],\) we can obtain the unique configuration set \(L_{S}\) and the sample index sets \(F(x_{S}^{l})_{l\in[|L_{S}|]}\) defined in Eq. (17) with \(\mathcal{O}\left(Hs\right)\) quantum queries and \(\mathcal{O}\left(Hs\right)\) runtime._ Proof.: To obtain the unique configuration set \(L_{S}\), we first query the quantum access to the \(H\) samples as described in Definition 3. This allows us to obtain the quantum state \[\ket{S}\ket{\bar{0}}\rightarrow\ket{S}\otimes_{i=1}^{H}\ket{x_{S}^{i}}, \tag{18}\] with \(\mathcal{O}(Hs)\) queries. We can then measure the quantum state in the computational basis to obtain the classical representation of the configurations. By examining all \(H\) samples, we can identify the unique configurations and construct the set \(L_{S}\). This process takes \(\mathcal{O}(Hs)\) time. Therefore, we can obtain the unique configuration set \(L_{S}\) and the sample index set \(F(x_{S}^{l})_{l\in[|L_{S}|]}\) with \(\mathcal{O}(Hs)\) quantum queries and \(\mathcal{O}(Hs)\) runtime. Now we demonstrate how to compute the empirical average conditional covariance. Let \(x_{S}^{l}\) denote the \(l\)-th configuration in the set \(L_{S}\) defined in Eq. (17). The empirical average conditional covariance in Eq. (15) can be expressed as \[\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,v|S)=\frac{1}{H}\left(\sum_{i=1}^{H}z _{u,v}^{i}-\sum_{l=1}^{|L_{S}|}\frac{a_{u,x_{S}^{l}}a_{v,x_{S}^{l}}}{|F(x_{S}^ {l})|}\right), \tag{19}\] where \(z_{u,v}^{i}:=x_{u}^{i}x_{v}^{i}\) represents the product of the configurations of \(u\) and \(v\) in the \(i\)-th sample, and \[a_{j,x_{S}^{l}}:=\sum_{i\in F(x_{S}^{l})}x_{j}^{i},\ \mathrm{for}\ j=u,v \tag{20}\] denotes the sum of the configurations of \(u\) (\(v\)) in the samples whose index is in \(F(x_{S}^{l}).\) The complete derivation of Eq. (19) is provided in Appendix B.1. We now show that the empirical average conditional covariance in Eq. (19) can be represented in a quantum state.
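As a classical reference point for the quantum encoding that follows, Eq. (19) can be evaluated directly from the sets of Eq. (17); a minimal Python sketch of ours:

```
import numpy as np
from collections import defaultdict

def avg_cond_cov(samples, u, v, S):
    """Empirical average conditional covariance of Eqs. (15)/(19).

    samples: (H, n) array with entries in {+1, -1}. The samples are
    grouped by their configuration on S (the sets L_S and F(x_S^l) of
    Eq. (17)); each group contributes its conditional covariance,
    weighted by the empirical probability of that configuration.
    """
    H = len(samples)
    cols = sorted(S)
    groups = defaultdict(list)            # x_S^l  ->  F(x_S^l)
    for i in range(H):
        groups[tuple(samples[i, cols])].append(i)
    total = 0.0
    for idx in groups.values():
        w = len(idx) / H                  # empirical P(X_S = x_S^l)
        xu, xv = samples[idx, u], samples[idx, v]
        total += w * (np.mean(xu * xv) - np.mean(xu) * np.mean(xv))
    return total
```

Expanding the weighted sum reproduces the two terms of Eq. (19), with the group-wise sums playing the role of \(a_{u,x_{S}^{l}}\) and \(a_{v,x_{S}^{l}}\).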
From Lemma 4, we know that obtaining the unique configuration set requires \(\mathcal{O}\left(Hs\right)\) quantum queries and running time. As it only needs to be calculated once for the set \(S\), we assume we have already obtained the different configurations of the \(H\) samples for the set \(S\). We proceed with the representation. **Lemma 5**.: _Given quantum query access to \(H\) samples of an RBM as defined in Definition 3, nodes \(u,v\in[n],\) a set \(S\subset[n]\setminus\{u,v\}\), the unique configuration set \(L_{S}\), and the sample index subsets \(F(x_{S}^{l})\) as defined in Eq. (17), there exists a unitary \(U_{cov}\) which performs the transformation_ \[\ket{v}\ket{u}\ket{\bar{0}}\rightarrow\ket{v}\ket{u}\ket{\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,v|S)}, \tag{21}\] _in \(\mathcal{O}\left(H\right)\) quantum queries and \(\mathcal{O}\left(H\right)\) running time._ The detailed proof is provided in Appendix B.2. **Main result.** We introduce the quantum version of Algorithm 3 in Algorithm 4, designed to identify the two-hop neighbors of a locally consistent RBM. According to Lemma 5, we observe that the empirical average conditional covariance \(\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,v|S)\) can be encoded in a quantum state with a superposition of all visible nodes \(v\) that are neither equal to \(u\) nor contained in the set \(S\). Then, by utilizing the quantum minimum finding algorithm, we can determine a node among all nodes \(v\) which maximizes the empirical average conditional covariance. By comparing this value with the threshold \(\tau\), we can determine whether \(v\) is a two-hop neighbor of \(u\). **Theorem 6**.: _Given quantum access to \(H\) samples of a locally consistent RBM according to Definition 3, if the number of samples satisfies Eq. (16), we can find the two-hop neighbors of any visible node \(u\) with success probability at least \(1-\zeta\), using \(\mathcal{O}\left(HT^{*2}+H\sqrt{n}T^{*}\log(T^{*}/\zeta)\right)\) runtime._ Proof.: We observe that the sample complexity remains the same as in Theorem 5. Next, we analyze the query complexity and runtime of Algorithm 4. The most time-consuming steps are the loop steps 2-9, particularly steps 2 and 3. As shown in Lemma 4, step 2 requires \(\mathcal{O}\left(Hs\right)\) quantum queries and \(\mathcal{O}\left(Hs\right)\) run time. For step 3, as discussed earlier (Theorem 2), we can employ the quantum minimum finding algorithm [10] to find the maximum element \(\operatorname{val}^{*}\) and the corresponding visible node \(i^{*}\) from the set of elements \(\{\,\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,v|S)\,|\,v\in[n]\setminus(S\cup\{u\})\,\}\) given as quantum states. In particular, \(\operatorname{val}^{*}\) and \(i^{*}\) can be found by performing \(\mathcal{O}\left(\sqrt{n}\log(T^{*}/\zeta)\right)\) applications of the unitary \(U_{cov}\) of Lemma 5, with success probability \(1-\frac{\zeta}{2T^{*}}\). The number of quantum queries required in step 3 is then \(\widetilde{\mathcal{O}}\left(H\sqrt{n}\log(T^{*}/\zeta)\right)\), as each \(U_{cov}\) requires \(\mathcal{O}\left(H\right)\) quantum queries. Combining steps 2 and 3, the total number of quantum queries and runtime is \(\mathcal{O}\left(Hs+H\sqrt{n}\log(T^{*}/\zeta)\right)\). Over \(T^{*}\) iterations, the runtime becomes \(\mathcal{O}\left(HT^{*2}+H\sqrt{n}T^{*}\log(T^{*}/\zeta)\right)\) as \(s\leq T^{*}\). We now turn to analyze the success probability.
For each quantum maximum finding, the success probability is \(1-\zeta/(2T^{*})\), and over \(T^{*}\) iterations, the success probability becomes \((1-\zeta/(2T^{*}))^{T^{*}}\geq 1-\zeta/2\). Combining this with the success probability of the original classical algorithm, which we set as \(1-\zeta/2\), this leads to a total success probability \((1-\zeta/2)^{2}\geq 1-\zeta\). **Input:** Visible node \(u\), quantum query access to samples \(\{x^{i}\in\{\pm 1\}^{n}\}_{i\in[H]}\), threshold \(\tau\). ``` 1: Set \(S:=\emptyset\). 2: Find the set \(L_{S}\) and the sets \(F(x_{S}^{l})\) for \(l\in[|L_{S}|]\) defined in Eq. (17). 3: Find \(i^{*}\leftarrow\operatorname{arg\,max}_{v}\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,v|S)\) by quantum maximum finding, applying the unitary \(U_{cov}\) of Lemma 5, with success probability \(1-\frac{\zeta}{2T^{*}}\). 4: if \(\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,i^{*}|S)\geq\tau\) then 5: \(S=S\cup\{i^{*}\}\) and go to step 2. 6: else 7: go to step 9. 8: end if 9: Pruning step: for each \(v\in S\), if \(\widehat{\mathrm{Cov}}^{\mathrm{avg}}(u,v|S)<\tau\), remove \(v\) from \(S\). ``` **Output:** \(S\) as \(\hat{\mathcal{N}}_{2}(u)\). Applying Algorithm 4 to all \(n\) visible nodes allows us to learn the underlying graph of the RBM. The total run time is then \(\mathcal{O}\left(nHT^{*}\left(T^{*}+\sqrt{n}\log(T^{*}/\zeta)\right)\right)\). ## V Discussion and Conclusion In this work, we have presented quantum algorithms for learning the two-hop neighbors of ferromagnetic RBMs and locally consistent RBMs with arbitrary external fields. Our quantum algorithms offer a polynomial speedup over their classical counterparts in terms of the number of visible nodes, while maintaining the same sample complexity. By exploiting quantum query access to the RBM samples, we can efficiently obtain the unique configuration set and compute the empirical average conditional covariance. This enables a speedup in the identification of visible nodes that have the highest covariance with a given node, indicating their two-hop neighbor relationship. Once the structure of the underlying graph is obtained, further analysis and modeling can be performed. For example, we can apply existing algorithms such as the Sparsitron algorithm [18] or GLMtron [19] to learn the parameters of the RBMs. These algorithms take advantage of the sparsity of the graph structure to achieve efficient parameter estimation. Structure learning is a fundamental problem in machine learning, and our quantum algorithms offer the potential for speedup in other graph-based learning tasks as well. This theoretical research is inspired by numerous applications where learning the underlying structure of data is crucial, such as in social network analysis, biological network inference, and recommendation systems. Exploring provable learning using quantum algorithms shows the prospects and limits of learning in a theoretical setting, which translates into insights for practical settings as well. In conclusion, our work demonstrates the advantages of quantum computing in accelerating the learning of two-hop neighbors in RBMs. The quantum algorithms we have developed provide a promising avenue for structure learning tasks and open up new possibilities for efficient data analysis. We anticipate further advancements in leveraging quantum techniques for graph-based learning and other related problems.
## VI Acknowledgements The author LZ was supported by the National Natural Science Foundation of China (No.12204386), and the Scientific and Technological Innovation Project (No. 2682023CX084). Additionally, this research is supported by the National Research Foundation, Singapore under its CQT Bridging Grant. AA acknowledges an internship visit at CQT.
2309.13821
An ALMA-resolved view of 7000 au Protostellar Gas Ring around the Class I source CrA-IRS 2 as a possible sign of magnetic flux advection
Transferring a significant fraction of the magnetic flux from a dense cloud core is essential in the star formation process. A ring-like structure produced by magnetic flux loss has been predicted theoretically, but no observational identification has been presented. We have performed ALMA observations of the Class I protostar IRS 2 in the Corona Australis star-forming region and resolved a distinctive gas ring in the C$^{18}$O ($J$ = 2-1) line emission. The center of this gas ring is $\sim$5,000 au away from the protostar, with a diameter of $\sim$7,000 au. The radial velocity of the gas is $\lesssim1$ km s$^{-1}$ blueshifted from that of the protostar, with a possible expanding feature judged from the velocity-field (moment 1) map and position-velocity diagram. These features are either observationally new or have been discovered but not discussed in depth because they are difficult to explain by well-studied protostellar phenomena such as molecular outflows and accretion streamers. A plausible interpretation is a magnetic wall created by the advection of magnetic flux which is theoretically expected in the Class 0/I phase during star formation as a removal mechanism of magnetic flux. Similar structures reported in the other young stellar sources could likely be candidates formed by the same mechanism, encouraging us to revisit the issue of magnetic flux transport in the early stages of star formation from an observational perspective.
Kazuki Tokuda, Naofumi Fukaya, Kengo Tachihara, Mitsuki Omura, Naoto Harada, Shingo Nozaki, Ayumu Shoshi, Masahiro N. Machida
2023-09-25T02:03:37Z
http://arxiv.org/abs/2309.13821v2
An ALMA-resolved view of 7000 au Protostellar Gas Ring around the Class I source CrA-IRS 2 as a possible sign of magnetic flux advection ###### Abstract Transferring a significant fraction of the magnetic flux from a dense cloud core is essential in the star formation process. A ring-like structure produced by magnetic flux loss has been predicted theoretically, but no observational identification has been presented. We have performed ALMA observations of the Class I protostar IRS 2 in the Corona Australis star-forming region and resolved a distinctive gas ring in the C\({}^{18}\)O (\(J\) = 2-1) line emission. The center of this gas ring is \(\sim\)5,000 au away from the protostar, with a diameter of \(\sim\)7,000 au. The radial velocity of the gas is \(\lesssim 1\) km s\({}^{-1}\) blueshifted from that of the protostar, with a possible expanding feature judged from the velocity-field (moment 1) map and position-velocity diagram. These features are either observationally new or have been discovered but not discussed in depth because they are difficult to explain by well-studied protostellar phenomena such as molecular outflows and accretion streamers. A plausible interpretation is a magnetic wall created by the advection of magnetic flux which is theoretically expected in the Class 0/I phase during star formation as a removal mechanism of magnetic flux. Similar structures reported in the other young stellar sources could likely be candidates formed by the same mechanism, encouraging us to revisit the issue of magnetic flux transport in the early stages of star formation from an observational perspective. Star formation (1569); Protostars (1302); Molecular clouds (1072); Interstellar medium (847); Circumstellar envelopes (237); Magnetic fields (994) ## 1 Introduction The magnetic flux problem has long been a critical issue in star formation (e.g., Nakano, 1984). If the magnetic fields observed in typical prestellar cores (Troland & Crutcher, 2008) were dragged into a young stellar object, the field strength would reach tens of millions of Gauss. This simple estimation is more than three orders of magnitude higher than the observed values, which are typically in the kilo-Gauss range (Johns-Krull et al., 2009). Consequently, the majority of the magnetic flux of the prestellar core must be removed from the dense material infalling into the star, but when and how the removal occurs remains an unresolved issue. A possible mechanism to remove magnetic flux is microscopic magnetic diffusion, such as ohmic dissipation and ambipolar diffusion, where charged particles frozen into the magnetic field drift through neutral particles, reducing the magnetic flux carried into the star and circumstellar disk (e.g., Dapp et al., 2012; Tomida et al., 2015). Although attempts have been made to observe this effect (Yen et al., 2018, 2023), it remains a challenging study. The ambipolar diffusion and ohmic dissipation decouple the neutral gas from the magnetic field in a steady-state manner over relatively long time scales. On the other hand, the magnetic interchange instability causes a more dynamic direct removal of magnetic flux from the circumstellar disk into its surroundings (Parker, 1979; Kaisig et al., 1992; Lubow & Spruit, 1995; Stehle & Spruit, 2001).
During this time, outside the disk, a large amount of magnetic flux forms a low-gas density cavity that is surrounded by a ring-shaped dense gas cloud (Zhao et al., 2011; Machida et al., 2014), a structure sometimes referred to as a "magnetic wall". These structures generally form on scales ranging from several tens to over 1,000 au from the protostar (Zhao et al., 2011; Matsumoto et al., 2017). The magnetic field in the circumstellar disk can be transported to the edge of the disk due to ohmic dissipation and ambipolar diffusion, and the magnetic field strength (or magnetic flux) is enhanced at the edge. The magnetic interchange instability can occur when the ratio of magnetic strength to mass loading (or surface density), \(B/\Sigma\), increases in the direction of gravity (Lubow & Spruit, 1995). This condition tends to be realized around the edge of the circumstellar disk due to magnetic diffusion within the disk (Machida & Basu, 2020). After magnetic interchange instability occurs, the magnetic flux is rapidly leaked out from the disk when the magnetic pressure exceeds the ram pressure of the infalling envelope (Krasnopolsky et al., 2012). While the occurrence of the magnetic interchange instability has been frequently demonstrated in numerical studies of protostar formation (e.g., Joos et al., 2012; Li et al., 2013; Matsumoto et al., 2017; Vaytet et al., 2018; Machida & Basu, 2020; Lee et al., 2021), there are also instances in simulations where the interchange instability does not arise due to ambipolar diffusion (Masson et al., 2016; Xu & Kunz, 2021a,b). Theoretically, the manifestation of this instability in real protostellar systems remains ambiguous, and the lack of observational counterparts described in the next paragraph leads to it not being seriously considered until recently. From an observational perspective, no structure produced by interchange instability has been reported in the literature. The advection of magnetic flux forms the ring-like or arc-like structure almost simultaneously with the protostar formation, and the size scales are very small, a few tens of au in the early stages, but grow into a larger structure on the order of the sound speed or free-fall time (e.g., Zhao et al., 2011; Matsumoto et al., 2017). Consequently, these scales could have been adequately detected with a spatial resolution available since the early scientific operation of ALMA toward nearby star-forming regions. Nevertheless, several reasons have prevented a successful identification. An embedded nature in the Class 0/I envelope makes it difficult to discriminate the surrounding infalling and outflowing gas. ALMA observations discovered complex arc-like envelopes around the Class 0 protostar in the Taurus dense core MC27/L1521F (Tokuda et al., 2014, 2016, 2018), but the actual origin is still under debate. Only a few subsequent theoretical studies (Matsumoto et al., 2017; Machida & Basu, 2020) discussed interchange instability as one of the possible physical mechanisms to reproduce the complex arc-like envelopes in MC27/L1521F. More recently, similar arc-like structures have been predominantly interpreted as accretion streamers (e.g., Pineda et al., 2020). Although the interchange-instability and streamer interpretations are not mutually exclusive, the simpler physical phenomenon of the latter reduces the opportunity to put the former on the table for discussion, as an "observer" bias.
In this Letter, we report on ALMA observations toward the protostar IRS 2 in the Corona Australis (CrA) star-forming region, whose distance is 149 pc (Galli et al., 2020). Forbrich & Preibisch (2007) classified it as a Class I protostar with a spectral type K2. Sicilia-Aguilar et al. (2013) performed a spectral energy distribution analysis of the source using multi-wavelength data. They determined a bolometric luminosity of \(\sim\)0.8 \(L_{\odot}\) and an envelope temperature of \(\sim\)25 K. Interestingly enough, their Herschel/PACS 100 and 160 \(\mu\)m images show a bubble or shell-like structure with a diameter of \(\sim\)5,000 au toward the southwest direction from the protostar (see Figure 4 in Sicilia-Aguilar et al., 2013). Our ALMA data provide a detailed molecular gas view of the previously reported peculiar feature and an implication of its formation mechanism, magnetic flux advection due to interchange instability, which may be an important factor in resolving the magnetic flux problem in the star formation process. ## 2 Observations and Data Reduction We performed ALMA Cycle 8 observations toward the CrA star-forming region (P.I., K. Tachihara; #2021.1.00715.S) using the 12 m array C43-1 configuration with the Band 6 receivers. A mosaic of 137 pointings provided a field of view of \(\sim\)180\(\arcsec\)\(\times\) 140\(\arcsec\) (P.A. = 45 deg.) at a central coordinate of (\(\alpha_{\rm J2000.0}\), \(\delta_{\rm J2000.0}\)) = (19\({}^{\rm h}\)01\({}^{\rm m}\)39\(\fs\)4, \(-\)36\(\arcdeg\)58\(\arcmin\)16\(\farcs\)0). The main tracer used in this Letter is C\({}^{18}\)O (\(J\) = 2-1). We additionally used SO (\(N\), \(J\) = 5,6-4,5) and 1.3 mm continuum emission to trace the gas and dust in the protostar's vicinity. The C\({}^{18}\)O and SO bandwidths were 59 MHz with 480 channels at central sky frequencies of 219.562 GHz and 219.951 GHz, respectively. For the 1.3 mm continuum, we used two spectral windows centered at 232.999 GHz and 216.354 GHz. The bandwidth and channel number were 1.875 GHz and 1920, respectively, in each. We used the Common Astronomy Software Application package (CASA Team et al., 2022), v.6.4.4-31, in data reduction and imaging. We employed the tclean algorithm with the multi-scale deconvolver in the analysis. We applied the natural weighting. The clean mask regions were automatically determined by the auto-multithresh scheme (Kepley et al., 2020). The resultant beam size and r.m.s. noise level of the C\({}^{18}\)O and SO data are 2\(\farcs\)1 \(\times\) 1\(\farcs\)3 (P.A. = \(-\)82 deg.) and \(\sim\)0.2 K at a velocity channel of 0.2 km s\({}^{-1}\), respectively. For the 1.3 mm continuum data, the beam size and sensitivity are 2\(\farcs\)0 \(\times\) 1\(\farcs\)3 (P.A. = \(-\)78 deg.) and \(\sim\)0.3 mJy beam\({}^{-1}\), respectively. ## 3 Results Figure 1 shows the spatial and velocity distributions of the 1.3 mm continuum and C\({}^{18}\)O (2-1) emission toward IRS 2. The position of the continuum source corresponds to that of the previously identified protostar, tracing thermal dust emission arising from the circumstellar disk. A large ring-like structure is outstanding in the southwest direction of the protostar. In particular, because a continuous distribution of equal intensity of \(\sim\)1.5 K km s\({}^{-1}\) follows the ring's southeast edge, we determined the guideline for the ring structure based on this boundary by eye, as shown by the white dotted circle with a radius of 25\({}^{\prime\prime}\) (\(\sim\)3,700 au).
The projected separation is \(\sim\)35\({}^{\prime\prime}\) (\(\sim\)5,000 au) between the protostar and the ring's center. Although the ring-like structure in the moment 0 and 1 maps is somewhat discontinuous in the northwest direction, the channel maps (Figure 2) show that C\({}^{18}\)O emission is visible in at least one velocity channel along the dotted-circle guideline. We regard it as a quasi-complete ring with a diameter of \(\sim\)7,000 au around the protostellar source. This ring was recognized as a continuous component with the protostar in the Herschel/PACS observations (Sicilia-Aguilar et al., 2013). In addition to the C\({}^{18}\)O data, we examined the SO emission in our ALMA dataset and found a distinct emission surrounding IRS 2 within a velocity range of 6.2-6.4 km s\({}^{-1}\) (Figure 2), closely matching the systemic velocity. Although C\({}^{18}\)O emission around the systemic velocity is not visible (see Figure 3a), possibly due to the optical thickness, combining it with the more optically thin SO distribution suggests that the ring structure and the protostar may constitute the same continuous component. Based on the C\({}^{18}\)O data, we obtained an average H\({}_{2}\) column density of \(\sim\) 6 \(\times\)10\({}^{21}\) cm\({}^{-2}\) along the ring, assuming the local thermodynamical equilibrium (LTE) condition with a constant gas kinematic temperature of 25 K (Sicilia-Aguilar et al., 2013) and applying a C\({}^{18}\)O relative abundance, [H\({}_{2}\)]/[C\({}^{18}\)O], of 5.9 \(\times\)10\({}^{6}\), which is a typical value in the solar neighborhood (Frerking et al., 1982). Supposing a ring thickness of \(\sim\)1,000 au (100 au), the average H\({}_{2}\) volume density is \(\sim\)4 \(\times\)10\({}^{5}\) cm\({}^{-3}\) (4 \(\times\)10\({}^{6}\) cm\({}^{-3}\)). The total mass of the ring is roughly estimated to be \(\sim\)0.05 \(M_{\odot}\) with the integration of the column density map. Figure 1: (Left panel) The color-scale image shows the velocity-integrated intensity (moment 0) map of C\({}^{18}\)O (2–1) with a velocity range of 5–7 km s\({}^{-1}\). The white contours show the 1.3 mm continuum image with contour levels of 0.05, 0.15, and 0.25 Jy beam\({}^{-1}\). The synthesized beam size, 2\(\farcs\)1 \(\times\) 1\(\farcs\)3, is given by the white ellipse at the lower-left corner. The white dashed circle highlights the ring structure with a radius of 25\({}^{\prime\prime}\) (\(\sim\)3,700 au) at a central coordinate of (\(\alpha_{\rm J2000.0}\), \(\delta_{\rm J2000.0}\)) = (19\({}^{\rm h}\)01\({}^{\rm m}\)39\(\fs\)13, \(-\)36\(\arcdeg\)58\(\arcmin\)47\(\farcs\)3). (Right panel) The color-scale image shows the velocity-field (moment 1) map toward IRS 2. The contours and circle are the same as in the left panel. To investigate the dynamics of the ring, we examine its velocity structure. In Figure 1b, there is a discernible blueshifted trend in the ring's inner edge and a subsequent redshift towards the outer edge. The channel maps, presented in Figure 2, further elucidate this pattern. In the upper three panels, gas components more blueshifted than 5.7 km s\({}^{-1}\) are located inside the ring. The C\({}^{18}\)O emission is almost along the white dotted lines in the middle three panels (5.9-6.3 km s\({}^{-1}\)). In the bottom panels, with velocities of 6.5-6.7 km s\({}^{-1}\), the primary emission extends outside the circle.
Notably, around 6.1 km s\({}^{-1}\) near the center, there is a prominent blob with an intensity of \(\sim\)3 K spanning more than 1,000 au. Given its connection to the structure stretching northward, this component might be less a part of the ring and more likely an overlapping different cloud component along our line of sight. A noteworthy characteristic of this ring structure is that it is overall blueshifted compared to the protostellar centroid velocity. To determine the systemic velocity of the protostar, we extracted the C\({}^{18}\)O spectrum at the 1.3 mm continuum disk position. Figure 3a exhibits a dual-horned shape, indicative of a rotating motion (e.g., Murillo et al., 2013; Tokuda et al., 2017). The detailed velocity structure in channel maps or velocity diagrams of the disk itself is not visualized in this Letter, but the direction of rotation is northwest-southeast, probably tracing the Keplerian motion. The dip is likely proximate to the protostar's systemic velocity (e.g., Tokuda et al., 2017), estimated at 6.4 km s\({}^{-1}\). As shown in Figure 2, components redshifted more than 6.9 km s\({}^{-1}\) are scarcely observed in the ring structure. Figure 3b shows a Position-Velocity (PV) diagram across the ring structure. The yellow curve highlighted in the diagram likely shows an expansion-like behavior that resembles expanding molecular or atomic gas components around H ii regions and supernova remnants (e.g., Zhu et al., 2008; Dawson et al., 2008; Sano et al., 2018, 2021). Although the velocity distribution around driving sources depends on the initial gas configuration, in a scenario where the gas sweeps through a uniform medium, it would exhibit velocity structures symmetrical with respect to the central velocity. However, the IRS 2 ring structure only displays blueshifted components from the protostellar systemic velocity. This suggests that terms like expanding "bubble" or "shell" might not be the most fitting descriptions. Rather, referring to it as a "ring" with an asymmetric velocity is more appropriate in the context. To summarize the observed structures, we identified the high-density (\(\sim\)10\({}^{5}\) cm\({}^{-3}\)) gas ring with an expanding, blueshifted velocity gradient, approximately 7,000 au in diameter, positioned roughly 5,000 au from the protostar's center. Such characteristics appear to be largely unprecedented (or at least not clearly discussed) in recent observations of young protostellar sources. ## 4 Discussion We first discuss the nature of the ring structure around the Class I protostar IRS 2 characterized in the present study. The CrA molecular cloud is a complex cluster-forming region with rich hierarchical molecular structures as a whole, and thus it cannot be completely ruled out that the ring structure is just a coincidence of unrelated line-of-sight components along with the protostellar system as part of the natal cloud dynamics. Our forthcoming paper will discuss this possibility further (K. Tachihara et al., in prep.).
The presence of the velocity gradient across the ring edge (Figures 1b and 3b), which can be interpreted as an expanding motion, is reminiscent of an energy-driving source at the center, but no corresponding object has been reported so far (see also the next paragraph). The proximity of the observed velocity to that of the protostar, coupled with the separation of only a few thousand au, strongly suggests that the ring structure is directly related to IRS 2. We proceed with our discussion, hereafter considering the ring as a structure or phenomenon associated with protostar formation. Recent ALMA observations of protostellar systems have frequently reported what appear to be crescent or arc structures that are not pure rings but parts of them, with different origins in each case. Fernandez-Lopez et al. (2020) and Harada et al. (2023) have identified ring- or crescent-shaped molecular outflows that a pole-on configuration can explain. Sai et al. (2023) detected an arc-like structure around the central protostar in the Ced110 IRS 4 system. The authors' primary interpretation is that it is part of the outflowing gas. In the CrA IRS 2 system, the observed relative velocity of the C\({}^{18}\)O ring with respect to the protostar is too small for an outflow interpretation, and we could not find any high-velocity component exceeding \(\sim\)5 km s\({}^{-1}\). Even if there were a hidden protostar at the center of the ring driving an outflow at the currently observed lower-limit velocity of \(\lesssim\)1 km s\({}^{-1}\) (see Figure 3b), its outflow force would be on the order of \(\sim\)10\({}^{-6}\) \(M_{\odot}\) km s\({}^{-1}\) yr\({}^{-1}\), based on the ring mass of \(\sim\)0.05 \(M_{\odot}\) and the radius of \(\sim\)3,500 au (see Sect. 3). Such an outflow-driving source should have a bolometric luminosity above \(\sim\)0.1–1 \(L_{\odot}\) (Wu et al., 2004), which should be detectable with currently available infrared surveys, such as Spitzer. The existence of arc-like structures interpreted as accretion streamers is garnering increasing attention (e.g., Pineda et al., 2020). In the IRS 2 system, the SO component connected to the protostar is observed only around the systemic velocity (see Figure 2). This characteristic differs from other potential streamers, whose velocity structure exhibits an accelerating velocity toward the protostar's vicinity (e.g., Harada et al., 2023; Kido et al., 2023). In summary, the ring found in this IRS 2 system does not resemble structures or phenomena well known from previous observations of protostellar envelopes and is either a newly discovered feature or one that has not been discussed in depth. We focus on the magnetic wall growth scenario induced by interchange instability (see Sect. 1), reproduced in magnetohydrodynamic calculations, as the mechanism generating the ring-like structure in the IRS 2 system. As illustrated in Figure 3 of Stehle & Spruit (2001), the interchange instability develops with a substantial azimuthal wave number, leading to the formation of multiple rings or holes in specific (off-center) directions. Although simulations tend to satisfy the conditions for repeatable interchange instability at the outer disk edge due to magnetic field dissipation within the disk, these conditions may not always be satisfied.
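The outflow-force estimate in the paragraph above follows from simple dynamical-time arithmetic, \(F\approx Mv/t_{\rm dyn}\) with \(t_{\rm dyn}=R/v\). A minimal sketch, taking the ring mass and radius from the text and treating the \(\lesssim\)1 km s\({}^{-1}\) velocity as an assumed upper limit, reproduces the quoted order of magnitude:

```python
AU_KM = 1.496e8     # 1 au in km
YR_S = 3.156e7      # 1 yr in s

M_ring = 0.05       # ring mass [Msun] (Sect. 3)
v_out = 1.0         # assumed outflow velocity: the ~1 km/s observed upper limit
R_au = 3500.0       # ring radius [au]

t_dyn = R_au * AU_KM / v_out / YR_S   # dynamical time R/v in years
F_out = M_ring * v_out / t_dyn        # outflow force M*v/t_dyn
print(f"t_dyn ~ {t_dyn:.1e} yr")                  # ~1.7e4 yr
print(f"F_out ~ {F_out:.1e} Msun km s^-1 yr^-1")  # ~3e-6, i.e. order 1e-6
```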
The model in Machida & Basu (2020) appears to form two rings (see their Figure 5) from a single magnetic flux advection event because they imposed an initially axisymmetric density distribution. If the surrounding environment exhibits a non-uniform gas density around the disk, the advection direction of the magnetic flux is expected to be determined by the anisotropic ram pressure (Krasnopolsky et al., 2012; Matsumoto et al., 2017), resulting in the formation of a single ring (Zhao et al., 2011). In this case, magnetic flux leaks from the region where the ram pressure is weakest outside the disk. Figure 3: (a) An average C\({}^{18}\)O spectrum toward IRS 2 over the region where the 1.3 mm continuum emission is larger than 0.05 Jy beam\({}^{-1}\). The red dashed line represents the systemic velocity of the protostar, 6.4 km s\({}^{-1}\), judging from the dip velocity. (b) A C\({}^{18}\)O position-velocity (PV) diagram across the ring structure. We extracted the spectral cube along a rectangle with a length of 70\({}^{\prime\prime}\) and a width of 55\({}^{\prime\prime}\) centered on the same ring-center coordinate as shown in Figure 1. The y-axis shows the declination offset from the center. The cyan dotted lines along the horizontal axis denote the ring radius shown in Figure 1. The white dotted line represents the systemic velocity defined in panel (a). From an observational perspective, the primary filament in the CrA region is located northeast of IRS 2 (Sicilia-Aguilar et al., 2013). This location is opposite the ring, suggesting the presence of an inhomogeneous gas distribution. Small-scale (\(\sim\)1,000 au) gas density inhomogeneities prior to star formation are indeed observed in the Taurus region, where isolated dense cores are clearly identified (Tokuda et al., 2020). Therefore, the possibility of single-ring formation by interchange instability cannot be excluded. The magnetic-wall ring formed by interchange instability is created almost simultaneously with protostar formation on a size scale of a few tens of au (e.g., Joos et al., 2012; Machida et al., 2014; Matsumoto et al., 2017; Machida and Basu, 2020). The advection of the magnetic flux produces a gas cavity that expands until the magnetic pressure balances the ram pressure of the infalling gas, where a magnetic wall (high-density gas region) forms. The expansion time scale of the ring is roughly determined by the sound speed, or is on the order of the free-fall time (Zhao et al., 2011; Matsumoto et al., 2017). For example, in the IRS 2 system, if we assume that the ring expansion started immediately after protostar formation at a constant speed of \(\sim\)0.3 km s\({}^{-1}\), which is the sound speed at a temperature of 25 K (Sicilia-Aguilar et al., 2013) and consistent with the observed relative velocity with respect to the protostar (Figures 1b and 3b), it would take on the order of \(\sim\)10\({}^{5}\) years to reach the most distant ring edge from the protostar, \(\sim\)9,000 au. Considering that the statistically derived timescale for Class 0 is \(\sim\)10\({}^{5}\) years (e.g., Evans et al., 2009; Maury et al., 2011), the evolutionary stage of IRS 2 is consistent with the ring expansion estimate above if we assume that the ring and protostar formed simultaneously.
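The \(\sim\)10\({}^{5}\) yr expansion timescale quoted above is straightforward unit arithmetic; a minimal sketch, assuming constant expansion at the 25 K sound speed:

```python
AU_KM = 1.496e8   # 1 au in km
YR_S = 3.156e7    # 1 yr in s

c_s = 0.3         # sound speed at ~25 K [km/s]
d_edge = 9000.0   # most distant ring edge from the protostar [au]

t_exp = d_edge * AU_KM / c_s / YR_S
print(f"t_exp ~ {t_exp:.1e} yr")  # ~1.4e5 yr, comparable to the Class 0 lifetime
```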
While theoretical calculations suggest that rings could be deformed by interactions with the infalling and rotating envelope (e.g., Figure 6 of Zhao et al., 2011), the interchange instability-driven structure indicated in Figure 2 of Zhao et al. (2011) appears fairly circular, as in a non-rotating collapse. Although prestellar cores are generally thought to be rotating in their initial conditions, it is conceivable that when the magnetic field is strong enough to induce interchange instability, magnetic braking can suppress rotation, potentially rendering the outcome similar to cases without rotation. As mentioned in the previous section, the primary filament is in the opposite direction of the ring. Because the protostar is located at the edge of the dense parental cloud, it is natural that flux leaks into the low-density gas region and creates a cavity structure in the same direction without significant perturbation from the infalling gas. Based on the current evidence, we conclude that the ALMA-resolved gas distribution of the IRS 2 ring is likely a product of the advection of magnetic flux due to the interchange instability. Note that additional theoretical calculations will be necessary in the future to replicate the phenomena discussed here and to verify which structures develop under various initial conditions more specialized to the situation of CrA IRS 2. The structure discovered here may share some characteristics with previously reported protostellar envelopes. For instance, arc-like structures with a size scale of \(\sim\)2,000 au were discovered toward MC27/L1521F (see Sect. 1). The arc's column density and velocity dispersion (see Table 2 in Tokuda et al., 2018) are also consistent with those in the IRS 2 ring. The authors' interpretation is that the complex arcs are formed by turbulent gas motion or dynamical interaction among multiple protostars and/or gas condensations (see also Matsumoto et al., 2015). There, the incomplete distribution of the ring prevented a confident observational discussion of the interchange instability. Such imperfect ring structures are also found in other Class 0 objects, such as VLA 1623 (Mercimek et al., 2023) and IRAS 16293-2422 (Murillo et al., 2022). The observational examples cited here bear some resemblance to the features of the IRS 2 gas ring, making it worthwhile to revisit their origin, although individual detailed studies are needed. If the interchange-instability-driven ring expands as a function of time, its density likely decreases, making it more readily detectable with low-density tracers such as \({}^{12}\)CO. Possibly, the \({}^{12}\)CO filamentary gas observed toward B59-BHB2007 (Alves et al., 2020) might correspond to the diffuse remnant of such a ring. Identifying rings of interchange-instability origin likely becomes more challenging as the structures mix with the ambient medium in later evolutionary stages (Zhao et al., 2011). Nevertheless, we propose that the potential to deepen our understanding of the protostellar evolution process may be hidden in the interaction between ring structures and the surrounding envelope, which creates complex structures. The presence or absence of these rings may be vital in exploring the magnetic flux problem of star formation, necessitating a dual approach from both observational and theoretical aspects.
If we wish to prove the validity of interchange instability as the ring formation mechanism in CrA IRS 2, a stronger magnetic field must be observed within the hole. If the C\({}^{18}\)O-traced ring is also detectable in lines with strong Zeeman splitting, such as CN and CCS, it may be worth attempting polarization observations to measure the magnetic field strength along the line of sight. Unfortunately, the Stokes \(I\), i.e., total line strength, is also expected to be weak in the hole, preventing a plausible detection of polarized emission with the current capabilities of ALMA. If the goal is to detect the magnetic field structure resulting from the interchange instability, a strategic approach may be to target regions with newly formed rings that have not yet expanded much and that have high column densities traced by the thermal dust continuum. A higher polarization fraction in the dust continuum emission is indeed observed in arc-like protostellar envelopes (e.g., Cox et al., 2018; Takahashi et al., 2019), suggesting the presence of magnetic walls. However, the specific polarization pattern could vary depending on the evolutionary stage and viewing angle. It would be prudent to perform synthetic observations based on current numerical studies to understand how interchange instability-driven rings would be observed. Future work revealing the magnetic field could further clarify arc and ring characteristics, enabling distinctions from features of different physical origin, such as accretion streamers. ## 5 Summary We presented ALMA observations of the Class I protostar IRS 2 in the CrA star-forming region at a spatial resolution of \(\sim\)200 au. We identified a large molecular gas ring, \(\sim\)7,000 au in diameter, showing signs of expansion. The characteristics of this ring, specifically its size and location with respect to the protostellar source, are consistent with the theory of magnetic wall expansion caused by magnetic flux leakage due to interchange instability. Our observational results could represent the first preliminary evidence of magnetic flux leakage during the early stages of star formation, providing crucial insights into the star formation process, particularly regarding the magnetic flux problem. The ring's morphology, especially its highly circular shape, is a point of ongoing consideration. Future theoretical studies focusing on the dynamics of interchange-instability-driven structures under various conditions are crucial. Additionally, follow-up magnetic field observations and case studies involving other protostellar sources are essential for gaining a deeper understanding. We would like to thank the anonymous referee for useful comments that improved the manuscript. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2021.1.00715.S. ALMA is a partnership of ESO (representing its member states), the NSF (USA), and NINS (Japan), together with the NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. This work was supported by NAOJ ALMA Scientific Research grant No. 2022-22B and Grants-in-Aid for Scientific Research (KAKENHI) of the Japan Society for the Promotion of Science (JSPS; grant No. JP21K13962). astropy (Astropy Collaboration et al., 2018), CASA (CASA Team et al., 2022)
2310.20436
SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark
We present SignAvatars, the first large-scale, multi-prompt 3D sign language (SL) motion dataset designed to bridge the communication gap for Deaf and hard-of-hearing individuals. While there has been an exponentially growing number of research regarding digital communication, the majority of existing communication technologies primarily cater to spoken or written languages, instead of SL, the essential communication method for Deaf and hard-of-hearing communities. Existing SL datasets, dictionaries, and sign language production (SLP) methods are typically limited to 2D as annotating 3D models and avatars for SL is usually an entirely manual and labor-intensive process conducted by SL experts, often resulting in unnatural avatars. In response to these challenges, we compile and curate the SignAvatars dataset, which comprises 70,000 videos from 153 signers, totaling 8.34 million frames, covering both isolated signs and continuous, co-articulated signs, with multiple prompts including HamNoSys, spoken language, and words. To yield 3D holistic annotations, including meshes and biomechanically-valid poses of body, hands, and face, as well as 2D and 3D keypoints, we introduce an automated annotation pipeline operating on our large corpus of SL videos. SignAvatars facilitates various tasks such as 3D sign language recognition (SLR) and the novel 3D SL production (SLP) from diverse inputs like text scripts, individual words, and HamNoSys notation. Hence, to evaluate the potential of SignAvatars, we further propose a unified benchmark of 3D SL holistic motion production. We believe that this work is a significant step forward towards bringing the digital world to the Deaf and hard-of-hearing communities as well as people interacting with them.
Zhengdi Yu, Shaoli Huang, Yongkang Cheng, Tolga Birdal
2023-10-31T13:15:49Z
http://arxiv.org/abs/2310.20436v3
# SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark ###### Abstract In this paper, we present SignAvatars, the first large-scale multi-prompt 3D sign language (SL) motion dataset designed to bridge the communication gap for hearing-impaired individuals. While there has been an exponentially growing body of research on digital communication, the majority of existing communication technologies primarily cater to spoken or written languages, instead of SL, the essential communication method for hearing-impaired communities. Existing SL datasets, dictionaries, and sign language production (SLP) methods are typically limited to 2D as annotating 3D models and avatars for SL is usually an entirely manual and labor-intensive process conducted by SL experts, often resulting in unnatural avatars. In response to these challenges, we compile and curate the SignAvatars dataset, which comprises 70,000 videos from 153 signers, totaling 8.34 million frames, covering both isolated signs and continuous, co-articulated signs, with multiple prompts including HamNoSys, spoken language, and words. To yield 3D holistic annotations, including meshes and biomechanically-valid poses of body, hands, and face, as well as 2D and 3D keypoints, we introduce an automated annotation pipeline operating on our large corpus of SL videos. SignAvatars facilitates various tasks such as 3D sign language recognition (SLR) and the novel 3D SL production (SLP) from diverse inputs like text scripts, individual words, and HamNoSys notation. Hence, to evaluate the potential of SignAvatars, we further propose a unified benchmark of 3D SL holistic motion production. We believe that this work is a significant step forward towards bringing the digital world to the hearing-impaired communities. Our project page is [https://signavatars.github.io/](https://signavatars.github.io/). ## 1 Introduction According to the World Health Organization, there are 466 million people suffering from hearing disability (Davis and Hoffman, 2019). Among them, over 70 million communicate via sign languages (SLs), resulting in more than 300 different SLs across different hearing-impaired communities (The World Federation of the Deaf). While the fields of (spoken) natural language processing (NLP) and language-assisted computer vision (CV) are well explored, this is not the case for the alternate and important communicative tool of SL, and accurate generative models of holistic 3D avatars as well as dictionaries are highly desired for efficient learning (Naert et al., 2020). We argue that the lack of large-scale, targeted SL datasets is an important reason for this gap, putting a barrier in front of downstream tasks such as digital simultaneous SL translators. On one hand, existing SL datasets and dictionaries (Duarte et al., 2021; Albanie et al., 2021; Camgoz et al., 2018; Hanke et al., 2020; Huang et al., 2018) are typically limited to 2D videos or 2D keypoint annotations, which are insufficient for learners (Lee et al., 2023) as different signs could appear to be the same in the 2D domain due to _depth ambiguity_. On the other hand, while parametric holistic models exist for human bodies (Pavlakos et al., 2019) or bodies & faces (Yi et al., 2023), there is no unified, large-scale, multi-prompt 3D holistic motion dataset with accurate hand mesh annotations, which are crucial for SL.
The reason for this is that the creation of 3D avatar annotations for SL is a labor-intensive, entirely manual process conducted by SL experts, and the results are often unnatural (Aliwy and Ahmed, 2021). To address this challenge, in this paper, we first introduce the SignAvatars dataset, synergizing various data sources from public datasets to continuous online videos with mixed-prompt annotations including HamNoSys, spoken language, and words. Overall, we compile \(70K\) videos from \(153\) signers amounting to \(8.34M\) frames. Unlike (Forte et al., 2023), our dataset is not limited to isolated signs, _i.e._ a single sign per video with HamNoSys annotations, but includes continuous and co-articulated signs. To augment our dataset with 3D full-body annotations, including 3D body, hand and face meshes as well as 2D & 3D keypoints, we design an automated and generic annotation pipeline, in which we perform a multi-objective optimization over the 3D poses and shapes of face, hands and body. Our optimizer considers the temporal information of the motion and respects biomechanical constraints in order to produce accurate hand poses, even in the presence of complex, interacting hand gestures. Apart from meshes and SMPL-X models, we also provide a _hand-only_ subset with MANO annotations. SignAvatars enables a multitude of tasks such as 3D sign language recognition (SLR) or the novel 3D sign language production (SLP) from text scripts, individual words, and HamNoSys notation. To address this challenge and accommodate diverse forms of semantic input, we propose a novel approach utilizing a semantic vector-quantized variational autoencoder (VQ-VAE) (Van Den Oord et al., 2017) that effectively maps these varied inputs to discrete code indices. This _parallel linguistic feature generator_ is fused with a discrete motion encoder within an auto-regressive model to generate sequences of code indices derived from these semantic representations, strengthening the text-motion correlation. Consequently, our method can efficiently generate sign motion from an extensive array of textual inputs, enhancing its versatility and adaptability to various forms of semantic information. We will demonstrate in Sec 5 that building such correlation between the low-level discrete representations leads to accurate, natural and sign-motion-consistent SL production compared to direct regression from a high-level CLIP feature. To quantitatively & qualitatively evaluate the potential of SignAvatars, we introduce a new benchmark and present the first results for 3D SL holistic mesh motion production from multiple prompts including HamNoSys, spoken language, and words. On this benchmark, we assess the performance of our Sign-VQVAE against the baselines we introduce, where we show a relative improvement of \(200\%\). Figure 1: Overview of SignAvatars, the first public large-scale multi-prompt 3D sign language holistic motion dataset. (**upper row**) We introduce a generic method to automatically annotate a large corpus of video data. (**lower row**) We propose a 3D SLP benchmark to produce plausible 3D holistic mesh motion and provide a neural architecture as well as baselines tailored for this novel task. Still, none of these models can truly match the desired accuracy, confirming the timeliness and the importance of SignAvatars. To summarize, our contributions are: * We introduce SignAvatars, the first large-scale multi-prompt 3D holistic motion SL dataset, containing diverse forms of semantic input.
* To provide accurate annotations for SignAvatars, in the form of expressive 3D avatar meshes, we introduce a multi-objective optimization capable of dealing with complex interacting-hand scenarios while respecting biomechanical hand constraints. We initialize this fitting procedure by a novel multi-stage, hierarchical process. * We provide a new 3D sign language production (SLP) benchmark for SignAvatars, considering multiple prompts and full-body meshes. * We further develop a VQVAE-based strong 3D SLP network significantly outperforming the baselines, which are also introduced as part of our work. We believe SignAvatars is a significant stepping stone towards bringing the 3D digital world and 3D SL applications to the hearing-impaired communities, by fostering future research in 3D SL understanding. ## 2 Related Work **3D holistic mesh reconstruction (for SL).** Recovering holistic 3D human body avatars from RGB videos and parsing them into parametric forms like SMPL-X (Pavlakos et al., 2019) or Adam (Joo et al., 2018) is a well-explored area (Yi et al., 2023; Pavlakos et al., 2019; Lin et al., 2023). For example, Arctic (Fan et al., 2023) introduces a full-body dataset annotated with SMPL-X for 3D object manipulation. Hasson et al. (2019) provide a dataset of hand-object constellations with MANO annotations. However, such expressive parametric models have rarely been applied to the SL domain. Kratimenos et al. (2021) use off-the-shelf methods to estimate a holistic 3D mesh on an existing dataset (Theodorakis et al., 2014) but cannot deal with the challenging occlusions and interactions, making them unsuitable for complex, real scenarios. The latest concurrent work (Forte et al., 2023) can reconstruct a 3D holistic mesh when the signs are isolated and pre-defined interpolation and linguistic rules, _e.g._ HamNoSys, are provided. As such, it cannot be generalized to multiple-sign videos or videos without **isolated-sign-specific** HamNoSys annotation. Overall, the literature lacks a robust method handling **continuous and co-articulated** SL videos with complex hand interactions. **SL datasets.** While there have been many well-organized continuous 2D SL motion datasets (Duarte et al., 2021; Albanie et al., 2021; Camgoz et al., 2018; Hanke et al., 2020; Huang et al., 2018), the only existing 3D SL motion dataset with 3D holistic mesh annotation is in (Forte et al., 2023). As mentioned, this rather small dataset only includes a single sign per video, only with HamNoSys prompts. In contrast, SignAvatars provides a **multi-prompt 3D** SL holistic motion dataset with **continuous and co-articulated** signs and fine-grained hand mesh annotations. **SL applications.** Arkushin et al. (2023) can generate 2D motion sequences from HamNoSys. Saunders et al. (2020) and Saunders et al. (2021) are able to generate 3D keypoint sequences relying on glosses. The avatar approaches are often hand-crafted and produce robotic and unnatural movements. Apart from them, there are also early avatar approaches (Ebling and Glauert, 2016; Efthimiou et al., 2010; Bangham et al., 2000; Zwitserlood et al., 2004; Gibet et al., 2016) with a pre-defined protocol and character. To the best of our knowledge, we present the first large-scale 3D holistic SL motion dataset, SignAvatars. Built upon the dataset, we also introduce the novel task and benchmark of 3D sign language production from different prompts (language, word, HamNoSys). ## 3 SignAvatars Dataset **Overview**.
SignAvatars is a holistic motion dataset composed of \(96K\) video clips having \(13.7M\) frames in total, containing body, hand and face motions as summarized in Tab. 2. We compile SignAvatars by synergizing various data sources from public datasets to online videos and form seven subsets, whose distribution is reported in Fig. 2. Since the individual subsets do not naturally contain expressive 3D whole-body motion labels and 2D keypoints, we introduce a unified automatic annotation framework providing rich 3D holistic parametric SMPL-X annotations along with MANO subsets for hands. Overall, we provide \(117\) hours of \(70K\) video clips with \(8.34M\) frames of motion data with accurate, expressive holistic 3D meshes as motion annotations. ### Dataset Characteristics **Expressive motion representation**. To fill the gaps of previous 2D-only SL data, our expressive 3D holistic body annotation covers face, hands, and body, achieved by adopting SMPL-X (Pavlakos et al., 2019). It uses standard vertex-based linear blend skinning with learned corrective blend shapes and has N = 10475 vertices and K = 67 joints. For a time interval \([1:T]\), \(V_{1:T}=(v_{1},\ldots,v_{T})\), \(J_{1:T}=(j_{1},\ldots,j_{T})\), and \(\theta_{1:T}=(\theta_{1},\ldots,\theta_{T})\) represent mesh vertices, 3D joints, and poses in the 6D rotation representation (Zhou et al., 2019). Here the pose \(\theta_{t}\) includes the body pose \(\theta_{t}^{b}\in R^{23\times 6}\) with global orientation and the hand pose \(\theta_{t}^{h}\in R^{30\times 6}\). Moreover, \(\theta_{t}^{f}\in R^{6}\) and \(\phi\) represent the jaw pose and facial expression, respectively. For each of the sequences, we use an optimized consistent shape parameter \(\tilde{\beta}\) as there is no signer change in each clip. Overall, a motion state \(M_{t}\) is represented as: \(M_{t}=(\theta_{t}^{b},\theta_{t}^{h},\theta_{t}^{f},\phi,\tilde{\beta})\). Moreover, as shown in Tab. 1, our dataset also provides a hand motion subset by switching the parametric representation from SMPL-X to MANO (Romero et al., 2022): \(M_{t}^{h}=(\theta_{t}^{h},\tilde{\beta})\), where \(h\) denotes the handedness and \(\tilde{\beta}\) is again an optimized consistent shape parameter. **Sign language notation**. Similar to spoken languages, sign languages have special structures with a set of linguistic rules (Blaisel, 1997) (_e.g._ grammar, lexicons). Unlike spoken languages, they have no standard written forms. Moreover, there are over 300 different sign languages across the world, as well as hearing-impaired people who do not know any SL. Hence, having only a single type of annotation is insufficient in practice. To enable more generic applications for different users, we provide more modalities in the SignAvatars dataset. Our SL annotations can be categorized into four common types: HamNoSys, spoken language, word, and gloss, which can be used for a variety of downstream applications such as SLP and SLR. **Data sources**. As shown in Tab. 2, SignAvatars leverages our unified automatic annotation framework to collect SL motion sequences in diverse modalities from various sources.
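Before turning to the individual data sources, the per-frame motion state \(M_t\) defined above can be sketched as a simple container. The shapes follow the text; the expression and shape dimensions of 10 are common SMPL-X defaults and are assumptions here, not values stated in the paper.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MotionState:
    """One frame M_t = (theta_b, theta_h, theta_f, phi, beta~) of the
    SMPL-X-based motion representation described above."""
    body_pose: np.ndarray    # (23, 6): 6D rotations, incl. global orientation
    hand_pose: np.ndarray    # (30, 6): both hands
    jaw_pose: np.ndarray     # (6,)
    expression: np.ndarray   # facial expression phi; dim 10 is an assumed default
    betas: np.ndarray        # per-sequence consistent shape; dim 10 assumed

frame = MotionState(np.zeros((23, 6)), np.zeros((30, 6)),
                    np.zeros(6), np.zeros(10), np.zeros(10))
```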
Specifically, for co-articulated SL datasets like How2Sign (Duarte et al., 2021) and How2 (Sanabria et al., 2018) with American Sign Language (ASL) transcriptions, we collect _sentence-level_ clips from the _Green Screen studio_ subset with multi-view frames, resulting in \(35K\) clips for the **ASL** subset. For the **GSL** subset, we mostly gathered data from the publicly available PHOENIX14T dataset (Camgoz et al., 2018), following the official split, to have \(8.25K\) video clips. For isolated-sign SL videos with HamNoSys annotations, we collect \(5.8K\) clips from the Polish SL corpus (Linde-Usiekniewicz et al., 2014) for **PJM**, and German Sign Language (DGS), Greek Sign Language (GRSL) and French Sign Language (LSF) from the DGS Corpus (Prillwitz et al., 2008) and Dicta-Sign (Matthes et al., 2012). We gathered \(21K\) clips from word-level sources such as WLASL (Li et al., 2020) to curate the isolated-sign videos without HamNoSys annotation. We provide further details in our appendix. \begin{table} \begin{tabular}{l|c|c|c|c} \hline **Data** & **Video** & **Frame** & **Type** & **Signer** \\ \hline Word & 21K & 1.39M & W & 119 \\ PJM & 2.6K & 0.21M & H & 2 \\ DGS & 1.9K & 0.12M & H & 8 \\ GRSL & 0.8K & 0.06M & H & 2 \\ LSF & 0.4K & 0.03M & H & 2 \\ ASL & 34K & 5.7M & S & 11 \\ GSL & 8.3K & 0.83M & S, G & 9 \\ \hline Ours & 70K & 8.34M & S, H, W, G & 153 \\ \hline \end{tabular} \end{table} Table 2: Statistics of sub-datasets. W, H, S, G represent **w**ord, **H**amNoSys, **s**poken language and **g**loss, respectively. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline **Data** & **Video** & **Frame** & **Hours** & **Type** & **Annotations** & **Signer** \\ \hline RWTH-Phoenix-2014T (Camgoz et al., 2018) & 8.25K & 0.94M & 11 & C & - & 9 \\ DGS Corpus (Hanke et al., 2020) & - & - & 50 & C & 2D keypoints & 327 \\ BSL Corpus (Schembri et al., 2013) & - & - & 125 & C & - & 269 \\ MS-ASL (Joze & Koller, 2019) & 25K & - & 25 & I & - & 222 \\ WLASL (Li et al., 2020) & 21K & 1.39M & 14 & I & 2D keypoints & 119 \\ How2Sign (Duarte et al., 2021) & 34K & 5.7M & 29 & C & 2D keypoints, depth* & 11 \\ CSL-Daily (Huang et al., 2016) & 21K & - & 23 & C & 2D keypoints, depth & 10 \\ SIGNUM (von Agris et al., 2008) & 33K & - & 55 & C & - & 25 \\ AUTSL (Sincan & Keles, 2020) & 38K & - & - & I & depth & 43 \\ Fieu et al. (2021) & 0.00K & 4K & - & I & body head instruction* & - \\ \hline SignAvatars (Ours) & 70K & 8.34M & 117 & Both & SMPL-X, MANO, 2D&3D keypoints & 153 \\ \hline \end{tabular} \end{table} Table 1: Modalities of **publicly available** sign language datasets. C, I represent co-articulated (continuous) and isolated signs, respectively. * means the annotation has not been released yet. To the best of our knowledge, our dataset is the first publicly available 3D SL holistic continuous motion dataset with whole-body and hand mesh annotations and the most parallel modalities. ### Automatic Holistic Annotation To efficiently auto-label the SL videos with motion data given only RGB online videos, we design an automatic 3D SL annotation pipeline that is not limited to isolated signs with HamNoSys annotations as in (Forte et al., 2023).
To ensure motion stability and 3D shape accuracy, while maintaining efficiency during holistic 3D mesh recovery from SL videos, we propose an iterative fitting algorithm minimizing an objective heavily regularized both holistically and by _biomechanical hand constraints_(Spurr et al., 2020): \[E(\theta,\beta,\phi)=\lambda_{J}L_{J}+\lambda_{\theta}L_{\theta}+\lambda_{\alpha}L_{\alpha}+\lambda_{\beta}L_{\beta}+\lambda_{\mathrm{smooth}}L_{\mathrm{smooth}}+\lambda_{\mathrm{angle}}L_{\mathrm{angle}}+L_{\mathrm{bio}} \tag{1}\] where \(\theta\) is the full set of optimizable pose parameters, and \(\phi\) is the facial expression. \(L_{J}\) represents the 2D joint re-projection loss, which penalizes the difference between joints extracted from the SMPL-X model and projected into the image, and joints predicted with ViTPose (Xu et al., 2022) and MediaPipe (Kartynnik et al., 2019). \(L_{\theta}\) is the pose prior term following SMPLify-X (Pavlakos et al., 2019). Moreover, \(L_{\alpha}\) is a prior penalizing extreme bending only for elbows and knees, and \(L_{\beta}\) is the shape prior term. In addition, \(L_{\mathrm{smooth}}\), \(L_{\mathrm{angle}}\) and \(L_{\mathrm{bio}}\) are the smoothness-regularization loss, angle loss and biomechanical constraints, respectively. Finally, each \(\lambda\) denotes the influence weight of the corresponding loss term. Please refer to the appendix for more details. In what follows, we describe our regularizers in detail. **Holistic regularization.** To reduce the jitter caused by noisy detected 2D keypoints, we first introduce a smoothness term for body and hand motion poses, defined as: \[L_{\mathrm{smooth}}=\sum_{t}(||\hat{\theta}^{b}_{1:T}||_{2}+||\hat{\theta}^{h}_{1:T}||_{2}+||\theta^{h}_{2:T}-\theta^{h}_{1:T-1}||_{2}+||\theta^{b}_{2:T}-\theta^{b}_{1:T-1}||_{2}) \tag{2}\] where \(\hat{\theta}^{b}_{1:T}\in R^{N\times j_{b}\times 3}\) is the selected subset of pose parameters from \(\theta^{b}_{1:T}\in R^{N\times J\times 3}\), and \(N\) is the number of frames in the video. \(\hat{\theta}^{h}_{1:T}\in R^{N\times j_{h}}\) is the selected subset of hand parameters from \(\theta^{h}_{1:T}\). Here \(j_{b}\) and \(j_{h}\) are the numbers of selected body joints and hand parameters, respectively. Moreover, this helps prevent implausible poses along the bone direction, such as extreme twist angles. We then add an angle-limit prior term to penalize holistic poses lying outside the plausible range: \[L_{\mathrm{angle}}=\sum_{t}(\mathcal{I}(||\theta^{h}_{1:T}||_{2};\theta^{h}_{\mathrm{min}},\theta^{h}_{\mathrm{max}})+\mathcal{I}(||\theta^{b}_{1:T}||_{2};\theta^{b}_{\mathrm{min}},\theta^{b}_{\mathrm{max}})) \tag{3}\] where \(\mathcal{I}\) is the interval loss penalizing outliers, \(\theta^{h,b}_{\mathrm{min}},\theta^{h,b}_{\mathrm{max}}\) are the pre-defined intervals, and \(\theta^{h},\theta^{b}\) are the selected subsets of holistic poses. Finally, the signer in each video clip does not change, so we can use an **optimized consistent shape parameter** \(\beta\) to represent the holistic body shape. Specifically, our fitting procedure is split into **five** stages, where we optimize the shape during the first **three** stages to derive the mean shape and freeze it in the following stages. **Biomechanical hand constraints**. Hand pose estimation from monocular RGB images is challenging due to fast movements, interaction, frequent occlusion and confusion.
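A minimal sketch of the holistic regularizers in Eqs. (2)–(3): a frame-difference smoothness term and the interval prior \(\mathcal{I}\) (zero inside the plausible range, penalizing outliers). The quadratic shape of the outlier penalty and the axis-angle parameter layout are our assumptions, since the text does not specify them.

```python
import torch

def smoothness_loss(theta_b, theta_h):
    """Frame-difference smoothness in the spirit of Eq. (2).
    theta_b: (N, j_b, 3) selected body poses; theta_h: (N, j_h) hand params."""
    d_b = theta_b[1:] - theta_b[:-1]   # finite differences over time
    d_h = theta_h[1:] - theta_h[:-1]
    return d_b.norm(dim=-1).mean() + d_h.norm(dim=-1).mean()

def interval_loss(x, lo, hi):
    """Interval prior I(x; lo, hi) as in Eq. (3): zero inside [lo, hi],
    quadratic penalty outside (penalty shape is an assumption)."""
    return (torch.clamp(lo - x, min=0) ** 2
            + torch.clamp(x - hi, min=0) ** 2).mean()
```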
To further improve hand motion quality and eliminate implausible hand poses, we apply biomechanical constraints to the hands, which contain three terms: (i) \(L_{\mathrm{bl}}\) for bone length, (ii) \(L_{\mathrm{palm}}\) for palmar region optimization, and (iii) \(L_{\mathrm{ja}}\) for joint angle priors. Specifically, \(L_{\mathrm{bio}}\) is defined as the weighted sum of: \[\begin{split} L_{\mathrm{bl}}=\sum_{i}\mathcal{I}(||b^{i}_{1:T}||_{2};b^{i}_{min},b^{i}_{max}),\quad L_{\mathrm{ja}}=\sum_{i}D(\alpha^{i}_{1:T},H^{i})\\ L_{\mathrm{palm}}=\sum_{i}(\mathcal{I}(||c^{i}_{1:T}||_{2};c^{i}_{min},c^{i}_{max})+\mathcal{I}(||d^{i}_{1:T}||_{2};d^{i}_{\mathrm{min}},d^{i}_{\mathrm{max}})),\end{split} \tag{4}\] where \(\mathcal{I}\) is the interval loss penalizing outliers, \(b^{i}\) is the length of the \(i\)-th finger bone, and the optimization constrains the whole sequence \([1:T]\). After that, we further constrain the curvature and angular distance of the 4 root bones of the palm structure by penalizing outliers of the curvature range \(c^{i}_{max},c^{i}_{min}\) and the angular-distance range \(d^{i}_{\max},d^{i}_{\min}\). Inspired by Spurr et al. (2020), we also apply constraints to the sequence of joint angles \(\alpha^{i}_{1:T}=(\alpha^{f}_{1:T},\alpha^{a}_{1:T})\) by approximating the convex hull on the \((\alpha^{f},\alpha^{a})\) plane with a point set \(H^{i}\) and minimizing their distance \(D\), where \(\alpha^{f}\) and \(\alpha^{a}\) are the flexion and abduction angles. The biomechanical loss is then computed as their weighted sum: \(L_{\rm bio}=\lambda_{\rm bl}L_{\rm bl}+\lambda_{\rm palm}L_{\rm palm}+\lambda_{\rm ja}L_{\rm ja}\). We refer the reader to our appendix for more details. **Hierarchical initialization**. Given an RGB image sequence, we initialize the holistic SMPL-X parameters from OSX (Lin et al., 2023). However, due to frequent occlusions and hand interactions, OSX alone is not always sufficient for a good initialization. Therefore, we further fuse OSX with ACR (Yu et al., 2023) and PARE (Kocabas et al., 2021) to improve stability under occlusion and truncation. For 2D holistic keypoint initialization, we first train a whole-body 2D pose estimation model on COCO-WholeBody (Jin et al., 2020) based on ViTPose (Xu et al., 2022), and subsequently incorporate MediaPipe (Kartynnik et al., 2019) by fusing the predictions through a confidence-guided filter. ## 4 Applications: SignVAE Our SignAvatars dataset enables the first applications to generate high-quality and natural 3D sign language holistic motion along with 3D meshes from both isolated and continuous SL prompts. To this end, motivated by the fact that the text prompts are highly correlated and aligned with the motion sequence, our method consists of a two-stage process designed to enhance the understanding of varied inputs by focusing on both semantic and motion aspects. In the first stage, we develop two codebooks - a shared semantic codebook and a motion codebook - by employing two Vector Quantized Variational Auto-Encoders (VQ-VAEs). This allows us to map diverse inputs to their corresponding semantic code indices and link motion elements to motion code indices. In the second stage, we utilize an auto-regressive model to generate motion code indices based on the previously determined semantic code indices. This integrated approach ensures a coherent and logical understanding of the input data, effectively capturing both the semantic and motion-related information. **SL motion generation**.
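Building on the interval prior above, a minimal sketch of the bone-length term \(L_{\rm bl}\) of Eq. (4): bone lengths are derived from hand joints via a parent table and kept within per-bone plausible ranges over the whole clip. The joint layout and the quadratic outlier penalty are assumptions for illustration.

```python
import torch

def bone_length_loss(joints, parents, b_min, b_max):
    """Sketch of L_bl in Eq. (4): keep each finger-bone length within a
    biomechanically plausible range over the whole clip.
    joints: (T, J, 3) hand joints; parents: (J,) LongTensor of parent ids
    (joint 0 is the wrist root and is skipped); b_min/b_max: (J-1,)."""
    child = torch.arange(1, joints.shape[1])
    bones = (joints[:, child] - joints[:, parents[child]]).norm(dim=-1)  # (T, J-1)
    low = torch.clamp(b_min - bones, min=0)
    high = torch.clamp(bones - b_max, min=0)
    return (low ** 2 + high ** 2).mean()
```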
To produce stable and natural holistic poses, instead of directly mapping prompts to motion, we leverage the generative model VQ-VAE as our SL motion generator. As illustrated in Fig. 4, our SL motion VQ-VAE consists of an autoencoder structure and a learnable codebook \(Z_{m}\), which contains \(I\) codes \(Z_{m}=\{z_{i}\}_{i=1}^{I}\) with \(z_{i}\in R^{d_{z}}\). We first encode the given 3D SL motion sequence \(M_{1:T}\), with \(M_{t}=(\theta^{b}_{t},\theta^{h}_{t},\theta^{f}_{t},\phi)\) and \(T\) the motion length, into latent features \(F^{m}_{1:T/w}=(f^{m}_{1},\ldots,f^{m}_{T/w})\) with \(f^{m}_{j}\in R^{d_{z}}\), where \(w=4\) is used as the temporal downsampling rate. Subsequently, we quantize the latent feature embedding by searching for the nearest-neighbour code in the codebook \(Z_{m}\). For the \(j\)-th feature, the quantization code is found by: \(\hat{f}^{m}_{j}=\operatorname*{arg\,min}_{z_{i}\in Z_{m}}||f^{m}_{j}-z_{i}||_{2}\). Finally, the quantized latent features are fed into decoders for reconstruction. For training the SL motion generator, we apply the standard optimization scheme with \(L_{m-vq}\): \[L_{m-vq}=L_{\rm recon}(M_{1:T},\hat{M}_{1:T})+||\text{sg}[F^{m}_{1:T}]-\hat{F}^{m}_{1:T}||_{2}+\beta||F^{m}_{1:T}-\text{sg}[\hat{F}^{m}_{1:T}]||_{2} \tag{5}\] where \(L_{\rm recon}\) is the MSE loss and \(\beta\) is a hyper-parameter. \(\text{sg}[\cdot]\) is the stop-gradient (detach) operation. We provide more details regarding the network architecture and training in our appendix. Figure 3: Overview of our automatic annotation pipeline. Given an RGB image sequence as input for hierarchical initialization, it is followed by optimization with temporal smoothing and biomechanical constraints. Finally, it outputs the final results as a motion sequence of SMPL-X parameters. **Prompt feature extraction for parallel linguistic feature generation.** For the linguistic condition \(c\) derived from the input prompt, typical motion generation methods leverage large language models to produce a linguistic prior for efficient learning. In our task, for spoken language and word-level annotation, we leverage CLIP (Radford et al., 2021) as our prompt encoder to obtain the text embedding \(E^{l}\). However, this does not extend to all the other SL annotations we desire. As a remedy, to enable applications with different prompts such as HamNoSys, instead of relying on the existing pre-trained CLIP, we define a new prompt encoder for embedding. After quantizing the prompt (_e.g._ HamNoSys glyphs) into tokens of length \(s\), we use an embedding layer to produce the linguistic feature \(E^{l}_{1:s}=(\hat{e}^{l}_{1},...,\hat{e}^{l}_{s})\) with the same dimension \(d_{l}\) as the text embeddings of CLIP (Radford et al., 2021). For simplicity, we use "text" to represent all different input prompts. Subsequently, motivated by the fact that the text prompts are highly correlated and aligned with the motion sequence, we propose a _parallel linguistic feature generator_ (PLFG) coupled with the SL motion generator. In particular, we leverage a similar quantization process, using the codebook \(Z_{l}\) and the same training scheme as in the SL motion generator, to yield linguistic features: \[L_{l-vq}=L_{\rm recon}(E^{l}_{1:s},\hat{E}^{l}_{1:s})+||\text{sg}[F^{l}_{1:s}]-\hat{F}^{l}_{1:s}||_{2}+\beta||F^{l}_{1:s}-\text{sg}[\hat{F}^{l}_{1:s}]||_{2} \tag{6}\] where \(F^{l}_{1:s}\) is the latent feature after encoding the initial linguistic feature.
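A minimal sketch of the nearest-neighbour quantization and the codebook/commitment terms of Eq. (5), with \(\text{sg}[\cdot]\) realized as `.detach()`. The straight-through trick noted in the comment is the standard VQ-VAE recipe (Van Den Oord et al., 2017) and is assumed rather than spelled out in the text.

```python
import torch

def quantize(f, codebook):
    """Nearest-neighbour lookup of the motion VQ-VAE: for each row f_j of
    f (shape (T', d_z)), pick arg min_i ||f_j - z_i||_2 over Z_m (I, d_z)."""
    idx = torch.cdist(f, codebook).argmin(dim=1)  # (T',) code indices
    f_q = codebook[idx]                           # quantized features
    return idx, f_q

def vq_terms(f, f_q, beta=0.25):
    """Codebook and commitment terms of Eq. (5); sg[.] is .detach()."""
    return ((f.detach() - f_q).pow(2).mean()
            + beta * (f - f_q.detach()).pow(2).mean())

# Straight-through estimator so reconstruction gradients reach the encoder:
# f_st = f + (f_q - f).detach()
```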
\(\hat{F}^{l}_{1:s}\) is the quantized linguistic feature obtained by applying \(\hat{f}^{l}_{j}=\operatorname*{arg\,min}_{z_{i}\in Z_{l}}||f^{l}_{j}-z_{i}||_{2}\) to \(F^{l}_{1:s}\). **Sign-motion cross modelling and production.** After training the VQVAE-based SL motion generator, we can map any motion sequence \(M_{1:T}\) to a sequence of indices \(X=[x_{1},...,x_{T/w},x_{\rm EOS}]\) through the motion encoder and quantization, where \(x_{\rm EOS}\) is a learnable end token representing the stop signal. After training both the SL motion generator and the linguistic feature generator, our network is jointly optimized in a parallel manner. Specifically, we fuse the linguistic feature embedding \(E^{l}\) and the codebook index vectors of \(Z_{l}\) to formulate the final condition for our autoregressive code index generator. The objective for training the code index generator can be seen as an autoregressive next-index prediction task, represented by a cross-entropy loss between the likelihood of the full predicted code index sequence and the real one: \(L_{\rm SLP}=\mathbb{E}_{X\sim p(X)}[-\log p(X|c)]\). Lastly, with the quantized motion representation, we generate the codebook vectors in a temporally autoregressive manner and predict the distribution of the next codebook indices given an input linguistic prompt as linguistic condition \(c\). After mapping the codebook indices \(\tilde{X}\) to the quantized motion representation \(\hat{F}^{m}_{1:(T/w)}\), we are able to decode and produce the final 3D holistic motion with mesh representation \(M_{1:T}\). Figure 4: Overview of our 3D SLP network. Our method consists of a two-stage process. We first create semantic and motion codebooks using two VQ-VAEs, mapping inputs to their respective code indices. Then, we employ an auto-regressive model to generate motion code indices based on semantic code indices, ensuring a coherent understanding of the data. ## 5 Experimental Evaluation With the SignAvatars dataset, we have enabled more 3D applications for sign language communities, especially 3D SL motion production. We now showcase the effectiveness and contribution of SignAvatars on the benchmark and application introduced in Sec 4. Note that these are also the first benchmark results for 3D holistic SL motion production yielding mesh representations. **Evaluation metrics**. To fully assess the quality of our motion generation, we evaluate the holistic motion as well as the arm motion1. Based on an evaluation model trained following prior art in motion generation (Tevet et al., 2022; Zhang et al., 2023b), we use the scores and metrics of FID, Diversity, MultiModality (MM), MM-Dist and MR-Precision, whose details are provided in App. B.2. Unfortunately, there is no de-facto standard for evaluating 3D SLP in the literature. While Lee et al. (2023) is capable of back-translating 3D SL motion by treating it as a classification task, it is tailored only for word-level back-translation. While BLEU and ROUGE are commonly used in back-translation evaluation (Saunders et al., 2020, 2021), they are not generic for other types of annotations such as HamNoSys or glosses. Since the generated motion might differ in length from the real motion, absolute metrics like MPJPE would also be unsuitable.
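The autoregressive objective \(L_{\rm SLP}\) above reduces to a per-step cross-entropy over code indices plus an EOS class; a minimal sketch, with the logits assumed to come from any conditional autoregressive model:

```python
import torch
import torch.nn.functional as F

def slp_loss(logits, target):
    """Autoregressive next-index objective L_SLP as per-step cross-entropy.
    logits: (L, I + 1) predicted distributions over the I motion codes plus
    an EOS class; target: (L,) ground-truth indices x_1..x_{T/w}, x_EOS."""
    return F.cross_entropy(logits, target)

# At inference, indices are sampled step by step conditioned on the fused
# linguistic condition c, stopping once the EOS index is produced.
```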
Inspired by (Arkushin et al., 2023; Huang et al., 2021), we propose a new **MR-Precision** for motion retrieval as well as DTW-MJE (Dynamic Time Warping - Mean Joint Error) (Kruskal, 1983) with the standard SMPL-X keypoint set excluding the lower body, for evaluating the performance of our method as well as the baselines. Footnote 1: The lower body is not evaluated in our experiments as it is not related to the SL motion. **Subsets & training settings.** Specifically, we report results on three representative subsets: (i) the complete set of \(ASL\) for spoken language (corresponding to \(language\) in Table 3), (ii) the \(word\) subset with a 300-word vocabulary, (iii) the combined subset of DGS, LSF, PJM, and GRSL for HamNoSys. For training, we follow the official splits for (i) and (ii). For (iii), we leverage a four-fold strategy where we train on three of them and test on the remaining one, repeated four times to obtain the final results. **Qualitative analysis**. Fig. 5 shows examples of our 3D holistic body motion generation results. As observed, our method can generate plausible and accurate holistic 3D motion from different prompts while exhibiting some diversity that enriches the production results. We show further comparisons against the state of the art in the Appendix. **Quantitative analysis.** We present detailed quantitative results in Tab. 3. It can be seen that 3D SLP with word-level prompts achieves the best performance, approaching the quality of real motions. Learning from spoken language is a naturally harder task, and we invite the community to develop stronger methods to produce 3D SLP from spoken languages. To further evaluate the sign accuracy and the effect of body movement, we report separate results for the arms alone ("Gesture" in Table 3), with slight improvements in FID and MR-Precision. However, this also degrades the text-motion consistency (R-Precision and MM-dist) due to the absence of body-relative hand positions. Figure 5: Qualitative results of 3D holistic SLP from different prompts (left: spoken language, top right: HamNoSys, bottom right: word). Within each sample, the first two rows are the input prompts and the generated results. The last row is the corresponding video clip from our dataset. Since there is no work in the literature that can generate 3D holistic SL motion with mesh representation from any of the linguistic sources (_e.g._ spoken language, HamNoSys, gloss, ...), we introduce a new baseline, where we modify the latest HamNoSys-based SLP work, Ham2Pose (Arkushin et al., 2023) (corresponding to _Ham2Pose-3d_ in Tab. 4), to take our linguistic feature as input and to output SMPL-X standard keypoints. We train this network on the same split as our SignVAE and use DTW-MJE for evaluation. We also regress the keypoints from our holistic representation \(M_{1:T}\) on the third subset (iii). This shows that leveraging our SignAvatars dataset can easily enable more 3D approaches and significantly improve existing SLP applications through simple adaptation, compared to the original Ham2Pose. The results are reported on the HamNoSys _holistic_ set for comparison, as illustrated in Tab. 4. While our method drastically improves over the baseline, the result is far from ideal, motivating the need for better models for this new task. **Ablation study**. To study the contribution of different components of our method, we have modified MDM (Tevet et al., 2022) as our backbone to take our linguistic feature as input. This corresponds to _SignDiffuse_ in Tab. 5.
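As a reference for the DTW-MJE metric used throughout these comparisons, a minimal sketch: align two keypoint sequences by dynamic time warping with the per-frame mean joint error as local cost. The length normalization in the last line is our assumption, as the text does not specify one.

```python
import numpy as np

def dtw_mje(P, Q):
    """DTW-MJE sketch: align keypoint sequences P (T1, J, 3) and Q (T2, J, 3)
    with dynamic time warping, using the mean joint error between frames
    as the local cost, and return the cost along the optimal warping path."""
    T1, T2 = len(P), len(Q)
    cost = np.linalg.norm(P[:, None] - Q[None], axis=-1).mean(-1)  # (T1, T2)
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j],
                                               D[i, j - 1],
                                               D[i - 1, j - 1])
    return D[T1, T2] / max(T1, T2)  # length normalization is our assumption
```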
For the study of our unique text-sign cross-modeling module, our baseline, _SignVAE (Base)_, replaces the PLFG with a canonical pre-trained CLIP feature as input to the encoder. As shown in Tab. 5, our joint scheme utilizing the PLFG can significantly improve the prompt-motion consistency, resulting in improved **R-Precision** and **MM-dist**. Moreover, our VQ-VAE backbone, which quantizes the motion representation into a motion codebook, enables interaction with the linguistic feature codebook, leading to significant improvements in prompt-motion correspondence; it outperforms the other baselines built with our linguistic feature generator (SignDiffuse, Ham2Pose-3d) and generates more text-motion-consistent results. ## 6 Conclusion We introduced **SignAvatars**, the first large-scale 3D holistic SL motion dataset with expressive 3D human and hand mesh annotations, provided by our automatic annotation pipeline. SignAvatars enables a variety of application potentials for hearing-impaired communities. Built upon our dataset, we propose the first 3D sign language production approach to generate natural holistic mesh motion sequences from SL prompts. We also introduce the first benchmark results for this new task, 3D holistic SL motion production from diverse SL prompts. Our evaluations on this benchmark clearly show the advantage of our new VQVAE-based model over the baselines we develop. **Limitations and Future Work:** Even with the first benchmark we propose, there is still a lack of in-depth investigation of other 3D techniques for 3D SL motion generation. In particular, due to the lack of a sophisticated, generic 3D back-translation method, the evaluation may not fully showcase the superiority of our dataset and the proposed method. We leave this for a future study. Moreover, combining 3D SLT and SLP to formulate a multi-modal generic SL framework is one direction for future work. Developing a large sign language model with more properties and applications in AR/VR will significantly benefit hearing-impaired communities around the world. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{DTW-MJE Rank (\(\uparrow\))} \\ \cline{2-4} & top 1 & top 3 & top 5 \\ \hline Ham2Pose* & 0.092\({}^{\pm 0.01}\) & 0.197\({}^{\pm 0.20}\) & 0.354\({}^{\pm 0.02}\) \\ Ham2Pose-3d & 0.253\({}^{\pm 0.006}\) & 0.369\({}^{\pm 0.02}\) & 0.511\({}^{\pm 0.035}\) \\ SignVAE (Ours) & 0.516\({}^{\pm 0.039}\) & 0.694\({}^{\pm 0.041}\) & 0.786\({}^{\pm 0.035}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison with the state-of-the-art SLP method from HamNoSys. * represents using only 2D information.
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{2}{c|}{**Data Type**} & \multicolumn{3}{c|}{**R-Precision (\(\uparrow\))**} & \multicolumn{3}{c|}{**FID (\(\downarrow\))**} & \multicolumn{3}{c|}{**Diversity (\(\downarrow\))**} & \multicolumn{3}{c|}{**MM (\(\downarrow\))**} & \multicolumn{3}{c|}{**MM-dist (\(\downarrow\))**} & \multicolumn{3}{c}{**MR-Precision (\(\uparrow\))**} \\ \hline & & top 1 & top 3 & top 5 & & & & & & \\ \hline \multirow{3}{*}{**Real motion**} & Language & 0.098\({}^{\pm 0.01}\) & 0.021\({}^{\pm 0.00}\) & 0.706\({}^{\pm 0.001}\) & 0.017\({}^{\pm 0.002}\) & 0.365\({}^{\pm 0.003}\) & - & 2.663\({}^{\pm 0.003}\) & - & - & - \\ & HamNoSys & 0.485\({}^{\pm 0.003}\) & 0.682\({}^{\pm 0.003}\) & 0.796\({}^{\pm 0.003}\) & 0.007\({}^{\pm 0.003}\) & 8.574\({}^{\pm 0.003}\) & - & 2.113\({}^{\pm 0.003}\) & - & - & - \\ & Word300 & 0.831\({}^{\pm 0.004}\) & 0.683\({}^{\pm 0.003}\) & 0.000\({}^{\pm 0.003}\) & 8.664\({}^{\pm 0.003}\) & - & 1.855\({}^{\pm 0.003}\) & - & - & - \\ \hline \multirow{3}{*}{**Holistic**} & Language & 0.735\({}^{\pm 0.003}\) & 0.535\({}^{\pm 0.004}\) & 0.661\({}^{\pm 0.003}\) & 1.220\({}^{\pm 0.003}\) & 11.85\({}^{\pm 0.003}\) & 1.215\({}^{\pm 0.003}\) & - & 3.156\({}^{\pm 0.003}\) & 0.625\({}^{\pm 0.003}\) & 0.719\({}^{\pm 0.003}\) \\ & HamNoSys & 0.492\({}^{\pm 0.003}\) & 0.687\({}^{\pm 0.003}\) & 0.765\({}^{\pm 0.003}\) & 0.884\({}^{\pm 0.003}\) & 0.941\({}^{\pm 0.003}\) & 2.651\({}^{\pm 0.003}\) & 0.552\({}^{\pm 0.003}\) & 0.745\({}^{\pm 0.003}\) & 0.731\({}^{\pm 0.003}\) \\ \hline \multirow{3}{*}{**Gesture**} & Language & 0.042\({}^{\pm 0.003}\) & 0.532\({}^{\pm 0.003}\) & 0.625\({}^{\pm 0.003}\) & 0.726\({}^{\pm 0.003}\) & 5.806\({}^{\pm 0.003}\) & - & 2.601\({}^{\pm 0.003}\) & 0.633\({}^{\pm 0.003}\) & 0.379\({}^{\pm 0.003}\) & 0.857\({}^{\pm 0.003}\) \\ \cline{1-1} & HamNoSys & 0.314\({}^{\pm 0.004}\) & 0.694\({}^{\pm 0.003}\) & 0.759\({}^{\pm 0.003}\) & 10.008\({}^{\pm 0.003}\) & - & 10.018\({}^{\pm 0.003}\) & - & - & - \\ \cline{1-1} & HamNoSys & 0.451\({}^{\pm 0.004}\) & 0.694\({}^{\pm 0.004}\) & 0.746\({}^{\pm 0.003}\) & 0.582\({}^{\pm 0.003}\) & 8.942\({}^{\pm 0.003}\) & 0.913\({}^{\pm 0.003}\) & 2.284\({}^{\pm 0.003}\) & 0.581\({}^{\pm 0.003}\) & 0.726\({}^{\pm 0.003}\) \\ \cline{1-1} & Word300 & 0.485\({}^{\pm 0.011}\) & 0.711\({}^{\pm 0.003}\) & 0.884\({}^{\pm 0.003}\) & 0.715\({}^{\pm 0.003}\) & 8.235\({}^{\pm 0.003}\) & 0.801\({}^{\pm 0.003}\) & 2.339\({}^{\pm 0.003}\) & 0.513\({}^{\pm 0.003}\) & 0.814\({}^{\pm 0.003}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Quantitative evaluation results for the 3D holistic SL motion generation. _Real motion_ denotes motions sampled from the original holistic motion annotations in the dataset. _Holistic_ represents the results for generated holistic motion. _Gesture_ stands for the evaluation conducted on the two arms. ## Ethics Statement Our work is driven by a dedication to the advancement of knowledge and the betterment of society. As an important step towards bringing our 3D digital world to the hearing-impaired communities, most importantly, our first publicly available large-scale 3D holistic SL motion dataset will significantly boost the development and novel research ideas of 3D SL applications, especially 3D SLP and 3D SLT, which can be used for SL understanding and simultaneous virtual SL signing.
Furthermore, our dataset is not only suited to 3D SL research; it also holds good potential for motion capture, enabling more accurate hand poses, as well as for many 3D applications in AR/VR, animation, and human-computer interaction (HCI). Our dataset will follow the license of the collection source and will not present any negative social impact. All of our experiments were either run on publicly available datasets or on data that is synthetically generated. While these datasets contained human annotations, they have been licensed appropriately. We included these licenses in our appendix. No human or animal subjects were involved at any stage of this work. Trained on this data, our models are designed to enhance the understanding and production of sign languages, without causing harm or perpetuating unjust biases beyond those already present in the datasets. While we do not foresee any issue with methodological bias, we have not analyzed the inherent biases of our algorithm and there might be implications in applications demanding utmost fairness. We duly acknowledge the contributions of researchers whose work laid the foundation for our own. Proper citations and credit are given to previous studies and authors. All authors declare that there are no conflicts of interest that could compromise the impartiality and objectivity of this research. All authors have reviewed and approved the final manuscript before submission. ## Reproducibility Statement We are committed to transparency in research and for this reason will make our implementation publicly available upon publication. To demonstrate our dedication, we have submitted all source code as part of the appendix.
2303.18091
Clamped and sideband-resolved silicon optomechanical crystals
Optomechanical crystals (OMCs) are a promising and versatile platform for transduction between mechanical and optical fields. However, the release from the substrate used in conventional suspended OMCs also prevents heat-carrying noise phonons from rapidly leaking away. Thermal anchoring may be improved by attaching the OMCs directly to the substrate. Previous work towards such clamped, i.e. non-suspended, OMCs suffers from weak interaction rates and insufficient lifetimes. Here, we present a new class of clamped OMCs realizing -- for the first time -- optomechanical interactions in the resolved-sideband regime required for quantum transduction. Our approach leverages high-wavevector mechanical modes outside the continuum. We observe a record zero-point optomechanical coupling rate of $g_0/(2\pi) \approx 0.50$ MHz along with a sevenfold improvement in the single-photon cooperativity of clamped OMCs. Our devices operate at frequencies commonly used in superconducting qubits. This opens a new avenue using clamped OMCs in both classical and quantum communications, sensing, and computation through scalable mechanical circuitry that couples strongly to light.
Johan Kolvik, Paul Burger, Joey Frey, Raphaël Van Laer
2023-03-31T14:29:33Z
http://arxiv.org/abs/2303.18091v1
# Clamped and sideband-resolved silicon optomechanical crystals ###### Abstract Optomechanical crystals (OMCs) are a promising and versatile platform for transduction between mechanical and optical fields. However, the release from the substrate used in conventional suspended OMCs also prevents heat-carrying noise phonons from rapidly leaking away. Thermal anchoring may be improved by attaching the OMCs directly to the substrate. Previous work towards such _clamped_, i.e. non-suspended, OMCs suffers from weak interaction rates and insufficient lifetimes. Here, we present a new class of clamped OMCs realizing - for the first time - optomechanical interactions in the resolved-sideband regime required for quantum transduction. Our approach leverages high-wavevector mechanical modes outside the continuum. We observe a record zero-point optomechanical coupling rate of \(g_{0}/(2\pi)\approx 0.50\) MHz along with a sevenfold improvement in the single-photon cooperativity of clamped OMCs. Our devices operate at frequencies commonly used in superconducting qubits. This opens a new avenue using clamped OMCs in both classical and quantum communications, sensing, and computation through scalable mechanical circuitry that couples strongly to light. + Footnote †: preprint: APS/123-QED The field of optomechanics receives great interest for applications such as sensing and microwave-to-optics transduction [1; 2; 3; 4]. A leading class of optomechanical devices is the optomechanical crystal (OMC) [5]. In state-of-the-art suspended OMCs, the low-wavevector GHz mechanical modes are confined partly by suspending the device layer. Along with engineered bandgaps, this eliminates mechanical leakage into the substrate [5]. However, suspension also comes at the cost of losing a channel through which heat-carrying noise phonons created by optical absorption can dissipate [6]. Confining coherent GHz phonons while letting heat-carrying noise phonons leak away fast is an understudied and challenging problem. One approach is to laterally connect a suspended OMC region with the rest of the device layer. While these two-dimensional OMCs have led to impressive results, they also require in-plane bandgaps and the associated fine-tuning of geometry [7]. Another approach to provide thermal anchoring is to attach the OMC directly to a substrate. We call this unconventional class of OMCs _clamped_. Besides reducing fabrication complexity and improving thermal anchoring, clamped devices could ease co-integration between phononic, electronic, and photonic devices, all commonly fabricated in the silicon-on-insulator (SOI) platform. Efforts along these lines have been made by using e.g. bound states in the continuum [8] and geometrical softening [9; 10]. These approaches have not yet been able to compete with the conventional suspended systems because of weaker interactions and shorter coherence times. Here, we propose and demonstrate a new class of clamped, i.e. unreleased, OMCs. In our SOI-based OMCs the optomechanical three-wave-mixing interaction takes place between two counter-propagating optical modes and a high-wavevector mechanical mode. We demonstrate the first clamped OMCs with mechanical frequencies exceeding their optical loss rates. This resolved-sideband condition is essential for low-noise quantum transduction between optical and mechanical fields [11]. 
Our new clamped OMCs have record zero-point optomechanical coupling rates of \(g_{0}/(2\pi)=0.50\pm 0.01\) MHz at mechanical frequencies of \(\omega_{\rm m}/(2\pi)=5.4\) GHz. In addition, our clamped OMCs can have a thermal contact area exceeding that of their suspended counterparts. Our results provide a new path for unreleased optomechanical structures to become a competitive platform for both classical and quantum optomechanical circuits. In the following, we first outline the design process of the clamped OMCs and move on to show finite-element simulations of both photonic-phononic crystal waveguides and resonators. Finally, we present fabrication and measurement results of our devices at room temperature. **Design.** A key challenge in designing clamped OMCs is avoiding excessive mechanical leakage into the substrate while keeping a high optomechanical interaction rate. In contrast to its suspended counterpart, a clamped OMC supports a continuum of states for _both_ optical and mechanical waves. Here, we limit leakage from the OMC mechanical mode to substrate bulk- and surface-acoustic-waves (SAWs) by exploiting large mechanical wavevectors. This enables the mechanical mode to be phase-protected from the acoustic continuum, similar to optical waveguides based on total internal reflection. Realizing strong three-wave-mixing optomechanical interactions with such high-wavevector mechanical modes requires that \(k_{\rm m}\approx 2k_{\rm o}\) where \(k_{\rm m}\) (\(k_{\rm o}\)) is the mechanical (optical) wavevector. This phase-matching condition is familiar from the realm of counter-propagating Brillouin interactions [12]. We illustrate the principles of phase-matching for our clamped OMCs in Fig.1a. For a mechanical mode with frequency \(\omega_{\rm m}\) to fall below the continuum, the mechanical wavevector must satisfy \(k_{\rm m}>\omega_{\rm m}/v_{\rm SAW}\), where \(v_{\rm SAW}\) is the substrate SAW phase-velocity. In addition, the first Brillouin zone associated with the crystal period \(a\) sets an upper bound to the set of unique wavevectors. This defines an operating window where both mechanical and optical modes are outside their continua while their three-wave-mixing is phase-matched (\(k_{\rm m}\approx 2k_{\rm o}\)). To visualize this operating window, we plot SAW frequencies for different operating points in Fig.1b. At a given wavevector \(k_{\rm m}\), there is a maximum frequency the OMC mechanical mode can have before entering the continuum. This upper bound for guided mechanical mode frequencies is calculated as \(\omega_{\rm SAW}=k_{\rm m}v_{\rm SAW}\), where \(k_{\rm m}=2k_{\rm o}\). We use optical wavevectors for a pump with effective index \(n_{\rm eff}\) at a vacuum wavelength \(\lambda_{0}=1550\) nm. In addition, we use \(v_{\rm SAW}\approx 3400\) m/s for silicon dioxide [13]. Clamped and phase-matched OMC operation with low mechanical and optical radiation losses into the substrate is accessible for all \((n_{\rm eff},a)\) such that \(n_{\rm eff}>n_{\rm cladding}=1.45\) while also \(\omega_{\rm m}<\omega_{\rm SAW}\). This analysis encourages exploration of unit cells with smaller periods \(a\) as they provide larger operating windows. Investigating unit cells with periods around \(a=188\) nm and widths around \(w=643\) nm, we find a guided \(X\)-point mechanical mode around \(\omega_{\rm m}/(2\pi)=5.4\) GHz. At this mechanical wavevector, we calculate the SAW frequency to be \(\omega_{\rm SAW}/(2\pi)=10.3\) GHz. 
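The operating-window criterion above is straightforward to evaluate numerically. The following is a minimal sketch (our illustration, not the authors' design code); the effective index of 2.35 is an assumption chosen to reproduce the quoted 10.3 GHz continuum edge and is not a number taken from the text.

```python
import numpy as np

# Sketch of the phase-matching window described above (not the authors' code).
# Assumptions: k_o = 2*pi*n_eff/lambda_0 for the pump, k_m = 2*k_o for
# counter-propagating three-wave mixing, and omega_SAW = k_m * v_SAW.
lam0 = 1550e-9        # pump vacuum wavelength (m)
v_saw = 3400.0        # SAW phase velocity in silicon dioxide (m/s), from the text
n_cladding = 1.45     # oxide cladding index

def f_saw_GHz(n_eff):
    """Continuum edge omega_SAW/(2*pi) at the phase-matched k_m = 2*k_o."""
    k_m = 2 * (2 * np.pi * n_eff / lam0)
    return k_m * v_saw / (2 * np.pi) / 1e9

def in_window(n_eff, f_m_GHz):
    """Guided operation requires n_eff > n_cladding and omega_m < omega_SAW."""
    return n_eff > n_cladding and f_m_GHz < f_saw_GHz(n_eff)

# An effective index of roughly 2.35 (our assumption) reproduces the stated
# continuum edge of about 10.3 GHz:
print(f"f_SAW ~ {f_saw_GHz(2.35):.1f} GHz")   # ~10.3 GHz
print(in_window(2.35, 5.4))                    # True: the 5.4 GHz mode is guided
```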
This places our modes firmly outside both the mechanical and optical continua. The mechanical mode profile resembles the "pinch mode" first reported in [14] for suspended OMCs (Fig.3a). The mechanical motion is primarily longitudinal along the OMC, in contrast to recently explored modes [15]. The mode is guided despite the silicon device layer having a faster speed of sound than the substrate. We attribute this to the unit cell's relatively large surface-to-volume ratio. This is known to reduce the effective stiffness of the structure in an effect known as geometrical softening [3; 10; 16]. We design a defect unit cell based on this mechanical mode and calculate mechanical and optical dispersion diagrams for a photonic-phononic crystal waveguide with periodic symmetry (Fig.2). We choose the parameters of the unit cell such that the waveguide supports a C-band optical mode at half the mechanical wavevector. Considering counter-propagating optomechanical interactions, we calculate a unit cell zero-point optomechanical coupling rate of \(g_{0,{\rm uc}}/(2\pi)=4.2\) MHz. This large coupling is mostly mediated by the moving boundary effect and is comparable with those of conventional suspended OMC defect cells [5]. We give further details on the implications of high-wavevector mechanics in standing-wave OMCs in the appendix. To confine the waveguide mode described above in a standing-wave OMC, we design a second type of unit cell: the mirror cell. By increasing its period compared to the defect cell by about a factor of two to \(a=375\) nm, the optical and mechanical bands are pulled below the localized mode frequencies, opening a quasi-bandgap (Fig.2). The OMC is thus assembled by transforming the defect cell in the center into the mirror cell at the cavity perimeters [5; 17]. In this case, the mechanical quasi-bandgap only exists for the first half of the perturbation before the mirror cell becomes a host to the continuum modes. Yet, a mechanical quasi-bandgap lasting for only a few unit cells suffices to reflect the vast majority of the mechanical field. A consequence of the optical mode not being at the \(X\)-point is that it is not the cavity's fundamental mode but a higher-order one. Additionally, the optical mode decays slower in the mirror transition region compared to \(X\)- or \(\Gamma\)-point modes. 
Both of these effects reduce the spatial overlap between optical and mechanical modes. We keep most of the defect unit cell's interaction strength in the full OMC by adding additional defect unit cells before starting the mirror region. Indeed, the field profiles of mechanics and optics approach the defect unit cell's fields for a clamped OMC as the number of defect unit cells \(N\) increases. While this reduces the zero-point coupling rates as \(1/\sqrt{N}\), the increased interface area between a longer clamped OMC and the substrate is also expected to improve thermal anchoring. At this stage, we optimize our design using a Nelder-Mead algorithm. For an optimized clamped OMC with \(N=31\) defect cells, we simulate an optical mode with \(\omega_{\mathrm{o}}/(2\pi)=195\) THz and a mechanical mode with \(\omega_{\mathrm{m}}/(2\pi)=5.64\) GHz with radiation-limited quality factors \(Q_{\mathrm{o}}=1.3\cdot 10^{6}\) and \(Q_{\mathrm{m}}=3.6\cdot 10^{5}\) respectively (Fig.3a). Fourier transformation of the cavity fields indicates that the phase-matching condition \(k_{\mathrm{m}}\approx 2k_{\mathrm{o}}\) is met for counter-propagating optomechanical interaction (Fig.3b). The simulated zero-point optomechanical coupling of the resulting new clamped OMC design is \(g_{0}/(2\pi)=0.50\) MHz.

Figure 1: **(a)** Phase-matching diagram for clamped and counter-propagating optomechanical interactions. The optical mode (blue) with frequency \(\omega_{\rm o}\) couples to a mechanical mode at \(\omega_{\rm m}\). Counter-propagating optical modes interact with a phase-protected mechanical mode at \(k_{\rm m}\approx 2k_{\rm o}\) (red) whose wavevector lies outside the continuum of mechanical modes in the substrate. **(b)** SAW frequencies in silicon dioxide at the mechanical wavevector \(k_{\rm m}=2k_{\rm o}\). The optical pump has a vacuum wavelength \(\lambda_{0}=1550\) nm and effective index \(n_{\rm eff}\) and interacts with mechanics in an OMC with a unit cell of period \(a\). Phase-protection from both optical and mechanical continua is achieved for all \((n_{\rm eff},a)\) where \(n_{\rm eff}>n_{\rm cladding}=1.45\) and \(\omega_{\rm m}<\omega_{\rm SAW}\). The operating point for the presented OMC's defect unit cell is marked by a star.

Figure 2: Mechanical (a) and optical (b) band diagram of the OMC defect unit cell. The left inset in (a) shows the unit cell with its elliptic hole. The right insets in (a) and (b) show the \(X\)-point frequencies as a function of the perturbation from defect to mirror cell. The shaded area denotes the realm of continuum modes. Blue lines denote modes with symmetry with respect to the \(xz\) plane, red lines denote modes of other symmetries. The modes of interest are solid. The horizontal dashed lines indicate the approximate OMC mode frequencies at the operating point.

**Experiments.** We transfer the device pattern to a 220 nm silicon device layer using electron beam lithography (Raith EBPG 5200) followed by an HBr/Cl\({}_{2}\)-based reactive ion etch (STS ICP MPX). Next, we clean the samples in 3:1 piranha solution before measurement. A top-down scanning electron micrograph of the finished device is shown in Fig.4a, which features two clamped SOI OMCs (false color). We measure the properties of the OMC at room temperature and atmospheric pressure. A pump laser (Santec TSL570) at 1550 nm is injected into an on-chip bus waveguide through focusing grating couplers [18] with a coupling efficiency of 18%. The bus waveguide couples to the OMC evanescently. Light reflected off the OMCs is amplified and subsequently detected on a high-speed photoreceiver (see the appendix for details). By sweeping the laser wavelength, we detect the optical resonance at \(\omega_{\mathrm{o}}/(2\pi)=193.1\) THz with a total linewidth \(\kappa/(2\pi)=1.41\) GHz and external coupling rate \(\kappa_{\mathrm{e}}/(2\pi)=600\) MHz. Next, we modulate the pump intensity and demodulate the reflected signal with a vector network analyzer (Fig.4b). We thus perform an \(s_{11}\) measurement which is used to extract the pump detuning \(\Delta=\omega_{\mathrm{L}}-\omega_{\mathrm{o}}\), where \(\omega_{\mathrm{L}}\) is the frequency of the pump laser [19]. This data is also used to confirm the external optical coupling rates. In addition, at an on-chip power level of 375 \(\mu\)W, we observe an electromagnetically induced transparency window at the mechanical frequency [20; 21]. Placing the pump blue-detuned from the optical resonance at roughly \(\Delta=\omega_{\mathrm{m}}\), we measure the mechanical spectrum in the reflected light with a microwave spectrum analyzer. This reveals several mechanical modes, of which we show three in Figure 4c. The full spectrum is shown in the appendix. 
The spectral spacing and relative optomechanical coupling of the three mechanical modes qualitatively agree with simulation, and the absolute frequencies agree to within 150 MHz (Fig.4c inset). The fundamental mechanical mode frequency is at \(\omega_{\mathrm{m}}/(2\pi)=5.365\) GHz. Crucially, this puts the device operation in the resolved-sideband regime with \(\omega_{\mathrm{m}}/\kappa=3.6\). To the best of our knowledge, this is the first demonstration of a clamped OMC in this regime. We note a slight asymmetric feature in the fundamental mode (Fig.4c) which we suspect is related to geometrical disorder. We move on to measure the zero-point optomechanical coupling rate by measuring the mechanical linewidth at varying optical pump powers [11] for both blue- and red-detuned pumps (Fig.4d). We find a strong zero-point optomechanical coupling rate of \(g_{0}/(2\pi)=0.50\pm 0.01\) MHz and a mechanical linewidth of \(\gamma/(2\pi)=6.32\) MHz (Fig.4d). The zero-point coupling rate is in excellent agreement with simulations. The uncertainty in the coupling rate \(g_{0}\) stems mainly from inaccuracy in the measured on-chip powers. The single-photon cooperativity of the device is \(\mathcal{C}_{0}\equiv 4g_{0}^{2}/(\kappa\gamma)=1.13\cdot 10^{-4}\). This cooperativity exceeds that of previously measured clamped devices by about a factor of 7 [8; 9; 10]. We suspect higher single-photon cooperativities are in reach at cryogenic temperatures. To highlight our device performance in relation to the state of the art, we present a summary of important parameters for previous clamped work in Table 1. At a pump power of 375 \(\mu\)W in the bus waveguide with a blue-detuned pump at \(\Delta=\omega_{\mathrm{m}}\), we reach a cooperativity of unity. This is demonstrated in Fig.4e where we show mechanical power spectra as the pump detuning is swept closer to \(\Delta=\omega_{\mathrm{m}}\). When the detuning approaches the mechanical frequency, we observe self-induced oscillations in the fundamental mechanical mode.

Figure 3: (a) Optical (top) and mechanical (bottom) mode profiles of the clamped OMC with \(\tilde{E}_{y}\) the normalized electric field of the optical mode and \(\tilde{\mathbf{u}}\) the normalized mechanical displacement. (b) Fourier transform of the optical field \(E_{y}\) (blue) and mechanical field \(u_{x}\) (red).
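The quoted figures of merit follow from the measured rates by plain arithmetic; the sketch below (ours, not the authors' analysis code) recomputes the single-photon cooperativity and the sideband-resolution ratio.

```python
# Cross-check of the measured figures of merit (plain arithmetic, not the
# authors' analysis code). All rates are the quoted frequency/(2*pi) values.
g0    = 0.50e6    # zero-point optomechanical coupling rate (Hz)
kappa = 1.41e9    # total optical linewidth (Hz)
gamma = 6.32e6    # mechanical linewidth (Hz)
f_m   = 5.365e9   # fundamental mechanical frequency (Hz)

C0 = 4 * g0**2 / (kappa * gamma)
print(f"C0 = {C0:.2e}")                      # ~1.1e-4, consistent with 1.13e-4
print(f"omega_m/kappa = {f_m / kappa:.1f}")  # ~3.8 with kappa = 1.41 GHz;
# Table 1 rounds kappa to 1.5 GHz, which gives the quoted ratio of 3.6.
```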
In conclusion, we designed and demonstrated a new class of clamped OMCs in SOI leveraging high-wavevector mechanical modes at gigahertz frequencies and counter-propagating optomechanical interactions. To the best of our knowledge, they are the first clamped OMCs in the resolved-sideband regime - a key requirement for low-noise quantum transduction between optics and mechanics. We observe a record zero-point optomechanical coupling rate for clamped OMCs of \(g_{0}/(2\pi)=0.50\) MHz - in excellent agreement with simulation. Their single-photon cooperativity exceeds that of previous clamped OMCs by about an order of magnitude. We suspect that further improvements of the optomechanical overlap are in reach. In addition, clamped OMCs can have a significantly larger thermal contact area than suspended structures, so they may suffer less from pump-induced mechanical heating in cryogenic environments [6; 7]. The operation of our clamped OMCs does not require in-plane bandgaps and relies on robust confinement of mechanical modes with frequencies and wavevectors outside the mechanical continuum. Our approach is not restricted to SOI; it is applicable to a wide range of materials and substrates. The clamped OMCs can be combined with e.g. spins or superconducting qubits. This opens a new avenue for scalable classical and quantum optomechanical circuits for applications in transduction, sensing, and acoustic processing of electromagnetic signals [3; 22]. Mechanical systems are often seen as a universal bus. Having them clamped on a substrate while coupling strongly to light unlocks new opportunities in communication and computation. **Funding.** We gratefully acknowledge support from the Wallenberg Centre for Quantum Technology and from the European Research Council via Starting Grant 948265. **Acknowledgement.** We acknowledge Trond Hjperpekjon Haug, Witlef Wieczorek, and Per Delsing for helpful discussions. J.K. led the nanofabrication and measurement and assisted with design. P.B. led the design and assisted with nanofabrication and measurement. J.F. assisted with nanofabrication and measurement. J.K., P.B., and R.V.L. wrote the manuscript. R.V.L. provided experimental and theoretical support and conceived as well as supervised the project. **Data availability.** The datasets generated and analyzed for the current study are available from the corresponding author on reasonable request.

\begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Reference & Liu et al. (est.) [8] & Zhang et al. [9] & Sarabalis et al. [10] & This work \\ \hline \(g_{0}/(2\pi)\) (kHz) & 87 & 51 & 290 & **500** \\ \(\omega_{\mathrm{m}}/(2\pi)\) (GHz) & **7.5** & 0.66 & 0.48 & **5.37** \\ \(\kappa/(2\pi)\) (GHz) & 9.7 & 4.9 & 8.2 & **1.5** \\ \(\gamma/(2\pi)\) (MHz) & 16 & **0.6** & 2.6 & 6.3 \\ \(\omega_{\mathrm{m}}/\kappa\) (-) & 0.77 & 0.14 & 0.058 & **3.6** \\ \(C_{0}\equiv 4g_{0}^{2}/(\kappa\gamma)\) (-) & \(2.0\cdot 10^{-7}\) & \(3.5\cdot 10^{-6}\) & \(1.6\cdot 10^{-5}\) & \(\mathbf{1.1\cdot 10^{-4}}\) \\ \hline \end{tabular} \end{table} Table 1: **Table comparing clamped optomechanical structures on key parameters.**

Figure 4: **Fabrication and measurement of the OMC.** **(a)** Top-down scanning electron micrograph of two silicon clamped OMCs next to an optical bus waveguide. We measure the device in reflection. Scale bar indicates 1 \(\mu\)m. Light enters through the center waveguide at the top of the figure and couples evanescently to the OMCs. **(b)** To extract the pump detuning \(\Delta\), we perform optical sideband spectroscopy of one of the OMCs measured with a vector network analyzer and a blue-detuned pump laser. The inset shows a zoomed-in view of the dashed region with higher resolution where we observe an electromagnetically induced transparency feature at the mechanical frequency. **(c)** The measured thermal spectrum with model fit (orange) shows a strongly coupled fundamental mechanical mode at \(\omega_{\mathrm{m}}/(2\pi)=5.365\) GHz along with two higher-order modes. **(d)** Mechanical linewidth of the fundamental mechanical mode as a function of optical intracavity photons. We consecutively carry out the experiment first with a blue- and then with a red-detuned pump. **(e)** We observe cooperativity of unity and mechanical lasing for an on-chip pump power of 375 \(\mu\)W.

## Appendix A Phase-matching for counter-propagating interactions in a standing-wave OMC As is motivated in the main text, mechanical modes used for optomechanical coupling in clamped OMCs require high wavevectors to be phase-protected from the acoustic continuum. Using mechanical modes with a non-zero wavevector implies a varying mechanical phase along the length of the OMC. 
This is in stark contrast to the \(\Gamma\)-point mechanical modes commonly adopted in suspended OMCs [5; 17; 7]. In this section, we explore how this affects the spatial phase of the local optomechanical coupling. Crucially, we require that there is no significant phase-induced cancellation of coupling when considering a cavity consisting of several unit cells. We start by looking at the source of the optomechanical coupling. For the mechanical modes presented in the main text, the moving boundary effect is the most prominent contribution to the coupling. The interaction rate associated with this coupling is [17] \[g_{0}=\sqrt{\frac{\hbar}{2\omega_{\rm m}}}\frac{\omega_{\rm o}}{2}\frac{\int(\mathbf{u}(\mathbf{r})\cdot\mathbf{n})(\Delta\epsilon|\mathbf{E}^{\parallel}|^{2}-\Delta(\epsilon^{-1})|\mathbf{D}^{\perp}|^{2}){\rm d}A}{\sqrt{\int\rho|\mathbf{u}(\mathbf{r})|^{2}{\rm d}^{3}\mathbf{r}}\int\epsilon(\mathbf{r})|\mathbf{E}(\mathbf{r})|^{2}{\rm d}^{3}\mathbf{r}}. \tag{10}\] We restrict the following discussion to the contribution of this term; however, the same analysis is valid for the contribution of photoelasticity. Next, we turn our attention to the participating electrical (mechanical) fields \(\mathbf{E}\) (\(\mathbf{u}\)). Crucially, we analyze the coupling between fields in a standing-wave cavity. This implies that the fields consist of both forward- (f) and backward- (b) propagating mechanical and optical components, i.e., \[\begin{split}&\mathbf{E}(\mathbf{r})=(\tilde{\mathbf{E}}_{\rm f}(\mathbf{r})e^{ixk_{\rm o,f}}+\tilde{\mathbf{E}}_{\rm b}(\mathbf{r})e^{ixk_{\rm o,b}})f_{\rm o}(x)\\ &\mathbf{u}(\mathbf{r})=(\tilde{\mathbf{u}}_{\rm f}(\mathbf{r})e^{ixk_{\rm m,f}}+\tilde{\mathbf{u}}_{\rm b}(\mathbf{r})e^{ixk_{\rm m,b}})f_{\rm m}(x).\end{split} \tag{11}\] Here, we assume that the longitudinal direction of the cavity is oriented in the \(x\)-direction. The fields \(\tilde{\mathbf{E}}\) (\(\tilde{\mathbf{u}}\)) are unit cell Bloch functions and \(f_{\rm o}(x)\) (\(f_{\rm m}(x)\)) are envelope functions for the optical (mechanical) fields. For the sake of brevity, we have without loss of generality omitted the presence of pump and sideband optical fields with differing wavevectors. The analysis remains applicable to the full three-wave-mixing scenario involving interactions between a counter-propagating pump and sideband of slightly differing frequencies. We can now use (10) to calculate the total coupling. First, we assume that the envelopes vary slowly, i.e., the cavity consists of many unit cells; we can therefore take the envelope functions to be effectively uniform along the OMC to illustrate the principles at work. Due to the squared optical fields, the result can be expanded into six terms with differently varying phase along the \(x\)-direction. The terms have factors of the following forms \[\begin{split}&\int e^{ix(k_{\rm m,f})}(...){\rm d}A,\qquad\qquad(i)\\ &\int e^{ix(k_{\rm m,f}+k_{\rm o,f}-k_{\rm o,b})}(...){\rm d}A,\qquad\qquad(ii)\\ &\int e^{ix(k_{\rm m,f}-k_{\rm o,f}+k_{\rm o,b})}(...){\rm d}A,\qquad\qquad(iii)\\ &\int e^{ix(k_{\rm m,b})}(...){\rm d}A,\qquad\qquad(iv)\\ &\int e^{ix(k_{\rm m,b}+k_{\rm o,f}-k_{\rm o,b})}(...){\rm d}A,\qquad\qquad(v)\\ &\int e^{ix(k_{\rm m,b}-k_{\rm o,f}+k_{\rm o,b})}(...){\rm d}A.\qquad\qquad(vi)\end{split} \tag{12}\]
In the case that the mechanics has a vanishing wavevector, as in most suspended OMCs, we have \(k_{\rm m,f}=k_{\rm m,b}=0\), such that terms (\(i\)) and (\(iv\)) lose their spatial phase, preventing cancellations in the total overlap integral. For the remaining terms to generate finite coupling, we find that \(k_{\rm o,f}=k_{\rm o,b}\) in this scenario with small mechanical wavevector. This, however, contradicts the fact that standing waves require counter-propagating fields. Therefore, in a sufficiently long cavity, only two terms will contribute to the overall coupling and the other terms are strongly suppressed. In essence, we see that low-wavevector mechanics couples well to a co-propagating optical pump and sideband, as is familiar from forward intra-modal Brillouin interactions [12]. On the other hand, in our clamped OMCs we have \(k_{\rm m,f}=-k_{\rm m,b}=k_{\rm m}\neq 0\) such that the integrals of (\(i\)) and (\(iv\)) tend to be strongly suppressed due to cancellations arising from different parts of the OMC. The phase-matching condition therefore becomes \(k_{\rm m}=\pm(k_{\rm o,f}-k_{\rm o,b})\), where the plus (minus) sign satisfies coupling in terms (\(iii\)) and (\(v\)) ((\(ii\)) and (\(vi\))). As an example, when \(k_{\rm o,f}=-k_{\rm o,b}=k_{\rm o}\) we have that \(k_{\rm m}=2k_{\rm o}\), which is the principle upon which coupling is generated in the OMCs presented in the main text. This condition is familiar from backward intra-modal Brillouin interactions [12]. Generally, integration over terms in (12) with phase mismatch \(\Delta k\) leads to suppression by a factor \(\sin(\Delta kL)/(\Delta kL)\). As expected, we see that the implications of phase-matching are most prominent for long cavities where wavevectors are well-defined. For the OMC presented in the main text, co-propagating terms involving our high-wavevector mechanical mode are suppressed by a factor \(\Delta kL=2k_{\rm o}L\approx\pi N\approx 10^{2}\) compared to counter-propagating terms. Here we used \(k_{\rm o}\approx\pi/(2a)\) and \(L=Na\) with \(N=31\) the number of unit cells. In summary, this analysis shows that phase-matched counter-propagating optomechanical interactions are possible in OMCs when using high-wavevector mechanical modes. Therefore, these interactions can be as strong as, and scale similarly to, the more common approach based on co-propagating optomechanical interactions and low-wavevector mechanical modes. 
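The suppression estimate above amounts to a one-line evaluation; here is a minimal numerical sketch (ours, not the authors' code) using the stated approximations \(k_{\rm o}\approx\pi/(2a)\) and \(L=Na\).

```python
import numpy as np

# Phase-mismatch suppression of co-propagating terms, following Appendix A.
a, N = 188e-9, 31          # defect-cell period (m) and number of defect cells
L = N * a
k_o = np.pi / (2 * a)      # approximate optical wavevector
dk = 2 * k_o               # mismatch of a co-propagating term vs. k_m = 2*k_o

x = dk * L                 # = pi*N, about 97
# |sin(x)/x| <= 1/x, so the envelope of the suppression factor is 1/(dk*L):
print(f"dk*L = {x:.0f}, suppression envelope ~ {1/x:.0e}")   # ~1e-2
```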
## Appendix B Experimental setup A simplified diagram of the measurement setup is presented in Fig.5. We carry out the optical characterization with a fiber-coupled continuously tunable laser in the C-band (Santec TSL570). An electro-optic intensity modulator (IXblue MX-LN-20) generates sideband tones at the mechanical frequency. To control the power sent to the device under test, we use digitally controlled variable optical attenuators (Sercalo VP1-9N-12-16). Light polarization in the fiber network is managed through fiber polarization controllers. After interacting with the device under test, the reflected light is circulated and amplified with an erbium doped fiber amplifier (Amonics AEDFA-PA-35-B). A high-speed photoreceiver (Newport 1544-B) downconverts the GHz signals contained in the reflected light. We detect and display the thermal mechanical power spectrum with an electrical spectrum analyzer (R&S FSW26) in real-time mode with a resolution bandwidth of 250 kHz. Finally, we use a vector network analyzer (R&S ZNB20) to drive and demodulate the GHz optical modulation. ## Appendix C Wide mechanical spectrum The mechanical spectrum shown in Fig.4c in the main text features three prominent mechanical modes. Searching in a wider span of frequencies reveals four additional modes. We present the spectrum along with the measured frequencies in Fig. 6. To reduce the risk of misinterpreting laser noise as mechanical modes, we perform the measurement with two independent laser sources (Santec TSL570 and Toptica CTL1550).
2309.10496
Note on general functional flows in equilibrium systems
We study the response of generating functionals to a variation of parameters (couplings) in equilibrium systems, i.e. in quantum field theory (QFT) and equilibrium statistical mechanics. These parameters can be either physical ones, such as coupling constants, or artificial ones that are intentionally introduced, such as the renormalization scale in field theories. We first derive general functional flow equations for the generating functional (grand-canonical potential) $W[J]$ of the connected diagrams. Then, we obtain functional flow equations for the one-particle irreducible ($1$PI) vertex functional (canonical potential) $\Gamma[\phi]$ by performing the Legendre transformation. By taking functional derivatives of the flow equations, we can obtain an infinite hierarchy of equations for the $1$PI vertices. We also point out that a Callan-Symanzik-type equation holds among the vertices when the partition function is invariant under some changes of the parameters. After discussing general aspects of parameter response, we apply our formalism to several examples and reproduce the well-known functional flow equations. Our response theory provides us with a systematic and general way to obtain various functional flow equations in equilibrium systems.
Kiyoharu Kawana
2023-09-19T10:17:32Z
http://arxiv.org/abs/2309.10496v1
# Note on general functional flows in equilibrium systems ###### Abstract We study the response of generating functionals to a variation of parameters (couplings) in equilibrium systems, i.e. in quantum field theory (QFT) and equilibrium statistical mechanics. These parameters can be either physical ones, such as coupling constants, or artificial ones that are intentionally introduced, such as the renormalization scale in field theories. We first derive general functional flow equations for the generating functional (grand-canonical potential) \(W[J]\) of the connected diagrams. Then, we obtain functional flow equations for the one-particle irreducible (1PI) vertex functional (canonical potential) \(\Gamma[\phi]\) by performing the Legendre transformation. By taking functional derivatives of the flow equations, we can obtain an infinite hierarchy of equations for the 1PI vertices. We also point out that a Callan-Symanzik-type equation holds among the vertices when the partition function is invariant under some changes of the parameters. After discussing general aspects of parameter response, we apply our formalism to several examples and reproduce the well-known functional flow equations. Our response theory provides us with a systematic and general way to obtain various functional flow equations in equilibrium systems. ###### Contents * 1 Introduction * 2 General response theory * 2.1 General flow equations in grand-canonical formulation * 2.2 General flow equations in canonical formulation * 2.3 Generalized Callan-Symanzik equations * 3 Examples * 3.1 Functional renormalization group in QFT * 3.2 Quantum spin systems * 3.3 Classical liquids * 3.4 Classical nonequilibrium systems * 4 Summary and discussion ## 1 Introduction For the past half century, the Renormalization Group (RG) has played a central role in fundamental physics. The essence of the RG is the coarse-graining of the microscopic degrees of freedom and the construction of effective theories at low-energy scales (long distances). In particular, one of its great successes is the explanation of the universality classes of critical phenomena in nature. For example, in the Wilsonian RG [1, 2, 3], we can discuss the RG flow of the effective action by integrating out the high-energy modes. Then, critical behaviors of a system are explained by the (linearized) flow structures around a fixed point. Not only is the RG very powerful for describing critical phenomena, it is also useful for (non)perturbative calculations in field theories; the independence of the correlation functions or the effective action of the renormalization scale implies nontrivial differential flow equations (Callan-Symanzik equations) [4, 5], and we can systematically improve perturbative calculations by using these equations [6, 7, 8]. The Functional Renormalization Group (FRG) [9, 10, 11, 12, 13, 14, 15, 16] provides us with a non-perturbative way to implement the idea of the RG in general many-body systems. Essentially, the FRG combines the functional method in mathematical physics with the Wilsonian RG by introducing an artificial (IR) cut-off parameter \(k\) in the action or Hamiltonian, and we see the response of the system to a continuous change of \(k\). By construction, the flow equation of the (average) effective action \(\Gamma_{k}[\phi]\) is one-loop exact, that is, it is expressed as a one-loop integral with the exact two-point correlation function. Thus, some reasonable approximations are necessary when one wants to solve the flow equation in practice. See Refs. 
[15, 16] and references therein for various approximation methods and for countless applications of the FRG. In this paper, we discuss the basic ideas and formulations that underlie these functional flow approaches in general equilibrium systems, i.e. in quantum field theory (QFT) and in equilibrium statistical mechanics. The essence of the flow approach is to see the response of a system to the variation of parameters in the action or Hamiltonian. The parameters can be either physical ones, such as coupling constants \(\{g_{k}\}\), temperature \(T\), and chemical potential \(\mu\), or artificial ones that are intentionally added, such as the renormalization scale \(M\). We derive general functional flow equations for the generating functionals in both grand-canonical and canonical formulations. In QFT, the grand-canonical (canonical) generating functional \(W[J]\) (\(\Gamma[\phi]\)) corresponds to the generating functional of the connected diagrams (one-particle irreducible (1PI) diagrams). In particular, when the partition function is invariant under some changes of parameters, we can obtain another non-trivial relation for the correlation functions or 1PI vertices, which can be interpreted as a generalized Callan-Symanzik equation. After discussing general aspects of parameter response in Section 2, we apply them to several examples and reproduce the well-known flow equations based on our general results. The first (trivial) example is the FRG in QFT, where an IR regulator function \(R_{k}(x-y)\) is introduced in the propagator and we reproduce the Wetterich equation. The second example is quantum spin systems (quantum Heisenberg models) [17], and it is shown that the canonical generating functional follows the Wetterich equation by introducing an artificial parameter \(t\) and modifying the exchange coupling as \(J_{ij}\to J_{ij}(t)\,,\;J_{ij}(t=1)=J_{ij}\). The third example is classical liquid systems [18, 19]. Here, we study two different flow approaches. One is called the "Hierarchical Reference Theory" (HRT) [20, 21, 22, 23, 24], where an artificial IR cut-off \(k\) is introduced into the microscopic two-body pair potential \(v(x-y)\) as in the FRG. We show that the resultant flow equation is identical to the Wetterich equation upon identifying the cut-off-dependent pair potential \(v_{k}(x-y)\) with a regulator \(R_{k}(x-y)\). Our general theory also enables us to obtain an exact flow equation in the presence of higher-order many-body potentials \(v_{n}(\{x_{i}\})\;,\;n\geq 3\). The other application in classical liquids is the density renormalization group, which was investigated in our previous paper [25]. In this case, we consider a scale transformation \(v(x)\;\rightarrow\;v(\lambda x)\) and regard \(\lambda\) as a response parameter. In the previous study, we relied on a field-theoretical technique [26] to derive the flow equations, but they can be derived more directly (and more easily) by using the general results of our response theory. As a last example, we discuss classical nonequilibrium systems, in particular Langevin stochastic systems. These systems can be treated as equilibrium ones by casting them into path-integral form. We again see that the flow equation of the effective action of these nonequilibrium systems is given by the Wetterich equation when a cut-off regulator \(R_{k}(x-y)\) is added to the propagator. In this paper, we mainly focus on theoretical formulations and general aspects of response theory. 
More detailed analysis in some concrete models/systems is left for future investigation. The organization of this paper is as follows. In Section 2, we formulate general response theory. We derive the exact functional flow equations for the generating functionals by considering general variations of parameters. We also point out that a non-trivial functional relation among the correlation functions or 1PI vertices can hold when a system has a redundancy under some changes of the parameters. In Section 3, we apply our general results to typical equilibrium systems and reproduce the well-known flow equations. A summary and discussion are presented in Section 4. Throughout the paper, we work in the \(d\)-dimensional flat spacetime with the metric convention \(\eta_{\mu\nu}=(-,+,\cdots,+)\). ## 2 General response theory We discuss general parameter response in equilibrium systems. The main goal is to derive the exact functional flow equation for the 1PI generating functional, Eq. (27). ### General flow equations in grand-canonical formulation We consider a broad class of equilibrium systems whose bare action \(S\) or Hamiltonian \(H\) in the \(d\)-dimensional flat spacetime is given by \[S[\phi,\{\lambda_{k}\};J]=\sum_{n=1}^{\infty}\int\left(\prod_{i=1}^{n}d^{d}x_{i}\right)\frac{v_{n}(\{x_{i}\};\{\lambda_{k}\})}{n!}\left(\prod_{j=1}^{n}\phi(x_{j})\right)+(i\hbar^{-1})^{-1}\int d^{d}xJ(x)\phi(x)\, \tag{1}\] \[H[\phi,\{\lambda_{k}\};J]=\sum_{n=1}^{\infty}\int\left(\prod_{i=1}^{n}d^{d-1}x_{i}\right)\frac{v_{n}(\{x_{i}\};\{\lambda_{k}\})}{n!}\left(\prod_{j=1}^{n}\phi(x_{j})\right)-\beta^{-1}\int d^{d-1}xJ(x)\phi(x)\, \tag{2}\] where \(x_{i}=\overrightarrow{x}_{i}\) denotes a position of the \(i\)-th particle, \(J(x)\) is an external source, \(\{x_{i}\}=\{x_{i}\}_{i=1}^{n}\), and \(\{\lambda_{k}\}\) represents a set of general parameters (couplings) that can include artificial ones as well. Here, we simply consider one real scalar model in a continuous spacetime, but generalization to vector models and lattice systems is straightforward. See Sections 3.2 and 3.4 for examples. Note also that the microscopic \(n\)-body potential \(v_{n}(\{x_{i}\};\{\lambda_{k}\})\) is symmetric under the permutation of positions and can contain the spacetime derivatives \(\partial_{\mu}\) in general. For example, an ordinary scalar QFT corresponds to \[v_{2}(x_{1},x_{2})=\delta^{(d)}(x_{1}-x_{2})(\partial^{2}-m^{2})\,\quad v_{n}(\{x_{i}\};\{\lambda_{k}\})=-\lambda_{n}\prod_{i=2}^{n}\delta^{(d)}(x_{1}-x_{i})\ (\mbox{for}\ n\neq 2)\, \tag{3}\] and the two-body classical liquid system corresponds to [18, 19] \[v_{1}(x)=-\frac{1}{2}v(x,x)\,\quad v_{2}(x_{1},x_{2})=v(x_{1},x_{2})\,\quad v_{n}=0\quad(\mbox{for}\ n\geq 3)\, \tag{4}\] where \(v(x,y)\) is a microscopic two-body pair potential. In this case, \(\phi(x)=\rho(x)\) corresponds to the density of the system and \(J(x)=\beta U(x)\) corresponds to the external chemical potential. See Section 3.3 for more details. To simplify the expressions, we also use the following notations for the first two interactions: \[\int d^{\tilde{d}}xv_{1}(x)\phi(x)=\langle v_{1}|\phi\rangle\,\quad\int d^{\tilde{d}}x\int d^{\tilde{d}}y\phi(x)v_{2}(x,y)\phi(y)=\langle\phi|v_{2}|\phi\rangle. \tag{5}\] where \(\tilde{d}=d\)\((d-1)\) for QFT (statistical mechanics). 
The partition function is defined by \[Z[\{\lambda_{k}\};J]=e^{sW[\{\lambda_{k}\};J]}=\begin{cases}\int \mathcal{D}\phi e^{i\hbar^{-1}S[\phi,\{\lambda_{k}\};J]}\\ \operatorname{Tr}\left(e^{-\beta H[\phi,\{\lambda_{k}\};J]}\right)\end{cases} \tag{6}\] where \(s=i\hbar^{-1}\)\((-\beta)\) for \(S\)\((H)\). Here, \(W[\{\lambda_{k}\};J]\) corresponds to the generating functional of the connected diagrams (grand-canonical potential) in QFT (statistical mechanics). We consider the response of the generating functional \(sW[\{\lambda_{k}\};J]\) to a small change of the parameters \(\{\delta\lambda_{k}\}\): \[\delta(sW[\{\lambda_{k}\};J])=s\sum_{n=1}^{\infty}\frac{1}{n!}\int\left(\prod _{i=1}^{n}d^{\tilde{d}}x_{i}\right)\delta v_{n}(\{x_{i}\};\{\lambda_{k}\}) \left\langle T\prod_{j=1}^{n}\phi(x_{i})\right\rangle_{J}\, \tag{7}\] where \[\left\langle T\prod_{j=1}^{n}\phi(x_{i})\right\rangle_{J} =\frac{1}{Z}\begin{cases}\int\mathcal{D}\phi\left(\prod_{i=1} ^{n}\phi(x_{i})\right)e^{i\hbar^{-1}S[\phi,\{\lambda_{k}\};J]}\\ \operatorname{Tr}\left(\prod_{i=1}^{n}\phi(x_{i})e^{-\beta H[\phi,\{\lambda_{ k}\};J]}\right)\end{cases} \tag{8}\] \[:=G^{(n)}(\{x_{i}\};\{\lambda_{k}\};J). \tag{9}\] Then, Eq. (7) can be written as \[\frac{d(sW[\{\lambda_{k}\};J])}{d\lambda_{k}}=s\sum_{n=1}^{\infty}\frac{1}{n! }\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)\frac{\partial v_{n}(\{x_{ i}\};\{\lambda_{k}\})}{\partial\lambda_{k}}G^{(n)}(\{x_{i}\};\{\lambda_{k}\};J) \quad\text{(for each $k$)}. \tag{10}\] More generally, for a parametric variation \(\lambda_{k}\to\lambda_{k}(t)\), we have \[\frac{d(sW[\{\lambda_{k}(t)\};J])}{dt} =\sum_{k=1}^{\infty}\frac{d\lambda_{k}(t)}{dt}\frac{\partial(sW[ \{\lambda_{k}\};J])}{\partial\lambda_{k}}\bigg{|}_{\lambda_{k}=\lambda_{k}(t)} \tag{11}\] \[=s\sum_{k=1}^{\infty}\frac{d\lambda_{k}(t)}{dt}\sum_{n=1}^{\infty }\frac{1}{n!}\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)\frac{\partial v _{n}}{\partial\lambda_{k}}G^{(n)}(\{x_{i}\};\{\lambda_{k}\};J)\bigg{|}_{ \lambda_{k}=\lambda_{k}(t)}. \tag{12}\] Note that \(G^{(n)}\) contains non-connected diagrams and is related to the connected correlation functions defined by \[\frac{\delta^{n}(sW[\{\lambda_{k}\};J])}{\delta J(x_{1})\cdots\delta J(x_{n})}=F^{ (n)}(\{x_{i}\};\{\lambda_{k}\};J). \tag{13}\] For example, \[F^{(1)}(x;\{\lambda_{k}\};J) =G^{(1)}(x;\{\lambda_{k}\};J)\, \tag{14}\] \[F^{(2)}(x,y;\{\lambda_{k}\};J) =G^{(2)}(x,y;\{\lambda_{k}\};J)-F^{(1)}(x;\{\lambda_{k}\};J)F^{(1 )}(y;\{\lambda_{k}\};J). \tag{15}\] By definition, the connected correlation functions satisfy \[\frac{\delta F^{(n)}(\{x_{i}\};\{\lambda_{k}\};J)}{\delta J(x_{n+1})}=F^{(n+1) }(\{x_{i}\};\{\lambda_{k}\};J). \tag{16}\] For simplicity, we denote the correlation function without the source term as \[F^{(n)}(\{x_{i}\};\{\lambda_{k}\})=F^{(n)}(\{x_{i}\};\{\lambda_{k}\};J=0). \tag{17}\] Eq. (10) is a general functional flow equation in the grand-canonical formulation. By taking the functional derivatives of Eq. (10) with respect to \(J(x)\) (and putting \(J(x)=0\)), we can obtain hierarchical equations for the correlation functions. 
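In zero dimensions, where the functional integral collapses to an ordinary integral, the relations between the full correlation functions \(G^{(n)}\) and the connected ones \(F^{(n)}\) in Eqs. (13)-(15) can be checked directly. The following is a minimal sketch under illustrative assumptions (a single real variable, the convention \(sW=\log Z\), and an arbitrary quartic weight); it is ours and not tied to any specific model in the paper.

```python
import numpy as np
from scipy.integrate import quad

# Zero-dimensional toy check of Eqs. (13)-(15): the "path integral" is an
# ordinary integral, so G^(n) and F^(n) can be computed by finite differences.
m2, lam = 1.0, 0.5

def Z(J):
    f = lambda phi: np.exp(-(0.5*m2*phi**2 + lam*phi**4/24.0) + J*phi)
    return quad(f, -10, 10)[0]

def d(f, J, n, h=1e-3):
    """n-th derivative by central finite differences (n = 1 or 2)."""
    if n == 1:
        return (f(J+h) - f(J-h)) / (2*h)
    return (f(J+h) - 2*f(J) + f(J-h)) / h**2

J0 = 0.3
G1 = d(Z, J0, 1) / Z(J0)          # G^(1) = <phi>_J   (full correlator)
G2 = d(Z, J0, 2) / Z(J0)          # G^(2) = <phi^2>_J (full correlator)
W  = lambda J: np.log(Z(J))       # sW = log Z in this illustrative convention
F1 = d(W, J0, 1)                  # connected one-point function
F2 = d(W, J0, 2)                  # connected two-point function

print(np.isclose(F1, G1))                      # Eq. (14)
print(np.isclose(F2, G2 - F1**2, atol=1e-5))   # Eq. (15)
```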
As the first two members of this hierarchy, we have \[\frac{d}{d\lambda_{k}}F^{(1)}(x;\{\lambda_{k}\})=s\sum_{n=1}^{\infty}\frac{1}{n!}\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)\frac{\partial v_{n}}{\partial\lambda_{k}}\frac{\delta G^{(n)}(\{x_{i}\};\{\lambda_{k}\})}{\delta J(x)}\, \tag{18}\] \[\frac{d}{d\lambda_{k}}F^{(2)}(x,y;\{\lambda_{k}\})=s\sum_{n=1}^{\infty}\frac{1}{n!}\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)\frac{\partial v_{n}}{\partial\lambda_{k}}\frac{\delta^{2}G^{(n)}(\{x_{i}\};\{\lambda_{k}\})}{\delta J(x)\delta J(y)}. \tag{19}\] The calculation of the right-hand sides requires more information about a specific model. In particular, when we consider parameter changes such that only \(G^{(1)}\) or \(G^{(2)}\) appears on the right-hand side of Eq. (10), Eqs. (14)(15)(16) are sufficient to determine the flow equations for the higher-order correlation functions \(F^{(l)}\). For example, when only \(G^{(2)}\) appears in Eq. (10), the first two equations (18)(19) become \[\frac{d}{d\lambda_{k}}F^{(1)}(x)=\frac{s}{2}\int d^{\tilde{d}}x_{1}\int d^{\tilde{d}}x_{2}\frac{\partial v_{2}(x_{1},x_{2})}{\partial\lambda_{k}}\left\{F^{(3)}(x,x_{1},x_{2})+2F^{(2)}(x,x_{1})F^{(1)}(x_{2})\right\}\, \tag{20}\] \[\frac{d}{d\lambda_{k}}F^{(2)}(x,y)=\frac{s}{2}\int d^{\tilde{d}}x_{1}\int d^{\tilde{d}}x_{2}\frac{\partial v_{2}(x_{1},x_{2})}{\partial\lambda_{k}}\times\left\{F^{(4)}(x,y,x_{1},x_{2})+2F^{(3)}(x,y,x_{1})F^{(1)}(x_{2})+2F^{(2)}(x,x_{1})F^{(2)}(y,x_{2})\right\}\, \tag{21}\] where we have omitted \(\{\lambda_{k}\}\) in the correlation functions for simplicity. In this case, the flow equations for the higher-order correlation functions \(F^{(l)}\) (\(l\geq 3\)) can be obtained straightforwardly by taking the functional derivative of Eq. (19) with the use of Eq. (16). ### General flow equations in canonical formulation Usually, it is more convenient to consider the generating functional of 1PI vertices (canonical potential) \(\Gamma[\phi]\) instead of the grand-canonical one \(W[J]\). Here, we derive the flow equation of \(\Gamma[\phi]\). The effective action is defined by the Legendre transformation of \(W[\{\lambda_{k}\};J]\): \[s\Gamma[\{\lambda_{k}\};\phi]=\min_{J}\left(sW[\{\lambda_{k}\};J]-\langle J|\phi\rangle\right) \tag{22}\] \[=sW[\{\lambda_{k}\};J_{\phi}]-\langle J_{\phi}|\phi\rangle\, \tag{23}\] where \(J_{\phi}\) is a solution of \[\frac{\delta(sW[\{\lambda_{k}\};J])}{\delta J(x)}=F^{(1)}(x;\{\lambda_{k}\};J)=\phi(x). \tag{24}\] This implies that the two-point function is given by \[F^{(2)}(x,y;\{\lambda_{k}\};J_{\phi})=\frac{\delta^{2}(sW[\{\lambda_{k}\};J])}{\delta J(x)\delta J(y)}\bigg|_{J=J_{\phi}}=\frac{\delta\phi(x)}{\delta J_{\phi}(y)} \tag{25}\] for a given field \(\phi(x)\). The parameter derivative is calculated as \[\frac{d(s\Gamma[\{\lambda_{k}\};\phi])}{d\lambda_{k}}=\frac{d}{d\lambda_{k}}\left(sW[\{\lambda_{k}\};J_{\phi}]-\langle J_{\phi}|\phi\rangle\right)=\frac{d(sW[\{\lambda_{k}\};J])}{d\lambda_{k}}\bigg|_{J=J_{\phi}}-\int d^{\tilde{d}}x\frac{\partial J_{\phi}}{\partial\lambda_{k}}\frac{\delta}{\delta J_{\phi}(x)}\left(sW[\{\lambda_{k}\};J_{\phi}]-\langle J_{\phi}|\phi\rangle\right)=\frac{d(sW[\{\lambda_{k}\};J])}{d\lambda_{k}}\bigg|_{J=J_{\phi}}\, \tag{26}\] where the second term in the second line vanishes by Eq. (24). Thus, by Eq. 
(10), we obtain a general flow equation of \(\Gamma[\{\lambda_{k}\};\phi]\): \[\frac{d(s\Gamma[\{\lambda_{k}\};\phi])}{d\lambda_{k}}=s\sum_{n=1}^{\infty}\frac{1}{n!}\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)\frac{\partial v_{n}(\{x_{i}\};\{\lambda_{k}\})}{\partial\lambda_{k}}G^{(n)}(\{x_{i}\};\{\lambda_{k}\};J_{\phi}). \tag{27}\] To derive the flow equations for the higher-order vertices, we define the \(n\)-point (1PI) vertex by \[c^{(n)}(\{x_{i}\};\{\lambda_{k}\};\phi)=\frac{\delta^{n}(s\Gamma[\{\lambda_{k}\};\phi])}{\delta\phi(x_{1})\cdots\delta\phi(x_{n})}. \tag{28}\] In particular, the first two vertices can be written as \[c^{(1)}(x;\{\lambda_{k}\};\phi)=-J_{\phi}(x)\,\quad c^{(2)}(x,y;\{\lambda_{k}\};\phi)=-\frac{\delta J_{\phi}(x)}{\delta\phi(y)}. \tag{29}\] One can check that these vertices and \(F^{(2)}\) satisfy \[\int d^{\tilde{d}}zF^{(2)}(x,z;\{\lambda_{k}\};J_{\phi})c^{(2)}(z,y;\{\lambda_{k}\};\phi)=-\delta^{(\tilde{d})}(x-y)\, \tag{30}\] \[\frac{\delta F^{(2)}(x_{1},x_{2};\{\lambda_{k}\};\phi)}{\delta\phi(x_{3})}=\int d^{\tilde{d}}z\int d^{\tilde{d}}z^{\prime}F^{(2)}(x_{1},z;\{\lambda_{k}\};\phi)c^{(3)}(x_{3},z,z^{\prime};\{\lambda_{k}\};\phi)F^{(2)}(x_{2},z^{\prime};\{\lambda_{k}\};\phi)\, \tag{31}\] \[\frac{\delta c^{(n)}(\{x_{i}\};\{\lambda_{k}\};\phi)}{\delta\phi(x_{n+1})}=c^{(n+1)}(\{x_{i}\};\{\lambda_{k}\};\phi)\ \ \ ({\rm for}\ n\geq 3). \tag{32}\] To rewrite the flow equation (27) in terms of the 1PI vertices, we have to express the \(n\)-point correlation function \(G^{(n)}\) or \(F^{(n)}\) in terms of \(\{c^{(l)}\}\) in general. Then, by taking the functional derivatives of Eq. (27) with respect to \(\phi(x)\), we can obtain the flow equations for the vertices \(\{c^{(l)}\}\). In particular, as in the grand-canonical case, Eqs. (30)(31)(32) are sufficient to derive the flow equations for the higher-order vertices when only \(G^{(2)}\) appears in Eq. (27). Here, we summarize the first two equations in this case: \[\frac{d}{d\lambda_{k}}c^{(1)}(x_{1})=\frac{s}{2}\int d^{\tilde{d}}y\int d^{\tilde{d}}y^{\prime}\frac{\partial v_{2}(y,y^{\prime})}{\partial\lambda_{k}}\bigg\{\int d^{\tilde{d}}z\int d^{\tilde{d}}z^{\prime}F^{(2)}(y,z)c^{(3)}(x_{1},z,z^{\prime})F^{(2)}(z^{\prime},y^{\prime})+2\delta^{(\tilde{d})}(x_{1}-y)\phi(y^{\prime})\bigg\}\, \tag{33}\] \[\frac{d}{d\lambda_{k}}c^{(2)}(x_{1},x_{2})=\frac{s}{2}\int d^{\tilde{d}}y\int d^{\tilde{d}}y^{\prime}\frac{\partial v_{2}(y,y^{\prime})}{\partial\lambda_{k}}\bigg\{\int d^{\tilde{d}}z\int d^{\tilde{d}}z^{\prime}F^{(2)}(y,z)c^{(4)}(x_{1},x_{2},z,z^{\prime})F^{(2)}(z^{\prime},y^{\prime})+2\int d^{\tilde{d}}z\int d^{\tilde{d}}z^{\prime}\int d^{\tilde{d}}\omega\int d^{\tilde{d}}\omega^{\prime}F^{(2)}(y,z)c^{(3)}(x_{1},z,\omega)F^{(2)}(\omega,\omega^{\prime})c^{(3)}(x_{2},\omega^{\prime},z^{\prime})F^{(2)}(z^{\prime},y^{\prime})+2\delta^{(\tilde{d})}(y-x_{1})\delta^{(\tilde{d})}(y^{\prime}-x_{2})\bigg\}\, \tag{34}\] where we have omitted the parameter dependences and \(\phi(x)\) in the vertices for simplicity. The flow equations for higher-order vertices \(c^{(l)}\) (\(l\geq 3\)) can be straightforwardly obtained by using Eqs. (31)(32). 
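As a concrete consistency check of Eqs. (26)-(27), one can again work in zero dimensions, where the Legendre transformation can be carried out numerically. The sketch below is our illustration (the quartic weight, the convention \(sW=\log Z\), and all parameter values are arbitrary choices); it verifies that the \(\lambda\)-derivative of \(s\Gamma\) at fixed \(\phi\) reproduces \(s\,\frac{1}{2!}\,\partial_{\lambda}v_{2}\,G^{(2)}\) evaluated at \(J_{\phi}\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Zero-dimensional check of Eqs. (26)-(27): at fixed phi, the lambda-derivative
# of the Legendre transform s*Gamma equals that of s*W evaluated at J_phi.
# Toy weight (our choice): exp(-(0.5*lam*phi^2 + g*phi^4/24) + J*phi), so that
# d(sW)/d lam = -0.5*<phi^2>_J = s*(1/2!)*(dv_2/dlam)*G^(2) with s = -1, v_2 = lam.
g = 0.4

def sW(lam, J):
    f = lambda p: np.exp(-(0.5*lam*p**2 + g*p**4/24.0) + J*p)
    return np.log(quad(f, -12, 12)[0])

def F1(lam, J, h=1e-4):            # phi(J) = d(sW)/dJ
    return (sW(lam, J+h) - sW(lam, J-h)) / (2*h)

def sGamma(lam, phi):
    J = brentq(lambda J: F1(lam, J) - phi, -20, 20)   # invert Eq. (24)
    return sW(lam, J) - J*phi, J

lam, phi, dlam = 1.0, 0.5, 1e-4
(sG_plus, _), (sG_minus, J0) = sGamma(lam+dlam, phi), sGamma(lam-dlam, phi)
lhs = (sG_plus - sG_minus) / (2*dlam)          # d(sGamma)/d lam at fixed phi

# Right-hand side of Eq. (27): -0.5 * G^(2) evaluated at J_phi.
h = 1e-3
G2 = ((np.exp(sW(lam, J0+h)) - 2*np.exp(sW(lam, J0)) + np.exp(sW(lam, J0-h)))
      / h**2) / np.exp(sW(lam, J0))
print(np.isclose(lhs, -0.5*G2, rtol=1e-3))     # True
```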
### Generalized Callan-Symanzik equations Let us consider a situation such that the variation of a parameter \(\lambda_{0}:=t\) can be compensated by the changes of other variables as \[\Gamma[\{t-\delta t,\lambda_{k},V\};\phi]=\Gamma[\{t,\lambda_{k} +\delta\lambda_{k}(\delta t),V+\delta V(\delta t)\};\phi+\delta\phi(\delta t )]\, \tag{35}\] or \[\Gamma[\{t,\lambda_{k},V\};\phi]=\Gamma[\{t+\delta t,\lambda_{k} +\delta\lambda_{k}(\delta t),V+\delta V(\delta t)\};\phi+\delta\phi(\delta t)]\, \tag{36}\] which can be interpreted as a symmetry of the generating functional. Note that we have also included the volume dependence explicitly. The above relations mean \[-\frac{d\Gamma[\{t,\lambda_{k},V\};\phi]}{dt} =\left(\sum_{k}\frac{\delta\lambda_{k}(\delta t)}{\delta t}\frac{ \partial}{\partial\lambda_{k}}+\frac{\delta V(\delta t)}{\delta t}\frac{ \partial}{\partial V}+\int d^{\tilde{d}}x\frac{\delta\phi(\delta t)}{\delta t }\frac{\delta}{\delta\phi(x)}\right)\Gamma[\{t,\lambda_{k},V\};\phi] \tag{37}\] \[: =\mathcal{D}_{t}\Gamma[\{t,\lambda_{k},V\};\phi]. \tag{38}\] By using the general flow equation (27) (with \(\lambda_{k}\to t\)), we have another relation among the correlation functions: \[\mathcal{D}_{t}(s\Gamma[\{t,\lambda_{k}\},V;\phi])=-s\sum_{n=1}^{\infty}\frac {1}{n!}\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)\frac{\partial v_{n}( \{x_{i}\},\{t,\lambda_{k}\})}{\partial t}G^{(n)}(\{x_{i}\};\{t,\lambda_{k}\}; J_{\phi})\, \tag{39}\] which corresponds to the Callan-Symanzik equation in QFT. 1 Footnote 1: In QFT, the effective action is completely independent of the renormalization scale \(t=\log M\) and \(\Gamma_{t}[\{\lambda_{k}\};\phi]\) satisfies the Callan-Symanzik equation: \[0=\frac{d\Gamma_{t}[\{\lambda_{k}\};\phi]}{dt}=\left(\frac{\partial}{\partial t }+\sum_{k}\beta_{k}\frac{\partial}{\partial\lambda_{k}}-\int d^{d}x\gamma \frac{\partial}{\partial\log\phi(x)}\right)\Gamma_{t}[\{\lambda_{k}\};\phi]. \tag{40}\] In particular, when \(t\) is an artificial parameter such that \(t=1\) corresponds to the original system of interest (i.e. \(v_{n}(\{t=1,\lambda_{k}\})=v_{n}(\{\lambda_{k}\})\)), we have \[\mathcal{D}_{t}(s\Gamma[\{t,\lambda_{k}\},V;\phi])\bigg{|}_{t=1}=-s\sum_{n=1} ^{\infty}\frac{1}{n!}\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)\frac {\partial v_{n}(\{x_{i}\},\{t,\lambda_{k}\})}{\partial t}\bigg{|}_{t=1}G^{(n) }(\{x_{i}\};\{\lambda_{k}\};J_{\phi})\, \tag{41}\] which indicates another nontrivial relation among the correlation functions and vertices in the original system. Let us see a couple of examples. **GENERAL PRESSURE EQUATION** Here, we represent the generating functional as \(W[\{v_{n}(\{x_{i}\})\},V;J(x)]\). Consider a scale transformation of the volume \(V\rightarrow(1+\tilde{d}\epsilon)V\), where \(\epsilon\) is an infinitesimal parameter and regarded as \(t\) in the above general discussion. 
In this case, by the definition of the partition function (6), \(W\) satisfies \[W[\{v_{n}(\{x_{i}\})\},(1+\tilde{d}\epsilon)V;J(x)]=W[\{v_{n}(\{(1-\epsilon) x_{i}\})\},V;J((1-\epsilon)x)]\, \tag{42}\] which leads to (L.H.S) \[= \frac{d(sW[\{v_{n}(\{x_{i}\})\},(1+\tilde{d}\epsilon)V;J(x)])}{d \epsilon}\bigg{|}_{\epsilon=0,J=0}=\tilde{d}\frac{\partial(sW[\{v_{n}(\{x_{i}\}) \},V])}{\partial\log V}\] (R.H.S) \[= -s\sum_{n=1}^{\infty}\frac{1}{n!}\int\left(\prod_{i=1}^{n}d^{ \tilde{d}}x_{i}\right)G^{(n)}(\{x_{i}\})\sum_{i=1}^{n}x_{i}^{\mu}\partial_{\mu }^{(i)}v_{n}(\{x_{i}\})-\int d^{\tilde{d}}x\int d^{\tilde{d}}yy^{\mu}\frac{ \delta(\partial_{\mu}^{(y)}J(y))}{\delta J(x)}F^{(1)}(x)\] (43) \[\therefore\ \frac{\partial(sW[\{v_{n}(\{x_{i}\})\},V;J])}{\partial\log V}= \int d^{\tilde{d}}xF^{(1)}(x)-\frac{s}{d}\sum_{n=1}^{\infty}\frac{1}{n!}\int \left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)G^{(n)}(\{x_{i}\})\sum_{i=1}^{n} x_{i}^{\mu}\partial_{\mu}^{(i)}v_{n}(\{x_{i}\}). \tag{44}\] This is a generalization of the _pressure equation_ in the classical liquid systems [18, 19]. In fact, the classical liquid systems correspond to \[\left.\frac{\partial(-\beta W[V])}{\partial V}\right|_{T,\mu}=\beta p\,\quad F^{(1)}(x)=\rho\, \tag{45}\] and Eq. (44) becomes \[\frac{p}{T}=\rho+\frac{1}{T(d-1)}\sum_{n=1}^{\infty}\frac{1}{n!}\int\left( \prod_{i=1}^{n}d^{d-1}x_{i}\right)G^{(n)}(\{x_{i}\})\sum_{i=1}^{n}x_{i}^{\mu} \partial_{\mu}^{(i)}v_{n}(\{x_{i}\}). \tag{46}\] In Section 3.3, we will see another example of parameter redundancy in the classical simple liquid systems. **SYSTEMS WITH SCALING POTENTIALS** When the microscopic potential \(v_{n}(\{x_{i}\},\{t,\lambda_{k}\})\) satisfies the following scaling property \[v_{n}(\{x_{i}\},\{t,\lambda_{k}\})=(t^{\Delta})^{n}v_{n}(\{x_{i}\},\{\lambda_ {k}\}) \tag{47}\] for some (artificial) parameter \(t\), the effective action satisfies \[\Gamma[\{t,\lambda_{k}\},V;\phi]=\Gamma[\{\lambda_{k},V\};t^{\Delta}\phi]\, \tag{48}\] which leads to \[\int d^{\tilde{d}}x\frac{\delta(s\Gamma[\{\lambda_{k},V\};\phi])}{\delta\log \phi(x)}=s\sum_{n=1}^{\infty}\frac{1}{(n-1)!}\int\left(\prod_{i=1}^{n}d^{ \tilde{d}}x_{i}\right)v_{n}(\{x_{i}\},\{\lambda_{k}\})G^{(n)}(\{x_{i}\};\{ \lambda_{k}\};J_{\phi}) \tag{49}\] \[\therefore\ \int d^{\tilde{d}}x\phi(x)c^{(1)}(x,\{\lambda_{k}\};\phi)=s\sum_{n=1}^{ \infty}\frac{1}{(n-1)!}\int\left(\prod_{i=1}^{n}d^{\tilde{d}}x_{i}\right)v_{n} (\{x_{i}\},\{\lambda_{k}\})G^{(n)}(\{x_{i}\};\{\lambda_{k}\};J_{\phi}). \tag{50}\] For \(\phi=\)constant, this gives another relation between thermodynamic quantities and correlation functions. For example, we can consider a two-body system with \[v_{2}(x-y)=\frac{v_{0}}{|x-y|^{m}}\,\quad v_{n}(\{x_{i}\})=0\quad(n\neq 2). \tag{51}\] which satisfies the scaling property \(v_{2}(tx)=t^{-m}v_{2}(x)\). In this case, Eq. (50) becomes \[\int d^{\bar{d}}x\phi(x)c^{(1)}(x)=s\int d^{\bar{d}}x\int d^{\bar{d}}yv_{2}(x-y )G^{(2)}(x,y;J_{\phi}). \tag{52}\] By putting \(\phi(x)=\phi=\)constant, we then obtain \[-J_{\phi}V\phi=s\int d^{\bar{d}}x\int d^{\bar{d}}yv_{2}(x-y)G^{(2)}(x,y;J_{ \phi})\, \tag{53}\] where we have used Eq. (29). This resembles the sum rules of correlation functions in classical liquid systems [18, 19]. 2 Footnote 2: In liquid systems, \(\phi=\rho\) corresponds to the density and \(J_{\rho}\) corresponds the chemical potential \(\beta\mu\). Thus, \(V\phi=N\) is the total particle number in the left hand side. 
## 3 Examples We apply the general results developed in the previous section to several (non)equilibrium systems and reproduce the well-known functional flow equations discussed in the literature. ### Functional renormalization group in QFT In the functional renormalization approach in QFT [13, 15, 16], we introduce a cutoff \(k\) in the two-point vertex as \[v_{2}(x,y;m^{2},k)=\delta^{(d)}(x-y)(\partial^{2}-m^{2})-R_{k}(x-y)\, \tag{54}\] where \(R_{k}(x-y)\) is a regulator function whose Fourier mode qualitatively satisfies \[\tilde{R}_{k}(p)\sim\begin{cases}0&\text{for }p\gg k\\ \mathcal{O}(k^{2})&\text{for }p\ll k\end{cases}. \tag{55}\] Namely, it suppresses the low-energy modes and takes only the high-energy modes in the partition function. See Refs. [15, 16] and references therein for various proposals of \(\tilde{R}_{k}(p)\). In this case, the general flow equation (27) becomes \[\frac{d(s\Gamma[k;\phi])}{dk} =\frac{s}{2}\int d^{d}x\int d^{d}y\partial_{k}R_{k}(x-y)G_{k}^{(2)}(x,y;J_{\phi}) \tag{56}\] \[=\frac{s}{2}\text{Tr}\left((\partial_{k}R_{k})G_{k}^{(2)}\right)\, \tag{57}\] with \(s=i\hbar^{-1}\). Or by introducing a new effective action and two-point vertex by \[s\Gamma_{k}[\phi] :=s\Gamma[k;\phi]+\frac{s}{2}\langle\phi|R_{k}|\phi\rangle \tag{58}\] \[\Gamma_{k}^{(2)}(x,y) :=\frac{\delta^{2}\Gamma_{k}[\phi]}{\delta\phi(x)\delta\phi(y)}=s^{-1}c_{k}^{(2)}(x,y)+R_{k}(x,y)\, \tag{59}\] we have \[\frac{d(s\Gamma_{k}[\phi])}{dk} =\frac{\partial(s\Gamma[k;\phi])}{\partial k}+\frac{s}{2}\langle\phi|\partial_{k}R_{k}|\phi\rangle \tag{60}\] \[=\frac{s}{2}\int d^{d}x\int d^{d}y(\partial_{k}R_{k})G_{k}^{(2)}(x,y;J_{\phi})+\frac{s}{2}\langle\phi|\partial_{k}R_{k}|\phi\rangle_{J_{\phi}} \tag{61}\] \[=\frac{s}{2}\int d^{d}x\int d^{d}y(\partial_{k}R_{k})F_{k}^{(2)}(x,y;J_{\phi}). \tag{62}\] \[\therefore\quad\frac{d(i\hbar^{-1}\Gamma_{k}[\phi])}{dk} =-\frac{1}{2}\int d^{d}x\int d^{d}y(\partial_{k}R_{k})(\Gamma_{k}^{(2)}-R_{k})^{-1}=-\frac{1}{2}\text{Tr}\left[(\partial_{k}R_{k})(\Gamma_{k}^{(2)}-R_{k})^{-1}\right]\, \tag{63}\] which is Wetterich's equation [11] in Minkowski spacetime. The Euclidean case is obtained by the replacement \[\Gamma_{k}[\phi]=i\Gamma_{k}^{E}[\phi]\,\quad\Gamma_{k}^{(2)}=-\Gamma_{k}^{E}{}^{(2)}. \tag{64}\] Note that the right-hand side is \(\mathcal{O}(s^{0})=\mathcal{O}(\hbar^{0})\), which indicates that this term corresponds to the one-loop diagram. As generally explained in Section 2.2, we can obtain the flow equations for higher-order vertices by taking the functional derivative of Eq. (63) with respect to \(\phi(x)\). ### Quantum spin systems Quantum spin systems [17] have played a significant role in statistical mechanics because they capture the essence of quantum many-body systems. We can also develop the functional flow in these systems [27]. In the following, a \((d-1)\) dimensional lattice is represented by \(\Lambda\), and \(i,j,\cdots\) denote the sites. In addition, a collection of bonds, i.e. \((i,j)\) such that \(i\neq j\), is represented by \(\mathscr{B}\), which determines the interactions of sites. We consider the quantum Heisenberg model: \[\hat{H} =\sum_{(i,j)\in\mathscr{B}}J_{ij}\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j}-\beta^{-1}\sum_{i\in\Lambda}\mathbf{U}_{i}\cdot\hat{\mathbf{S}}_{i}\, \tag{65}\] \[Z[\{\mathbf{U}_{i}\}] =\exp\left(-\beta W[\{\mathbf{U}_{i}\}]\right)=\text{Tr}(e^{-\beta\hat{H}}).
\tag{66}\] where \(J_{ij}\) is the (anti)ferromagnetic coupling constant and \({\bf H}_{i}=\beta^{-1}{\bf U}_{i}=\beta^{-1}(U_{i}^{x},U_{i}^{y},U_{i}^{z})\) is an external magnetic field. The spin operators \(\hat{\bf S}_{i}=(\hat{S}_{i}^{x},\hat{S}_{i}^{y},\hat{S}_{i}^{z})\) satisfy the commutation relation \[[\hat{S}_{i}^{\alpha},\hat{S}_{j}^{\beta}]=i\delta_{ij}\epsilon^{\alpha\beta\gamma}\hat{S}_{i}^{\gamma}\, \tag{67}\] where \(\epsilon^{\alpha\beta\gamma}\) is the totally antisymmetric tensor. The magnitude of the spin is defined by \(\hat{\bf S}^{2}=S(S+1)\). Now we introduce an artificial parameter \(t\in[0,1]\) and deform the couplings as \(J_{ij}\to J_{ij}(t)\) with the boundary condition \(J_{ij}(t=1)=J_{ij}\). Correspondingly, we represent the generating functionals as \[W[t;\{{\bf U}_{i}\}]=W_{t}[\{{\bf U}_{i}\}]\,\quad\Gamma[t;\{{\bf S}_{i}\}]=\Gamma_{t}[\{{\bf S}_{i}\}]. \tag{68}\] As for the boundary (initial) condition at \(t=0\), it is often assumed that \(J_{ij}(t=0)\) corresponds to the coupling of some (exactly) solvable system [27]. The correlation functions are defined by \[G^{i_{1}\cdots i_{n}}_{\alpha_{1}\cdots\alpha_{n}}(t;\{{\bf U}_{i}\}): = \frac{1}{Z[t;\{{\bf U}_{i}\}]}\frac{\delta^{n}Z[t;\{{\bf U}_{i}\}]}{\delta U^{\alpha_{1}}_{i_{1}}\cdots\delta U^{\alpha_{n}}_{i_{n}}}\, \tag{69}\] \[F^{i_{1}\cdots i_{n}}_{\alpha_{1}\cdots\alpha_{n}}(t;\{{\bf U}_{i}\}): = \frac{\delta^{n}(-\beta W_{t}[\{{\bf U}_{i}\}])}{\delta U^{\alpha_{1}}_{i_{1}}\cdots\delta U^{\alpha_{n}}_{i_{n}}}\,\] (70) \[c^{i_{1}\cdots i_{n}}_{\alpha_{1}\cdots\alpha_{n}}(t;\{{\bf U}_{i}\}): = \frac{\delta^{n}(-\beta\Gamma_{t}[\{{\bf S}_{i}\}])}{\delta S^{\alpha_{1}}_{i_{1}}\cdots\delta S^{\alpha_{n}}_{i_{n}}}. \tag{71}\] Then, by using the general result (27), we obtain the flow equation for the canonical potential as \[\frac{d(-\beta\Gamma_{t}[\{{\bf S}_{i}\}])}{dt} = -\beta\sum_{\alpha=x,y,z}\sum_{(i,j)\in\mathscr{B}}(\partial_{t}J_{ij}(t))G^{ij}_{\alpha\alpha}(t;\{{\bf U}^{S}_{i}\}) \tag{72}\] \[= -\beta\sum_{\alpha=x,y,z}\sum_{(i,j)\in\mathscr{B}}(\partial_{t}J_{ij}(t))\left[F^{ij}_{\alpha\alpha}(\{{\bf U}^{S}_{i}\})+S^{\alpha}_{i}S^{\alpha}_{j}\right]\, \tag{73}\] where \({\bf U}^{S}_{i}\) is the solution of the Legendre transformation. Note that we do not have a symmetry factor \(\frac{1}{2}\) in this case because of the definition of the Hamiltonian (65). As in the FRG in QFT, we can also introduce another canonical functional \[-\beta\mathscr{C}_{t}[\{{\bf S}_{i}\}]: = -\beta\Gamma_{t}[\{{\bf S}_{i}\}]+\beta\sum_{(i,j)\in\mathscr{B}}R_{ij}(t){\bf S}_{i}\cdot{\bf S}_{j}\, \tag{74}\] where \(R_{ij}(t)=J_{ij}(t)-J_{ij}\). Now we have \[\frac{d(-\beta\mathscr{C}_{t}[\{\mathbf{S}_{i}\}])}{dt} =-\beta\sum_{\alpha=x,y,z}\sum_{(i,j)\in\mathscr{B}}(\partial_{t}J_{ij}(t))F_{\alpha\alpha}^{ij}(t;\{\mathbf{U}_{i}^{S}\}) \tag{75}\] \[=-\sum_{\alpha=x,y,z}\sum_{(i,j)\in\mathscr{B}}(\partial_{t}R_{ij}(t))\left([\mathbf{\Gamma}_{t}^{(2)}]_{\alpha\alpha}^{ij}+R_{ij}(t)\right)^{-1}\] (76) \[=-\mathrm{Tr}\left[(\partial_{t}\mathbf{R})(\mathbf{\Gamma}_{t}^{(2)}+\mathbf{R}_{t})^{-1}\right]\, \tag{77}\] where \[[\mathbf{\Gamma}_{t}^{(2)}]_{\alpha\beta}^{ij}=\frac{\delta^{2}\mathscr{C}_{t}[\{\mathbf{S}_{i}\}]}{\delta S_{i}^{\alpha}\delta S_{j}^{\beta}}\,\quad[\mathbf{R}_{t}]_{\alpha\beta}^{ij}=\delta_{\alpha\beta}R_{ij}(t).
\tag{78}\] Generalization to higher order interactions such as \(J_{ijkl}(t)(\hat{\mathbf{S}}_{i}\cdot\hat{\mathbf{S}}_{j})(\hat{\mathbf{S}}_{ k}\cdot\hat{\mathbf{S}}_{l})\) is also straightforward by using the general result (27). ### Classical liquids Next example is the classical liquid system: \[H_{N}=\sum_{i=1}^{N}\frac{p_{i}^{2}}{2m}+V_{N}(\{x_{i}\}_{i=1}^{ N})\, \tag{79}\] where \[V_{N}(\{x_{i}\}_{i=1}^{N})=\sum_{i<j}^{N}v(x_{i},x_{j})+\sum_{i<j <k}^{N}v_{3}(x_{i},x_{j},x_{k})+\cdots \tag{80}\] is a general potential energy of \(N\) particles. The grand-canonical partition function \(\Xi\) and grand potential \(W\) are defined by \[\Xi[T,V;U] =\exp\left(-\beta W[T,V;U]\right) \tag{81}\] \[=\sum_{N=0}^{\infty}\frac{1}{N!}\int_{V}d^{d-1}x_{1}\int d^{d-1}p _{1}\cdots\int_{V}d^{d-1}x_{N}\int d^{d-1}p_{N}\exp\left(-\beta H_{N}+\beta \sum_{i=1}^{N}U(x_{i})\right)\] \[=\sum_{N=0}^{\infty}\frac{(2\pi mT)^{N(d-1)/2}}{N!}\int_{V}d^{d-1 }x_{1}\cdots\int_{V}d^{d-1}x_{N}\exp\left(-\beta V_{N}+\beta\sum_{i=1}^{N}U(x _{i})\right)\,\] where \(U(x)\) is a position-dependent chemical potential. The usual thermodynamic equilibrium corresponds to \(U(x)=\mu\). By using the density operator \[\rho(x):=\sum_{i=1}^{N}\delta^{(d-1)}(x-x_{i})\, \tag{83}\] the grand-canonical partition function can be rewritten as \[\Xi[T,\mu,V;U] =\sum_{N=0}^{\infty}\frac{(2\pi mT)^{N(d-1)/2}}{N!}\int_{V}d^{d-1} x_{1}\cdots\int_{V}d^{d-1}x_{N}\] \[\quad\times\exp\left(-\frac{1}{2}\langle\rho|\beta v|\rho\rangle+ \frac{\beta}{2}\int d^{d-1}xv(x,x)\rho(x)+\cdots+\langle\beta U|\rho\rangle \right)\, \tag{84}\] where \(\cdots\) represents the higher-order potential terms and \[\langle\rho|\beta v|\rho\rangle=\int d^{d-1}x\int d^{d-1}y\rho(x) \beta v(x,y)\rho(y)\, \tag{85}\] \[\langle\beta U|\rho\rangle=\beta\int d^{d-1}xU(x)\rho(x). \tag{86}\] Comparing Eq. (84) with general Hamiltonian (2), we can read \[v_{1}(x)=-\frac{v(x,x)}{2}\,\quad v_{2}(x,y)=v(x,y) \tag{87}\] in this case. The Legendre transformation of \(-\beta W[T,V;U]\) with respect to \(U(x)\) gives the canonical free energy \(-\beta\Gamma[T,V;\rho]\): \[-\beta\Gamma[T,V;\rho] =\mathop{\rm Min}_{U}\left(-\beta W[T,V;U]-\langle\beta U|\rho\rangle\right)\] \[=-\beta W[T,V;U_{\rho}]-\beta\int d^{d-1}xU_{\rho}(x)\rho(x)\, \tag{88}\] where \(\rho(x)\) represents a general density field, and \(U_{\rho}(x)\) is a solution of \[\frac{\delta(-\beta W[T,V;U])}{\delta(\beta U(x))}=\rho(x). \tag{89}\] **Hierarchical Reference Theory** In the following, we focus on the simple liquids, that is, \[v(x,y)=v(|x-y|)\,\quad v_{n}(\{x_{i}\})=0\ \text{for}\ n\geq 3. \tag{90}\] In this case, the Fourier mode \[\tilde{v}(q)=\int d^{d-1}xe^{-iq\cdot x}v(x) \tag{91}\] is a function of \(|q|\) and real \(\tilde{v}(q)^{*}=\tilde{v}(-q)=\tilde{v}(q)\). In the HRT [20, 21, 24], an artificial IR cut-off \(k\) is introduced to the Fourier mode of the two-body pair potential \(v(x)\) as \[\tilde{v}_{k}(q)\sim\begin{cases}\tilde{v}(q)&\text{for}\ |q|\gg k\\ 0&\text{for}\ |q|\ll k\end{cases}\, \tag{92}\] with the boundary conditions \[\lim_{k\to\infty}\tilde{v}_{k}(q)=\tilde{v}_{R}(q)\,\quad\lim_{k\to 0} \tilde{v}_{k}(q)=\tilde{v}(q)\, \tag{93}\] where \(v_{R}(x)\) is the pair potential of some reference system. In the literatures [20, 21, 22, 23], a repulsive potential (such as the hard core potential) is often chosen for \(v_{R}(x)\). 3 The expression (92) is not mathematically complete yet, but we do not need an exact form of \(\tilde{v}_{k}(q)\) to derive the functional flow equation. 
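Since only the limiting behavior (92) and the boundary conditions (93) are prescribed, any smooth interpolation between \(\tilde{v}_{R}(q)\) and \(\tilde{v}(q)\) can serve as \(\tilde{v}_{k}(q)\). The snippet below is one such construction, a sketch of our own with an arbitrary smooth step and a toy Gaussian potential; it is not a form taken from Refs. [20, 21, 24].

```python
import numpy as np

# One possible smooth realization of the HRT cutoff (92)-(93):
#   v_k(q) = v_R(q) + (v(q) - v_R(q)) * S((|q| - k) / w),
# where S is a smoothed step function. The step width w, the toy
# Gaussian potential, and the reference potential are all assumptions.

def step(x):
    return 0.5 * (1.0 + np.tanh(x))     # smooth step: ~0 for x<<0, ~1 for x>>0

def v_full(q):
    return np.exp(-q**2)                # toy Fourier-space pair potential v(q)

def v_ref(q):
    return np.zeros_like(q)             # toy reference system (v_R = 0)

def v_k(q, k, w=0.2):
    return v_ref(q) + (v_full(q) - v_ref(q)) * step((np.abs(q) - k) / w)

q = np.linspace(0.0, 4.0, 401)
for k in (3.0, 1.0, 0.0):               # k -> 0 recovers the full potential
    idx = np.argmin(np.abs(q - 0.5))
    print(f"k = {k}:  v_k(q=0.5) = {v_k(q, k)[idx]:.4f}")
```

For large \(k\) the modes with \(|q|\ll k\) see only the reference potential, while for \(k\to 0\) the full potential is recovered, matching the limits (93).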
All the quantities calculated by using \(v_{k}(x,y)\) are represented as \[v_{k1}(x)=-\frac{v_{k}(x,x)}{2}\,\quad v_{k2}(x,y)=v_{k}(x,y)\, \tag{94}\] \[W_{k}[T,V;U]\,\quad\Gamma_{k}[T,V;\rho]\,\quad F_{k}^{(l)}(x_{1},\cdots,x_{l})\,\quad c_{k}^{(l)}(x_{1},\cdots,x_{l}). \tag{95}\] Footnote 3: Recently, a new approach using the cavity distribution functions was proposed in Ref. [24]. The utilization of the cavity distribution functions eliminates possible divergences coming from the strong short-range repulsive potential. Now, according to the general result (27), we have \[\frac{d(-\beta\Gamma_{k}[T,V;\rho])}{dk}=-\beta\bigg{[}\int d^{d-1}x\partial_{k}v_{k1}(x)F_{k}^{(1)}(x)\] \[\qquad\quad+\frac{1}{2}\int d^{d-1}x\int d^{d-1}y\partial_{k}v_{k2}(x,y)\left(F_{k}^{(2)}(x,y)+F_{k}^{(1)}(x)F_{k}^{(1)}(y)\right)\bigg{]}\] \[= -\frac{\beta}{2}\int d^{d-1}x\int d^{d-1}y\partial_{k}v_{k}(x,y)n_{k}^{(2)}(x,y)\, \tag{96}\] where \[n_{k}^{(2)}(x,y)=F_{k}^{(2)}(x,y)+F_{k}^{(1)}(x)F_{k}^{(1)}(y)-\delta^{(d-1)}(x-y)F_{k}^{(1)}(y) \tag{97}\] is called the total correlation function in liquid theory. To derive a simpler expression, we introduce another canonical free energy \({\cal A}_{k}\) as follows: \[-\beta{\cal A}_{k}[T,V;\rho]=-\beta\Gamma_{k}[T,V;\rho]+\frac{\beta}{2}\int d^{d-1}x\int d^{d-1}y\left\{v_{k}(x,y)-v(x,y)\right\}\left\{\rho(x)\rho(y)-\delta^{(d-1)}(x-y)\rho(x)\right\}\, \tag{98}\] which satisfies \[\lim_{k\to\infty}{\cal A}_{k}[T,V;\rho]=\Gamma_{R}[T,V;\rho]\,\quad\lim_{k\to 0}{\cal A}_{k}[T,V;\rho]=\Gamma[T,V;\rho]\, \tag{99}\] where \(\Gamma_{R}[T,V;\rho]\) is the canonical potential for a reference system. By taking the \(k\) derivative of \(-\beta{\cal A}_{k}\), we obtain \[\frac{d(-\beta{\cal A}_{k}[T,V;\rho])}{dk}=-\frac{\beta}{2}\int d^{d-1}x\int d^{d-1}y\,\partial_{k}v_{k}(x-y)F_{k}^{(2)}(x-y)\] \[=-\frac{\beta}{2}{\rm Tr}[(\partial_{k}v_{k})F_{k}^{(2)}]. \tag{100}\] We represent the two-point vertex defined by \(\beta{\cal A}_{k}[T,V;\rho]\) as \[C_{k}^{(2)}(x-y):=\frac{\delta^{2}(\beta{\cal A}_{k}[T,V;\rho])}{\delta\rho(x)\delta\rho(y)}\bigg{|}_{\rho(x)=n}=-c_{k}^{(2)}(x-y)-\beta v_{k}(x-y)+\beta v(x-y)\;, \tag{101}\] which means that the inverse of \(F_{k}^{(2)}(x)\) is given by \(C_{k}^{(2)}(x)+\beta v_{k}(x)-\beta v(x)\). Thus, Eq. (100) can be also written as \[\frac{d(\beta{\cal A}_{k}[T,V;\rho])}{dk}=\frac{1}{2}{\rm Tr}\left[\partial_{k}(\beta v_{k})(C_{k}^{(2)}+\beta v_{k}-\beta v)^{-1}\right]\;, \tag{102}\] which is exactly the same form as Wetterich's equation (63) with the identification \(\beta(v_{k}-v)\to R_{k}\). This fact indicates that the simple liquid systems essentially belong to the same universality class as the scalar QFT. See Refs. [20, 21] and references therein for the study of critical phenomena based on the above functional flow equation. **DENSITY RENORMALIZATION GROUP** Another application is the density renormalization group, which was investigated in our previous paper [25]. We are interested in how the classical liquid systems respond to the variation of the density \(\rho(x)=n\). To see this, we consider the scale transformation of the pair potential: \[v(x)\quad\rightarrow\quad v(\lambda x)\;,\quad\lambda>0\;, \tag{103}\] where \(\lambda\) is regarded as one of the parameters in the general Hamiltonian (2). The corresponding canonical potential is represented as \(\Gamma_{\lambda}[T,V;\rho]\). Then, by repeating the same calculation as Eq.
(96), we obtain the flow equation \[\frac{d(-\beta\Gamma_{\lambda}[T,V;\rho])}{d\lambda}=-\frac{\beta}{2}\int d^{d-1}x\int d^{d-1}y\ n_{\lambda}^{(2)}(x,y)(x-y)^{i}\partial_{i}v(\lambda(x-y))\;. \tag{104}\] In this case, we can use the idea developed in Section 2.3, that is, the scale parameter \(\lambda\) can be compensated by other variables. In fact, the original grand partition function satisfies \[\Xi_{\lambda}[T,V;U(x)]:=\Xi[T,V;U(x)]\bigg{|}_{v(x)\to v(\lambda x)}\] \[= \sum_{N=0}^{\infty}\frac{(2\pi mT)^{N(d-1)/2}}{N!}\int_{V}\left(\prod_{i=1}^{N}d^{d-1}x_{i}\right)\exp\left(-\beta\sum_{i<j}v(\lambda(x_{i}-x_{j}))+\beta\sum_{i}U(x_{i})\right)\] \[= \sum_{N=0}^{\infty}\frac{(2\pi mT)^{N(d-1)/2}}{N!}\int_{\lambda^{d-1}V}\left(\prod_{i=1}^{N}d^{d-1}x_{i}\right)\exp\left(-\beta\sum_{i<j}v(x_{i}-x_{j})+\beta\sum_{i}\left\{U(x_{i}/\lambda)-(d-1)T\log\lambda\right\}\right)\] \[= \Xi[T,\lambda^{d-1}V;U(x/\lambda)-(d-1)T\log\lambda]\;. \tag{105}\] We see that the change of the pair potential under the scale transformation, \(v(x)\to v(\lambda x)\), can be compensated by the changes of the chemical potential \(\mu\), the volume \(V\), and the external chemical potential \(U(x)\). For an infinitesimal scale transformation \(\lambda=1+\epsilon\), we have \[\Xi_{1+\epsilon}[T,V;U(x)]=\Xi[T,(1+(d-1)\epsilon)V;U(x(1-\epsilon))-(d-1)T\epsilon]\, \tag{106}\] or equivalently the grand potential satisfies \[-\beta W_{1+\epsilon}[T,V;U(x)]=-\beta W[T,(1+(d-1)\epsilon)V;U(x(1-\epsilon))-(d-1)T\epsilon]. \tag{107}\] The Legendre transformation of the left hand side gives \(-\beta\Gamma_{1+\epsilon}[T,V;\rho(x)]\) by definition. On the other hand, the Legendre transformation of the right hand side is \[\underset{U}{\text{Min}}\left[-\beta W[T,(1+(d-1)\epsilon)V;U(x(1-\epsilon))-(d-1)T\epsilon]-\beta\int_{V}d^{d-1}xU(x)\rho(x)\right]\] \[= \underset{U}{\text{Min}}\bigg{[}-\beta W[T,(1+(d-1)\epsilon)V;U(x(1-\epsilon))-(d-1)T\epsilon]\] \[-(1-(d-1)\epsilon)\beta\int_{(1+(d-1)\epsilon)V}d^{d-1}x\left\{U(x(1-\epsilon))-(d-1)T\epsilon\right\}\rho(x(1-\epsilon))\bigg{]}-(d-1)\epsilon\int_{V}d^{d-1}x\rho(x)\] \[= -\beta\Gamma[T,(1+(d-1)\epsilon)V;(1-(d-1)\epsilon)\rho(x(1-\epsilon))]-(d-1)\epsilon\int_{V}d^{d-1}x\rho(x)\, \tag{108}\] which leads to \[-\beta\Gamma_{1+\epsilon}[T,V;\rho(x)]=-\beta\Gamma[T,(1+(d-1)\epsilon)V;(1-(d-1)\epsilon)\rho(x(1-\epsilon))]-(d-1)\epsilon\int_{V}d^{d-1}x\rho(x)+\mathcal{O}(\epsilon^{2}). \tag{109}\] This relation corresponds to Eq. (35) in the general formalism.
By taking the functional derivative with respect to \(\rho(x)\), we have4 Footnote 4: By writing the parameter dependences explicitly, the \(l\)-th order functional derivative of the first term in the R.H.S is calculated as \[(1-(d-1)l\epsilon)\left(\prod_{i=1}^{l}\int d^{d-1}y_{i}\frac{ \delta(\rho(y_{i})-\epsilon y_{i}^{\mu}\partial_{\mu}\rho(y_{i}))}{\delta \rho(x_{i})}\right)c^{(l)}(\{y_{i}\},T,V+(d-1)\epsilon V;\rho)\] \[=(1-(d-1)l\epsilon)c^{(l)}(\{x_{i}\},T,V+(d-1)\epsilon V;\rho)+ \epsilon\sum_{i=1}^{l}\partial_{i\mu}(x_{i}^{\mu}c^{(l)}(\{x_{i}\},T,V;\rho))\] \[=c^{(l)}(\{x_{i}\},T,V;\rho)+(d-1)\epsilon\frac{\partial}{ \partial\log V}c^{(l)}(\{x_{i}\},T,V;\rho)+\epsilon\sum_{i=1}^{l}x_{i}^{\mu} \partial_{i\mu}c^{(l)}(\{x_{i}\},T,V;\rho) \tag{111}\] \[\therefore\quad\frac{dc^{(l)}_{1+\epsilon}(\{x_{i}\})}{d\epsilon} \bigg{|}_{\epsilon=0}=\left[(d-1)\frac{\partial}{\partial\log V}\bigg{|}_{T,N}+ \sum_{i=1}^{l}x_{i}^{\mu}\partial_{i\mu}\right]c^{(l)}(\{x_{i}\})\] \[-\delta_{l0}(d-1)\int_{V}d^{d-1}x\rho(x)-\delta_{l1}(d-1). \tag{112}\] On the other hand, the left hand side can also be calculated by taking the functional derivatives of the flow equation (104). Thus, we obtain \[\left[(d-1)\frac{\partial}{\partial\log V}\bigg{|}_{T,N}+\sum_{i= 1}^{l}x_{i}^{\mu}\partial_{i\mu}\right]c^{(l)}(\{x_{i}\})-\delta_{l0}(d-1)\int _{V}d^{d-1}x\rho(x)-\delta_{l1}(d-1)\] \[= -\frac{\beta}{2}\int d^{d-1}x\int d^{d-1}y\ \frac{\delta^{l}n^{(2)}(x,y)}{\delta\rho(x_{1})\cdots \delta\rho(x_{l})}(x-y)^{i}\partial_{i}v(x-y). \tag{113}\] Since the volume derivative is related to the density derivative by \[\frac{\partial}{\partial\log V}\bigg{|}_{T,N}=-\frac{\partial}{\partial\log n }\bigg{|}_{T,N}\, \tag{114}\] Eq. (113) describes the response of the system to the density variation. See also Ref. [25] for the calculations of the right hand side in Eq. (113). Note that we used the field theoretical approach in the previous work [25] to derive the above flow equation. However, one can see that it is merely a direct consequence of general response theory. ### Classical nonequilibrium systems The response theory developed in Section 2 can also be applied to classical nonequilibrium systems because, in some systems, the latter can be cast into a path-integral formulation [28, 29, 30]. Here, we focus on the Langevin stochastic dynamics. In the following, \(x\) denotes a spacetime point \(x\in\Sigma_{d}\) while \(t\) is the Langevin time. We consider a real scalar field \(\phi(t,x)\) which obeys the Langevin equation \[\partial_{t}\phi(t,x)=-F[\phi(t,x)]+\xi(t,x)\, \tag{115}\] where \(F[\phi(t,x)]\) is a general external force and \(\xi(t,x)\) denotes the Gaussian random force, i.e., its probability distribution is given by \[P[\xi(t,x)]=\mathcal{N}\exp\left(-\frac{1}{2}\langle\xi|G^{-1}|\xi\rangle \right)\, \tag{116}\] where \[\langle\xi|G^{-1}|\xi\rangle=\int dt\int dt^{\prime}\int d^{d}x\int d^{d}x^{ \prime}\xi(t,x)G^{-1}(t,x;t^{\prime},x^{\prime})\xi(t^{\prime},x^{\prime}). \tag{117}\] Here \({\cal N}\) is a normalization factor. Correspondingly, the correlation function of \(\xi(t,x)\) is given by \[\langle\xi(t,x)\xi(t^{\prime},x^{\prime})\rangle=G(t,x;t^{\prime},x^{\prime}). \tag{118}\] The Langevin system (115) can be cast into a field theory based on the Martin-Siggia-Rose-Janssen-de Dominicis formalism [28, 29, 30] as follows.
The expectation value of an operator \({\cal O}[\phi(t,x)]\) can be written as \[\langle{\cal O}[\phi(t,x)]\rangle =\left\langle\int{\cal D}\phi{\cal O}[\phi(t,x)]\prod_{t,x}\delta \left(\partial_{t}\phi(t,x)+F[\phi(x,t)]-\xi(t,x)\right)\right\rangle \tag{119}\] \[\propto\left\langle\int{\cal D}\phi\int{\cal D}\phi^{\prime}{\cal O }[\phi(t,x)]\exp\left(i\int dt\int d^{d}x\phi^{\prime}(t,x)\left\{\partial_{t }\phi(t,x)+F[\phi(x,t)]-\xi(t,x)\right\}\right)\right\rangle\] (120) \[=\int{\cal D}\phi\int{\cal D}\phi^{\prime}{\cal O}[\phi(t,x)]e^{ i\int dt\int d^{d}x\phi^{\prime}(t,x)(\partial_{t}\phi(t,x)+F[\phi(x,t)])}\] \[\times\left\langle\exp\left(-i\int dt\int d^{d}x\phi^{\prime}(t, x)\xi(t,x)\right)\right\rangle\, \tag{121}\] where we have introduced another real scalar \(\phi^{\prime}(t,x)\) and \(\langle\cdots\rangle\) is the expectation value with respect to the distribution (116). By using Eq. (116), the above expectation value is calculated as \[\left\langle\exp\left(-i\int dt\int d^{d}x\phi^{\prime}(t,x)\xi( t,x)\right)\right\rangle ={\cal N}\int{\cal D}\xi\exp\left(-\frac{1}{2}\langle\xi|G^{-1}| \xi\rangle-i\langle\phi^{\prime}|\xi\rangle\right)\] \[=\exp\left(\frac{1}{2}\langle\phi^{\prime}|G|\phi^{\prime}\rangle \right)\, \tag{122}\] which leads to the following path integral expression \[\langle{\cal O}[\phi(t,x)]\rangle=\frac{1}{Z}\int{\cal D}\phi\int{\cal D}\phi ^{\prime}{\cal O}[\phi(t,x)]e^{iS[\phi,\phi^{\prime}]} \tag{123}\] where the action is given by 5 Footnote 5: In the literature, \(\overline{\phi}:=i\phi^{\prime}\) is often regarded as an elementary field. \[S[\phi,\phi^{\prime}]=\int dt\int d^{d}x\phi^{\prime}(t,x)\left\{\partial_{t }\phi(t,x)+F[\phi(x,t)]\right\}+\frac{1}{2i}\langle\phi^{\prime}|G|\phi^{ \prime}\rangle. \tag{124}\] One can see that the effect of the random force \(\xi(t,x)\) is now taken over by another real scalar field \(\phi^{\prime}(t,x)\), and the system is now described by a two-scalar QFT in a \((d+1)\)-dimensional spacetime. As in the FRG, we can add a regulator function: \[\Delta S=-\frac{1}{2}\sum_{i,j=1}^{2}\langle\phi_{i}|R_{k,ij}|\phi_{j}\rangle\, \tag{125}\] where \(\phi_{1(2)}=\phi\) (\(\phi^{\prime}\)) and the regulator function \(R_{k,ij}(x-y)\) is now a \(2\times 2\) matrix. Then, by performing the same calculations as in Section 3.1, we obtain \[\frac{d(i\Gamma_{k}[\phi,\phi^{\prime}])}{dk}=-\frac{1}{2}\mbox{ Tr}\left((\partial_{k}R_{k,ij})(\Gamma_{k,ji}^{(2)}-R_{k,ji})^{-1}\right)\, \tag{126}\] where \[\Gamma_{k,ji}^{(2)}=\frac{\delta\Gamma_{k}[\phi,\phi^{\prime}]}{\delta\phi_{i }\delta\phi_{j}}. \tag{127}\] Note that the trace in Eq. (126) is taken over the \(d+1\) dimensional functional space. The generalization to a vector field \(\phi_{a}(t,x)\) (\(a=1,2,\cdots,N\)) is straightforward. In particular, one can consider the velocity fields \(\phi_{\mu}(t,x)=v_{\mu}(t,x)\) (\(\mu=0,1,\cdots,d-1\)) which obey the Navier-Stokes equation: \[\partial_{t}v_{\mu}(t,x) =-v^{\nu}\partial_{\nu}v_{\mu}(t,x)-\frac{1}{\rho(t,x)}\partial_{ \mu}p(t,x)+\cdots+\xi_{\mu}(t,x) \tag{128}\] \[:=-F[\{v_{\mu}(t,x)\}]+\xi_{\mu}(t,x)\, \tag{129}\] where the correlation function of the random force is now written as \[\langle\xi_{\mu}(t,x)\xi_{\nu}(t^{\prime},x^{\prime})\rangle=G_{\mu\nu}(t,x;t ^{\prime},x^{\prime}).
\tag{130}\] Then, by following the MSRJD formalism, the field theoretic action is given by \[S[\{v_{\mu}\},\{v_{\mu}^{{}^{\prime}}\}]=\int dt\int d^{d}x\left\{v_{\mu}^{{}^{ \prime}}\left(\partial_{t}v^{\mu}+F[v]\right)\right\}+\frac{1}{2i}\langle v^{ \prime}|G|v^{\prime}\rangle. \tag{131}\] By adding a regulator \[\Delta S=-\frac{1}{2}\sum_{\mu,\nu=0}^{d-1}\langle v_{\mu}|R_{k}^{\mu\nu}|v_{ \nu}\rangle\, \tag{132}\] we now obtain the exact flow equation for the velocity fields: \[\frac{d(i\Gamma_{k}[\{v_{\mu}\},\{v_{\mu}^{\prime}\}])}{dk}=-\frac{1}{2}\mbox {Tr}\left((\partial_{k}R_{k}^{\mu\nu})(\Gamma_{k}^{(2)}{}^{\nu\mu}-R_{k}^{\nu \mu})^{-1}\right)\, \tag{133}\] which is again the same form as Wetterich's equation. The functional RG method has proven to be very powerful for understanding various properties of fluid systems, such as turbulence [31, 32, 33] and the long-range behaviour of correlation functions [34, 35, 36]. See also Ref. [16] for a recent review. ## 4 Summary and discussion In this paper, we have discussed the basic ideas and formulations of the functional flow approach in equilibrium systems. By considering the response of equilibrium systems to a general variation of the parameters, we obtained functional flow equations for the generating functionals. Conventionally, some IR cut-off parameter is introduced in the two-body interaction \(v_{2}(x,y)\), which leads to the Wetterich-type flow equations. However, this is not the only way to introduce an artificial parameter, and one can also consider similar procedures even for the higher-order potential terms \(v_{n}(\{x_{i}\})\) (\(n\geq 3\)) in general. Such a treatment would be useful when a system exhibits long-range correlations, and our response theory provides a straightforward and systematic way to obtain the functional flow equations for such general cases. In this paper, we have focused on explaining the theoretical bases of response theory and have not studied any concrete systems/models. We would like to investigate them in future publications. ## Acknowledgements We would like to thank Satoshi Iso, Yoshimasa Hidaka, Sinya Aoki, Kengo Shimada and Philip Lu for the valuable discussions and comments.
2302.14582
ManQala: Game-Inspired Strategies for Quantum State Engineering
The ability to prepare systems in specific target states through quantum engineering is essential for realizing the new technologies promised by a second quantum revolution. Here, we recast the fundamental problem of state preparation in high-dimensional Hilbert spaces as ManQala, a quantum game inspired by the West African sowing game mancala. Motivated by optimal gameplay in solitaire mancala, where nested nearest-neighbor permutations and actions evolve the state of the game board to its target configuration, ManQala acts as a pre-processing approach for deterministically arranging particles in a quantum control problem. Once pre-processing with ManQala is complete, existing quantum control methods are applied, but now with a reduced search space. We find that ManQala-type strategies match, or outperform, competing approaches in terms of final state variance even in small-scale quantum state engineering problems where we expect the slightest advantage since the relative reduction in search space is the least. These results suggest that ManQala provides a rich platform for designing control protocols relevant to near-term intermediate-scale quantum technologies.
Onur Danaci, Wenlei Zhang, Robert Coleman, William Djakam, Michaela Amoo, Ryan T. Glasser, Brian T. Kirby, Moussa N'Gom, Thomas A. Searles
2023-02-28T14:05:48Z
http://arxiv.org/abs/2302.14582v1
# ManQala: Game-Inspired Strategies for Quantum State Engineering ###### Abstract The ability to prepare systems in specific target states through quantum engineering is essential for realizing the new technologies promised by a second quantum revolution. Here, we recast the fundamental problem of state preparation in high-dimensional Hilbert spaces as ManQala, a quantum game inspired by the West African sowing game mancala. Motivated by optimal gameplay in solitaire mancala, where nested nearest-neighbor permutations and actions evolve the state of the game board to its target configuration, ManQala acts as a pre-processing approach for deterministically arranging particles in a quantum control problem. Once pre-processing with ManQala is complete, existing quantum control methods are applied, but now with a reduced search space. We find that ManQala-type strategies match, or outperform, competing approaches in terms of final state variance even in small-scale quantum state engineering problems where we expect the slightest advantage, since the relative reduction in search space is the least. These results suggest that ManQala provides a rich platform for designing control protocols relevant to near-term intermediate-scale quantum technologies. ## I Introduction Quantum engineering applies traditional principles of engineering such as design and control to quantum phenomena, devices and systems. In particular, quantum state engineering (the application of control methods to quantum state preparation problems [1; 2]), or QSE, is of interest for quantum computing [3; 4], networking [5; 6], and sensing [7; 8] applications. In general, one can separate QSE into three classes: preparation [9; 10; 11], stabilization [12; 13] and purification [14; 15] of quantum states. Primarily, QSE strategies to prepare a target state in a high-dimensional Hilbert space make use of two techniques, and combinations thereof. The first is based on unitary time evolution with respect to some known control Hamiltonian(s). This evolution is deterministic and known as coherent control, or unitary control [2; 16]. The second technique uses measurement back-action to steer the quantum state stochastically and is known as incoherent control, or control-free [16; 17]. Two examples employing both methods in their QSE strategies are FUMES (fixed unitary evolution and measurements) [18] and Zeno-locked FUMES (Z-FUMES) [19]. The FUMES strategy is based on unitarily evolving a state with respect to a known Hamiltonian up to a point where the fidelity between the state in hand and the target state is maximized, \(\mathcal{F}(\psi,\psi_{\text{targ}})=\langle\psi|\psi_{\text{targ}}\rangle\), then making a probabilistic projective measurement [18]. The Z-FUMES strategy implements the same search algorithm, but makes use of the quantum Zeno effect [20; 21; 22; 23] to "lock", i.e. halt the evolution of, certain subspaces of the system to gradually shrink the total search space [24; 25; 26]. Example applications of measurement back-action methods include the control of qubits for quantum computing [17; 27; 28; 29], the control of quantum optical systems [23; 30], and the control of the critical behaviour of quantum gases [31; 32; 33]. Recently, parallels between control problems and games have emerged, as demonstrated by the application of AlphaZero [34], which was initially developed for playing games like Go and chess, to the optimal control of inverted pendulums [35].
Furthermore, both control problems and games are concerned with selecting actions while interacting with some environment to change the system's state; typically based on some rules and signals from the environment [36]. A substantial amount of interest has been focused on using games in quantum information science not only as a pedagogical tool to develop intuition, but also as a legitimate way to solve problems. For example, a quantum version of sudoku, referred to as SudoQ, was found to be closely related to the topic of mutually unbiased bases [37]. Many quantum analogues to games have been studied, such as Go [38], tic-tac-toe [39; 40], chess [41], blackjack [42], roulette [43], and sudoku [37; 44], with various motivations. In general, a game merges with principles of quantum mechanics by associating game pieces with quantum states and defining operators that evolve the game in analogous ways to the original gameplay. Noticeably, a missing link exists between the emerging fields of QSE and quantum games. For example, experimental implementations of quantum games and associated gameplay require extensive control of the underlying quantum systems and rely on QSE. However, as we will focus on in this paper, the reverse relationship is also meaningful: the development of QSE strategies inspired by games. Here, we introduce a quantum version of the traditional West African sowing game mancala, which we refer to as ManQala, and explore its applicability as a framework for state engineering. We present QSE strategies inspired by the game that match or surpass the performance of traditional unitary or measurement-based strategies. In particular, the advantages persist even when considering small systems and two-site interactions where one could expect the slightest advantage. Importantly, ManQala-inspired strategies show promise at scaling to higher dimensions better than (Z-)FUMES due to small search spaces and the parallelizability associated with their divide-and-conquer approach. Hence, the strategies presented in this paper represent a potential path toward developing game-based QSE techniques with improved scaling properties compared to leading alternatives. This paper is organized as follows. In Section II, we provide an overview of the ManQala game and relate it to the basic goals of QSE. From these results, we present useful methods for developing game-inspired strategies for QSE and compare and contrast these to the strategies outlined by FUMES and Z-FUMES. In Section III, we provide numerical simulations of ManQala, FUMES, and Z-FUMES algorithms for the scenario where the former has the least advantage against the latter. However, clear advantages for ManQala are demonstrated with respect to parallelization and variance. Finally, Section IV concludes the paper. ## II ManQala quantum state engineering strategy ### Overview Mancala is the generic name for a collection of related games played worldwide for over a thousand years. Mancala games consist of pieces called stones (or seeds) and a game board that includes a set of valid locations, referred to as pits, for game piece placement. Additionally, mancala game boards have a special pit, or pits, called the Ruma. Various rule sets exist, but mancala is generally a turn-based game where players alternate moving, or "sowing," stones in counter-clockwise or leftward direction from pit-to-pit in a series of chained operations to collect the stones in a Ruma. 
This Section develops techniques to play quantum mancala, or ManQala, that can be applied to any variation of the traditional game, including multiplayer iterations. However, we have found that even the most basic solitaire versions of mancala have a rich enough game structure to reveal significant and uniquely quantum features of ManQala. For this reason, throughout the rest of this paper, we will focus on Tchoukaillon, a solitaire version of mancala which is particularly amenable to mathematical analysis. Tchoukaillon was developed by Deledicq et al. [45] in 1977 and is itself a derivative of another, much older mancala variant called Tchouka [46]. Game boards in Tchoukaillon are linear and consist of only a single Ruma, which we assume is the left-most pit by convention. The sowing rules state that when a player picks \(N\) stones up from a pit, they leave that pit empty to sow all those stones to the pit on the left-hand side. Then, they proceed by picking up \(N-1\) stones from that pit and sowing them to the next pit on the left, and so on until there are no stones remaining in their hands. Figure 1: **Example game boards for (a) Tchoukaillon (a solitaire variant of mancala) and its direct quantum analogue ManQala in (b).** Here, we show both boards with \(N=3\) stones and \(M=3\) lattice sites, and we represent sowing with arrows (which become two-site unitary operators in ManQala). The sequential unitary actions \(U_{1}\) and \(U_{2}\) in the Figure represent the deterministic quantum analogue of the first two Tchoukaillon moves via site-population permutations. The final step of the Tchoukaillon game has no deterministic unitary realization in the quantum version of the game. Hence, \(U_{3}\) drives the state where the probability of observing the winning board is maximized. Upon observation (projective measurement), the target state is achieved with a probability of \(6/9\) (where an additional unitary may be required), or the board reverts to the configuration before \(U_{3}\), and the final step is repeated until successful. In Tchoukaillon, a player must end each round of sowing by placing the final stone in the Ruma, with under- and over-shooting resulting in losing the game. The win conditions for Tchoukaillon are, in fact, so restrictive that the initial game board ultimately determines whether a win is possible and, if so, prescribes the unique series of moves needed to win [47; 48]. Assuming we begin with a Tchoukaillon board for which winning is possible, then the series of moves that will result in winning can be described succinctly. In particular, the stones from the pits nearest to the Ruma (empty pit at the edge) need to be picked and sowed first, with this strategy repeated, moving to the next non-empty pit until the game board is cleared. In Figure 1(a), we visually present an example of this strategy for a winning board with a Ruma and two additional pits. The game play in Figure 1(a) consists of two types of movement: the first is equivalent to a permutation, such as in the first step when sowing into an empty pit, and the second is a merging movement that combines seeds from two pits. This distinction is significant as we consider a quantum version of the game, since permutations can be performed deterministically with unitary evolution, while merging operations are nondeterministic. Notably, permutation movements alone bring the board "closer" to the final configuration.
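The greedy winning strategy described above is easy to state algorithmically. The following pure-Python sketch, our own illustration rather than code from the original works, plays it out on a linear board with the Ruma at index 0: the cascade empties a pit and deposits one stone in each pit it passes, so a move from pit \(i\) ends exactly in the Ruma only when that pit holds \(i\) stones.

```python
def tchoukaillon_moves(board):
    """Play the greedy Tchoukaillon strategy on a linear board.

    board[0] is the Ruma; board[i] (i >= 1) are the pits, ordered away
    from the Ruma. A sowing move from pit i is legal exactly when the pit
    holds i stones, so that the cascade's final stone lands in the Ruma.
    Returns the list of pits sown from, or None if the board is unwinnable.
    """
    board = list(board)
    moves = []
    while any(board[1:]):
        # greedy rule: always sow from the non-empty pit nearest the Ruma
        i = next(j for j, s in enumerate(board) if j > 0 and s > 0)
        if board[i] != i:
            return None                 # cascade under- or over-shoots the Ruma
        board[i] = 0
        for j in range(i):              # each passed pit gains one stone
            board[j] += 1
        moves.append(i)
    return moves

print(tchoukaillon_moves([0, 1, 2]))    # winning board of Fig. 1(a) -> [1, 2, 1]
print(tchoukaillon_moves([0, 2, 1]))    # greedy play fails here -> None
```

For the board of Figure 1(a), the returned sequence [1, 2, 1] corresponds to the three moves shown there, passing through \((1,0,2)\) and \((2,1,0)\) on the way to \((3,0,0)\).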
This observation that deterministic permutation operations can significantly reduce the number of tunneling events needed to achieve the target state and effectively reduce board size will be the primary strategy we will use throughout this paper as we develop ManQala. Furthermore, even as we move away from problems that have direct analogies with mancala, we will see that pre-processing in this fashion reduces the problem space, making the application of search-based quantum control approaches (e.g., Z-FUMES) more efficient. In this paper, we aim to devise a family of QSE strategies, termed ManQala, that mimic the mechanics of a traditional Tchoukaillon game. For context, in Figure 1(b) we include a simple example of the same game shown in Figure 1(a) but where "stones" are replaced with bosonic states and "pits" by system modes. This now constitutes a quantum system where we can apply ManQala. In this simple ManQala example, "sowing" is performed with unitary operations \(U_{i}\), for \(i=1,2,3\). Even in this basic example, with only three modes and three bosons, the gameplay diverges from mancala in two important ways. First, since the quantum version manipulates amplitudes of quantum systems and avoids "collapse," the game does not result in a "winning" configuration with certainty. Secondly, the coherent evolution of the system potentially violates the directionality rules implied by all mancala games: there are projective measurement outcomes where "stones" move "backwards" in this quantum version. This phenomenon appears in certain outcomes shown in the last step of Figure 1(b). Similar to other state engineering strategies, such as FUMES [18] and the Zeno-locked FUMES (Z-FUMES) [19], ManQala uses two different methods to direct the evolution of a state: coherent unitary evolution and projective measurements. Unlike FUMES and Z-FUMES, ManQala starts by driving the system deterministically to a point where random search via projective measurements is more feasible. Once the deterministic strategy is complete, a stochastic approach, such as (Z-)FUMES, is adopted on one or more subsets of the board while the rest remains Zeno-locked. Additionally, ManQala can also be implemented in a parallel fashion where a given lattice can be fragmented into sublattices, each of which is evolved independently. As stated, the goal in ManQala is similar to that of Tchoukaillon or any other mancala variant: to end the game with all of the seeds in the Ruma. Thus, our aim in ManQala is to engineer a quantum state such that the final state of the board mimics the end of a mancala game. While this goal appears restrictive, in the Appendix, we describe an approach for leveraging ManQala for quantum control problems with completely general target states. ### Formulation for systems with two-site bosonic hopping We now discuss the realization of ManQala-inspired QSE methods for controlling physical systems that exhibit two-site bosonic hopping due to their prevalence in practical situations. Bosonic hopping occurs in a variety of physical systems, including coupled optical cavities [49], coupled waveguides [50], transmons coupled to superconducting cavities [51], and ultracold atoms in optical lattices that obey the Bose-Hubbard (B-H) model [52]. Implementations of the last two systems enable non-demolition measurement [53; 54; 55] of bosonic populations [51; 56; 57].
However, we restrict our states of interest to bosons on a lattice that evolve according to the one-dimensional B-H model due to its simplicity in measurement [57] and its intuitive similarity to a traditional mancala game. Here, each lattice site that carries bosonic modes is analogous to a pit on a mancala board such that the quantum analogue of the sowing operation can be implemented by boson hopping. Therefore, the B-H Hamiltonian governing this scenario is given by \[\hat{H}=-J\sum_{\langle i,j\rangle}\hat{a}_{i}^{\dagger}\hat{a}_{j}+\frac{V}{ 2}\sum_{i}\hat{n}_{i}(\hat{n}_{i}-1)-\mu\sum_{i}\hat{n}_{i}. \tag{1}\] Here, \(\hat{a}_{i}^{\dagger}\) and \(\hat{a}_{i}\) are bosonic creation and annihilation operators such that \(\hat{n}_{i}=\hat{a}_{i}^{\dagger}\hat{a}_{i}\) gives the number of particles on site \(i\) and \(\langle i,j\rangle\) denotes summation over all neighboring sites \(i,j\). The hopping amplitude \(J\) describes the coupling strength between neighboring sites and the parameters \(V\) and \(\mu\) represent the on-site interaction and the chemical potential, respectively. By following the lead of Sorensen et al. [19], we restrict our method to quantum systems described solely by the quadratic hopping part of the B-H Hamiltonian, with the self-interaction potential and the chemical potential set to \(V=0\) and \(\mu=0\). Our motivation for doing so is twofold. First, removing the terms that prevent tunneling aids the search for the target state, similar to using high temperature parameters in the initial stages of simulated annealing algorithms [58]. Second, the quadratic B-H model is exactly diagonalizable in the Heisenberg picture, resulting in closed-form, analytic solutions for unitary evolution once those terms are omitted [59; 50]. In general, the engineering of a quantum state is an optimization problem where the controller interacts with the given system via time-dependent actuation, \(\alpha=(\alpha_{0},\ldots,\alpha_{t},\ldots,\alpha_{T})\), for times \(t\in[0,T]\). Hence, the goal of ManQala is to specify \(\alpha\) at all times using strategies derived from the game as shortcuts to optimization. In ManQala, the control actions at any given time \(t\) are constrained to a set of unitaries and projective measurements, \(\alpha_{t}=(\mathbf{U}_{t},\mathbf{P}_{t})\). Here, a control action of pure coherent evolution, \(\alpha_{t_{ini}}=(\mathbf{U}_{t_{ini}},\mathbf{P}_{t_{ini}}=I)\), implements the time evolution operator \(\mathbf{U}_{t_{ini}}=\exp\left(-iH(t_{f}-t_{ini})\right)\) of the total Hamiltonian that drives the state from its value at the initial time of the segment, \(\left|\psi(t_{ini})\right\rangle\), to that at the final time, \(\left|\psi(t_{f})\right\rangle\), as the measurement projection operator is just the identity \(I\). Similarly, an action of pure projective number state measurements \(\alpha_{t}=\left(\mathbf{U}_{t}=I,\mathbf{P}_{t}=\bigotimes_{j=0}^{M-1}P^{(j)}\right)\) has the effect of probabilistically collapsing the state. Here \(\bigotimes_{j=0}^{M-1}P^{(j)}\) denotes the tensor product of the projective measurement operators at each site \(j\), ranging from \(0\) to \(M-1\).
And, combining those, the action \(\alpha_{t_{ini}}=\left(\mathbf{U}_{t_{ini}}=I,\mathbf{P}_{t_{ini}}=\bigotimes_ {j\neq m}P^{(j)}\right)\) exerts coherent time evolution on all the sites except the sites at \(j=m\) (for \(m\in[0,M-1]\)), unless those obscure inter-site tunneling, via Zeno-locked time-evolution, \(U_{ZL}(t_{ini})=\mathbf{P}_{t_{ini}}\exp\left(-i\mathbf{P}_{t_{ini}}H\mathbf{P }_{t_{ini}}(t_{f}-t_{ini})\right)\)[26; 19; 24]. If, for the given Bose-Hubbard model, the task in hand is steering an initial state, \(\left|\psi_{0}\right\rangle\), towards a target state, \(\left|\psi_{\text{targ}}\right\rangle\), then the control problem can be formulated as a combinatorial optimization problem of finding the actions \(\alpha\) that would minimize a distance metric between states, \(\alpha^{\star}=\underset{\alpha}{\arg\min}\ d(\psi_{0},\psi_{\text{targ}};\alpha)\)[60]. Here this distance metric, \(d\), to be minimized could be coming from Schrodinger, as in the quantum fidelity \(\mathcal{F}(\psi_{1},\psi_{2})=\left\langle\psi_{1}|\psi_{2}\right\rangle\), or Heisenberg picture dynamics for a series of actions or actuators implemented in time, \(\alpha\). Also, this metric can be combined with other goal functions within a cost function to be minimized, \(\mathcal{L}=\sum_{j}\lambda_{j}d_{j}\). Here \(\lambda_{j}\) denotes a Lagrange multiplier for a metric or goal \(d_{j}\), and these multipliers can be chosen to make the cost function a convex sum, \(\sum_{j}\lambda_{j}=1\). For our system of interest, a 1-D bosonic lattice governed by a Bose-Hubbard Hamiltonian, we can define a distance metric based on the number of tunneling events. Pedersen et al. initially proposed such a metric for Fock states to judge the performance of the FUMES algorithm [18]. We can generalize the Pedersen metric to any site population via the following compact form. Given the M dimensional particle number expectation value vectors \(\mathbf{n}_{A}=(\langle n_{0}\rangle_{\psi_{A}},\ldots,\langle n_{M-1} \rangle_{\psi_{A}})\), and, \(\mathbf{n}_{B}\) for states (or density matrices) \(\left|\psi_{A}\right\rangle\) and \(\left|\psi_{B}\right\rangle\), the number of tunneling events is given by \[d_{T}\left(\mathbf{n}_{A},\mathbf{n}_{B}\right)=\sum_{k=0}^{M-2}\left|\left( \mathbf{n}_{A}-\mathbf{n}_{B}\right)_{k}+\left(\mathbf{n}_{A}-\mathbf{n}_{B} \right)_{k+1}\right|. \tag{2}\] The global optimum of the number of tunneling events and the fidelity are the same when steering a system from an initial state to a target state. The same cannot be said for the intermediate steps. If we are given specific initial and target states, or their particle number expectation value vectors (\(\mathbf{n}_{0}\) and \(\mathbf{n}_{\text{targ}}\), respectively), we can define a bosonic distance akin to quantum (in-)fidelity for the particle number expectation value vector, \(\mathbf{n}_{A}\), of a (probably unknown) state, \(\left|\psi_{A}\right\rangle\), using the number of tunneling events given above, as the following, \[d_{B}\left(\mathbf{n}_{A},\mathbf{n}_{\text{targ}};\mathbf{n}_{0}\right)=1- \frac{d_{T}(\mathbf{n}_{A},\mathbf{n}_{\text{targ}})}{d_{T}(\mathbf{n}_{0}, \mathbf{n}_{\text{targ}})}. \tag{3}\] In this current form, the bosonic distance metric starts from zero for \(\mathbf{n}_{A}=\mathbf{n}_{0}\) and takes on the maximum value of unity when \(\mathbf{n}_{A}=\mathbf{n}_{\text{targ}}\) due to scaling with the number of tunneling events between the initial and target states. 
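Eqs. (2) and (3) translate directly into code. The short sketch below, our own illustration, evaluates both metrics for the \(|0,1,2\rangle\to|3,0,0\rangle\) problem considered in Section III.

```python
import numpy as np

# Direct transcription of the tunneling-event metric (2) and the bosonic
# distance (3); the example populations reproduce the |0,1,2> -> |3,0,0>
# problem of Fig. 1(b).

def d_tunnel(n_a, n_b):
    diff = np.asarray(n_a, dtype=float) - np.asarray(n_b, dtype=float)
    return np.abs(diff[:-1] + diff[1:]).sum()   # sum over k = 0 .. M-2

def d_bosonic(n_a, n_targ, n_0):
    return 1.0 - d_tunnel(n_a, n_targ) / d_tunnel(n_0, n_targ)

n_0, n_targ = (0, 1, 2), (3, 0, 0)
print(d_tunnel(n_0, n_targ))                # 5 tunneling events
print(d_bosonic(n_0, n_targ, n_0))          # 0.0 at the initial state
print(d_bosonic((2, 1, 0), n_targ, n_0))    # 0.8 after the deterministic stage
print(d_bosonic(n_targ, n_targ, n_0))       # 1.0 at the target
```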
ManQala initially tries to minimize a cost function \[\mathcal{L}_{M}=\lambda_{1}\left(1-d_{B}\left(\mathbf{n}(\alpha),\mathbf{n}_{ \text{targ}};\mathbf{n}_{0}\right)\right)+\lambda_{2}N_{P}(\alpha)+\lambda_{3} M_{C}(\alpha). \tag{4}\] Here the term \(\mathbf{n}(\alpha)\) denotes the site populations after the actions \(\alpha\) were implemented on the initial populations and \(N_{P}(\alpha)\) denotes the number of projective measurements we apply to our system as actions. Assuming a high \(\lambda_{2}\), we try to avoid stochastic methods (e.g., Z-FUMES) while minimizing \(d_{B}\), but use deterministic unitary actions (site population permutations) instead. Also, if we want to be consistent with mancala, we can penalize the actions that are inconsistent with it via a term \(M_{C}(\alpha)\). For example, we can penalize the unitary actions that do not follow mancala's sowing rule. On the other hand, this condition can be relaxed by tuning \(\lambda_{3}\), or dropped altogether (\(\lambda_{3}=0\)) as a modified-ManQala (mod-ManQala) algorithm without losing any generality, while converging faster. This is considered in Appendix D and can be achieved by Zeno-locking all other sites. Of note, a target board to be reached (e.g., the winning condition) can be decomposed into sub-boards where each sub-board population can be thought of as a pit. For example, for the three-pit board configuration we examine here, the target board can be decomposed into sub-boards (sub-lattices) of \(\{2-pits,1-pit\}\). For such segmentation of the board, the target board is demarcated into sub-board populations \((3,0,0)\rightarrow((3,0),0)\rightarrow(3,0)\). Using that, we can also split our initial board configuration in hand the same way \((0,1,2)\rightarrow((0,1),2)\rightarrow(1,2)\). Both in the traditional and the quantum game, bringing the board into the winning condition first requires bringing it to the target sub-board (\((3,0)\) in our example). The winnable board configurations enable the player to move the stones from the pits with a small number of stones right away into the target sub-board, effectively decoupling from the moves of the pits with a larger number of stones that could overshoot the Ruma. In ManQala, whether or not mimicking each traditional move, this winnable-board intuition is actualized by moving the particles from the sites with a small number of particles right away to their designated smaller sub-lattices deterministically, only via unitary evolution, so that we can Zeno-lock them. This way we can move the particles from sites with many particles to their designated, larger, target sub-lattice. Having locked the (sub)-lattices with few particles, and moved the rest to their sub-lattice, we can use probabilistic search methods via measurement backaction on this previously unlocked sublattice to find the right set of actions that would steer our system to the target state. To help clarify the differences between ManQala, FUMES, and Z-FUMES, we include Table 1. A more detailed description of these differences is available in Appendix B. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Strategy & Zeno-lock & \(\mathcal{F}\) & \(d_{B}\) & Parallel & \(M_{C}\) \\ \hline \hline ManQala & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline mod-ManQala & ✓ & ✓ & ✓ & ✓ & \(\times\) \\ \hline FUMES & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) \\ \hline Z-FUMES & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) \\ \hline \end{tabular} \end{table} Table 1: **Summary of strategies compared in this paper.** FUMES conducts a greedy, stochastic, non-parallelizable search based on fidelity \(\mathcal{F}\) alone. Z-FUMES conducts the same search, but gradually shrinks the search space via Zeno-locking. Both (mod-)ManQala start by minimizing the bosonic distance, \(d_{B}\), deterministically, by permuting the populations of two or three sites at a time to minimize stochasticity and shrink the search space, and then implement Z-FUMES in parallel on various sub-lattices. Unlike ManQala, mod-ManQala does not necessarily adhere to the rules of mancala, as expressed by the term \(M_{C}\) in Eq. 4. ## III Numerical simulations and performance comparisons In the previous section, we presented the theoretical framework for a game-inspired quantum state engineering (QSE) strategy we call ManQala. Here, we provide numerical simulations of the performance of ManQala and compare them against FUMES and Z-FUMES. For clarity, we will consider in this section the same problem illustrated in Figure 1(b).
Here, the initial state is given by \(\left|\psi\right\rangle_{0}=\left|0,1,2\right\rangle_{\text{Fock}}\) and the target state by \(\left|\psi\right\rangle_{t}=\left|3,0,0\right\rangle_{\text{Fock}}\), where each ket represents, for example, modes in a bosonic lattice. The Hilbert space dimension is \(R=\binom{N+M-1}{N}=10\) for \(M=3\) sites and \(N=3\) particles. Thus, in this configuration, there are \(R=10\) possible Fock states. Assuming prior knowledge of the total number of particles \(N\) in the system, we can deduce the state of the entire lattice by measuring only \(M-1\) of the sites. Hence, for the problem we consider here with \(N=M=3\), we can represent the state of the system as a sequence of two numbers, in this case, the number of particles in the leftmost site (Ruma) and the next nearest site. We denote all ten possible configurations as \(\mathbf{L}=[\mathbf{I}^{(0)},\ldots,\mathbf{I}^{(9)}]=[(0,1),(0,0),(0,2),(1,0),(1,2),(2,1),(1,1),(0,3),(2,0),(3,0)]_{\text{Fock}}\). In the particular example shown in Figure 1(b), \(\mathbf{I}^{(0)}\) and \(\mathbf{I}^{(9)}\) are the initial and target states, respectively. The FUMES protocol uses unitary evolution to maximize the fidelity with the final state, followed by a single projective measurement. To find these unitary evolutions, we first identify the distance between all possible states \(\mathbf{L}\) and the target state. For each state in \(\mathbf{L}\), there is a corresponding unitary time-evolution duration that maximizes the probability of observing the target state, and these durations can be written in a vector of the form \(\mathbf{T}=\left[t_{\text{design}}^{(0)},\ldots,t_{\text{design}}^{(9)}\right]\). For the target state \(\mathbf{I}^{(9)}\) we obtain these values using QuTiP [61] by numerically solving Schrodinger's equation via exact diagonalization. The resulting time scales are given by \(\mathbf{T}=\left[1.66,\,2.22,\,1.33,\,1.33,\,0.89,\,0.555,\,1.11,\,0.11,\,0.89,\,0\right]\). If we instead consider situations where we Zeno-lock certain sites, such as in Z-FUMES, the optimal time-evolution values change. Zeno-locking only occurs in this example when the \(j=2\) site has the targeted number of particles (in this case zero particles) in it, \(\left|\cdot,\cdot,0\right\rangle\), and not the others, \(\left|\cdot,0,\cdot\right\rangle\) and \(\left|0,\cdot,\cdot\right\rangle\). Locking the \(j=0\) (Ruma) site when it has zero particles would be counterproductive as the target state has nonzero particles in that site. However, one may be tempted to lock the intermediate site (\(j=1\)) when it has the target number of particles (zero), but this would obstruct the tunneling between the sites \(j=0\) and \(j=2\). Zeno-locking the site \(j=2\) when it has zero particles in it reduces the set of possible configurations \(\mathbf{L}\) to the subset \(\mathbf{L}_{Z}=[(1,2),(2,1),(0,3),(3,0)]_{\text{Fock}}\). The designated time durations for this case become \(\mathbf{T}_{Z}=[0.953,\,0.615,\,1.567,\,0]\), where the designated durations for the states evolved under Zeno-locking replace the corresponding values from the usual FUMES while the rest are kept the same. Hilbert space reduction using Zeno-locking, as shown in the difference in dimensionality between \(\mathbf{T}\) and \(\mathbf{T}_{Z}\), is precisely why Z-FUMES outperforms FUMES in general. Note, however, that Z-FUMES only applies Zeno locking in a stochastic fashion. For example, if Z-FUMES encounters a \(0\) on the second site and Zeno-locks it so that the two-site sublattice to the left has the correct amount of particles, the search space reduces to
For example, if Z-FUMES encounters a \(0\) on the second site and Zeno-locks it so that the two-site sublattice to the left has the correct amount of particles, the search space reduces to \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Strategy & Zeno-lock & \(\mathcal{F}\) & \(d_{B}\) & Parallel & \(M_{C}\) \\ \hline \hline ManQala & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline modified- & ✓ & ✓ & ✓ & ✓ & \(\times\) \\ ManQala & & & & & \\ \hline FUMES & \(\times\) & ✓ & \(\times\) & \(\times\) & \(\times\) \\ \hline Z-FUMES & ✓ & ✓ & \(\times\) & \(\times\) & \(\times\) \\ \hline \end{tabular} \end{table} Table 1: **Summary of strategies compared in this paper.** FUMES conducts a greedy, stochastic, non-parallelizable search based on fidelity \(\mathcal{F}\) alone. Z-FUMES conducts the same search, but gradually shrinks the search-space via Zeno-lock. Both (mod-)ManQala start by minimizing bosonic distance, \(d_{B}\), deterministically via permuting the populations of one two or three-site at a time to minimize stochasticity & shrink search space and then implements Z-FUMES in parallel at various sub-lattices. Unlike ManQala, mod-ManQala does not necessarily adhere to the rules of mancala, as expressed by term \(M_{C}\) in Eq. 4. \(R=\binom{N+M-1}{N}=4\) for \((N=3,M=2)\). In comparison, ManQala and its variants drive the system deterministically exactly to this configuration instead of waiting to encounter it during the random search. Finally, we note that since FUMES does not use Zeno-locking at all the algorithm searches the entire \(R=10\) dimensional space. Hence, intuitively, we expect both ManQala and Z-FUMES to outperform FUMES in terms of resources required to reach the target state. To reach the target state using ManQala, that is limiting the ability to permute to two-sites at a time while adhering to theowing rules of mancala, we begin by driving our system to the \(\mathbf{L}_{Z}\) subspace and Zeno-locking the \(j=2\) mode. To achieve this we use three two-site unitaries of \(\Delta t=\pi/2\). Here each unitary permutes the particle population of adjacent Fock states of sites \(j\) and \(j+1\) carrying \(N\) particles, \(U(\pi/2)\ket{k,N-k}=\exp\left(-i(a_{j}a_{j+1}+\text{H.C})\pi/2\right)\ket{k,N- k}=\ket{N-k,k}\). In the case given in Figure 1, we need a total deterministic evolution time of \(\Delta t=3\pi/2\) to drive the system into the Zeno-locked configuration of \(\ket{2,1,0}\). Once there we apply the aforementioned Z-FUMES with time durations \(T_{Z}\). Alternatively, if we instead use mod-ManQala, we require only one three-site unitary of duration \(\Delta t=\sqrt{2}\pi/2\) to deterministically reach the Zeno-locked configuration due to relaxing the need to imitate mancala. Here each unitary acting on Fock states of sites \(j,j+1,j+2\) permutes the populations of sites \(j\) and \(j+2\), while leaving \(j+1\) untouched. The three-site unitary given by \(U(\sqrt{2}\pi/2)=\exp\left(-i\sqrt{2}(a_{j}a_{j+1}+a_{j+1}a_{j+2}+\text{H.C}) \pi/2\right)\) implements the permutation \(U(\sqrt{2}\pi/2)\ket{k,l,N-l-k}=\ket{N-l-k,l,k}\). To be more specific, consider a demarcation of the three-dimensional lattice in sub-lattices \(\{2-site,1-site\}\), i.e, \(\mathbf{\tilde{S}^{*}}=[s_{1},s_{2}]=[(0,1),2]\). The initial site population for this \(\mathbf{\tilde{S}^{*}}\) sub-lattice segmentation is \(\mathbf{n}_{0}=((0,1),2)=(1,2)\). 
To be more specific, consider a demarcation of the three-site lattice into the sub-lattices \(\{2\text{-site},1\text{-site}\}\), i.e., \(\mathbf{\tilde{S}}^{*}=[s_{1},s_{2}]=[(0,1),2]\). The initial site population for this \(\mathbf{\tilde{S}}^{*}\) sub-lattice segmentation is \(\mathbf{n}_{0}=((0,1),2)=(1,2)\). ManQala tries to be as consistent as possible with the winning strategy of the analogous classical Tchoukaillon board, and hence drives the system into the populations \(\mathbf{n}_{M}^{\star}=((2,1),0)=(3,0)\) with respect to the segmentation \(\mathbf{\tilde{S}}\) and corresponding permutation \(\tilde{\pi}\). The permutation driving the initial populations into the Zeno-locked sub-lattice \((3,0)\) is the following:

\[\tilde{\pi}_{M}^{\star}=\begin{pmatrix}0&1&2\\ 2&1&0\end{pmatrix} \tag{5}\]

Here \(\tilde{\pi}_{M}^{\star}\) denotes the site-population permutation for both the ManQala and mod-ManQala strategies, i.e., exchanging sites \(0\) and \(2\) while keeping site \(1\) fixed. Its corresponding matrix representation \(\tilde{P}_{\tilde{\pi}_{M}^{\star}}\) is a column-reversed identity matrix. In the segmentation \(\mathbf{\tilde{S}}^{*}\), both strategies yield \(\eta=\left(\tilde{P}_{\tilde{\pi}}\mathbf{n}_{0}-\mathbf{n}_{\text{targ}}\right)=(0,0)\). In practice, we first identify these permutation operators and then compile them into two- and three-site unitaries (i.e., "moves" in the previous section) based on the traditional game constraints, or lack thereof.

Figure 2: **Numerical simulation of game-inspired quantum state engineering**. Given the initial state \(\left|0,1,2\right\rangle\), the average (expected) bosonic distance of observing the target state \(\left|3,0,0\right\rangle\) for different quantum state engineering strategies, computed using 1000 stochastic trajectories each in QuTiP. (a) For each stochastic trajectory, the non-deterministic part of a given strategy is repeated until convergence. The dashed lines mark when each strategy achieves \(0.99\) bosonic distance \(d_{B}\) (defined in the text). (b) The non-deterministic part is repeated only twice for each stochastic trajectory. Red (FUMES), green (Z-FUMES), blue (ManQala), and purple (mod-ManQala) curves denote the average \(d_{B}\) of a given strategy over 1000 trajectories, while the shaded areas are the respective standard deviations. Note that a projective measurement does not occur until the emergence of a shaded area (coloring) corresponding to a standard deviation.

Figure 2 shows the comparison of the expected bosonic distance between the different quantum state engineering strategies for the initial state \(\left|\psi_{0}\right\rangle=\left|0,1,2\right\rangle\) and target state \(\left|\psi_{\text{targ}}\right\rangle=\left|3,0,0\right\rangle\) over one thousand stochastic Monte Carlo trajectories. In terms of the bosonic distance, FUMES, Z-FUMES, and mod-ManQala start with a steep linear ramp, as they all implement a three-site interaction. When the fidelity between the state at hand and the target state is maximized at \(Jt_{\text{design}}^{(0)}=1.66\), both (Z-)FUMES apply a projective measurement, leading to different stochastic trajectories (the green and red coloring in Figure 2 represents their standard deviations). Both (mod-)ManQala aim to reach \(\left|2,1,0\right\rangle\) deterministically, and their bosonic distances ascend during their time evolution, albeit more slowly for ManQala due to its constraints. Once ManQala achieves a local peak at \(Jt=3\pi/2\), and mod-ManQala achieves the same at \(Jt=\sqrt{2}\pi/2\), they make a downward turn as they are now both optimizing for fidelity (using Z-FUMES): each evolves for \(Jt_{\text{design}}^{Z}=0.615\) in the Zeno-locked space (i.e., the second element of \(\mathbf{T}_{Z}\)) and makes a projective measurement at that point (leading to the blue and purple coloring for their respective variances).
In this Zeno-locked space, mod-ManQala follows the designated measurement times \(\mathbf{T}_{Z}\) based on the measurement results, while ManQala, following the rules of the traditional game, always brings the system back to \(\left|2,1,0\right\rangle\) and repeats. Because of this, the mean (expected) trajectory progresses differently in time for ManQala, and mod-ManQala converges to \(d_{B}=0.99\) much faster (\(Jt\sim 5.1\)), although the two achieve the same exact statistics, as shown in Figure 2(b). Since we drive the system into Zeno-locked subspaces deterministically and query a smaller search space, (mod-)ManQala has much less variance. In Figure 2(a), where each protocol is repeated until achieving unity fidelity, FUMES has an average standard deviation in bosonic distance of \(0.21\) before reaching \(0.99\) bosonic distance, while Z-FUMES has \(0.17\), mod-ManQala has \(0.06\), and ManQala has \(0.06\). This phenomenon is much more pronounced in Figure 2(b), as just a small number of protocol repetitions is enough to achieve near-unity fidelities with the target state. For completeness, we provide results related to the repeated application of all three protocols in Appendix C.

To further illustrate the difference between these quantum state engineering strategies, we compare the expected particle numbers at each site, with the same aforementioned initial and target states, as shown in Figure 3. This figure shows individual (randomly chosen but representative) evolutions for FUMES, Z-FUMES and mod-ManQala, whereas Figure 2 averages over many such evolutions. While not as useful for understanding the general behavior of each protocol, Figure 3 provides an intuitive picture of how the (Z-)FUMES and mod-ManQala strategies differ in their overall approach to solving the same problem. Note that the dashed lines in Figure 3 indicate projective measurements. (Z-)FUMES implements a measurement each time the probability of observing the target state is maximized in a given (Zeno-locked) configuration. Alternatively, (mod-)ManQala strategies evolve the sites deterministically towards states where Zeno-locking can be used to reduce the Hilbert space (in this case, making the population in the \(j=2\) mode zero). As pictured in Figure 3, this particular stochastic run succeeds in projecting to the correct state in panel (c) on the first projection; if this were not the case, mod-ManQala would again maximize fidelity before repeating the projective measurement. Note that (Z-)FUMES keeps evolving the states in time towards the next projective measurement even when the particle expectation values, as well as the bosonic distance \(d_{B}\) we defined, are frozen in time. In Figures 3(a) and 3(b), the first measurement, represented by a dashed line, projects the system into the Mott-insulator state \(\left|1,1,1\right\rangle\), where particle expectations do not change in time under the unitary evolution. The second projective measurement, represented by a second dashed line, takes them out of this state.

In this section, we have considered scenarios where the target state mimics the end state of the original solitaire mancala game, meaning all bosons end up in a single mode. However, we note that ManQala can be applied more generally to problems with arbitrary target states. These general approaches require us to loosen our adherence to mancala (game rules, directions).
For concreteness, in Appendix D we describe how ManQala can be applied to two important physical systems, those of superfluid and Mott-insulator systems with site and particle numbers of five. In particular, we show in Appendix D how the ManQala framework can divide larger, hard-to-tackle problems into small, manageable, and parallelizable ones.

Figure 3: **Time evolutions of the expected number of particles for three selected stochastic trajectories: (a) FUMES, (b) Z-FUMES and (c) mod-ManQala**. The horizontal axis shows the site number and the color corresponds to the expected particle number. The dashed lines represent the times at which a projective measurement occurs. Note that in (a) and (b), the time between the first and second projective measurements is where the Heisenberg-picture dynamics halt (Mott-insulator state). In (c) there is only one probabilistic event (measurement). This figure represents a single (randomly chosen but representative) stochastic instance for each strategy out of many possibilities.

## IV Conclusion

In this paper, we devised a quantum engineering strategy that we term ManQala, inspired by the traditional solitaire game Tchoukaillon, and illustrated the differences between our approach and other competing strategies. In particular, we provided numerical comparisons of ManQala and ManQala-inspired strategies against FUMES and Z-FUMES. In all cases we found that ManQala strategies ultimately match or outperform FUMES and Z-FUMES. ManQala augments existing quantum state engineering strategies by adding a preprocessing stage that consists of deterministic unitary permutations. These permutations reduce the Hilbert space of the problem, improving the performance of search-based strategies such as FUMES and Z-FUMES. More specifically, FUMES is a greedy, stochastic algorithm that optimizes the fidelity between the initial and target state through unitary evolution of the whole Hamiltonian followed by the collapse of the state by observation. On the other hand, Z-FUMES uses the same stochastic and greedy algorithm as FUMES but with the additional feature that if we end up in subspaces of the target state during probabilistic jumps, we can Zeno-lock these subspaces and only evolve the remaining sites. By contrast, ManQala-based strategies intentionally drive the system into target subspaces/sublattices of the overall state through deterministic unitary evolution, into configurations that allow for an overall reduction in Hilbert space size through the Zeno-locking of certain modes. Then, once demarcated, ManQala strategies use local algorithms to control each subspace/sublattice to minimize the bosonic hopping distance. ManQala's use of subspaces and sublattices naturally lends itself to parallelization, which would further improve the performance of ManQala over existing strategies as problem spaces increase in dimension. In our formulation of ManQala, we have chosen to use the bosonic hopping distance to inform the algorithm's actions instead of overall fidelity. Since ManQala focuses on sublattices, within the execution of the protocol the overall fidelity may decrease before rapid improvement. This decrease in fidelity is due to the algorithm focusing on bosonic distances between sublattices at any individual step and not the global fidelity. Ultimately, our quantum version of a mancala game has provided a helpful framework for thinking about state engineering in quantum systems.
The observed performance advantages of ManQala over competing strategies suggest an exciting connection between games and quantum systems engineering, and future work should continue exploring this relationship. ###### Acknowledgements. We acknowledge the primary support of this work from the IBM-HBCU Quantum Center at Howard University. Additionally, the views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. Additionally, this material is based upon work supported by, or in part by, the Army Research Laboratory and the Army Research Office under contract/grant numbers W911NF-19-2-0087 and W911NF-20-2-0168. TAS's contribution is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704.
2309.11788
New combinatorial perspectives on MVP parking functions and their outcome map
In parking problems, a given number of cars enter a one-way street sequentially, and try to park according to a specified preferred spot in the street. Various models are possible depending on the chosen rule for collisions, when two cars have the same preferred spot. We study a model introduced by Harris, Kamau, Mori, and Tian in recent work, called the MVP parking problem. In this model, priority is given to the cars arriving later in the sequence. When a car finds its preferred spot occupied by a previous car, it "bumps" that car out of the spot and parks there. The earlier car then has to drive on, and parks in the first available spot it can find. If all cars manage to park through this procedure, we say that the list of preferences is an MVP parking function. We study the outcome map of MVP parking functions, which describes in what order the cars end up. In particular, we link the fibres of the outcome map to certain subgraphs of the inversion graph of the outcome permutation. This allows us to reinterpret and improve bounds from Harris et al. on the fibre sizes. We then focus on a subset of parking functions, called Motzkin parking functions, where every spot is preferred by at most two cars. We generalise results from Harris et al., and exhibit rich connections to Motzkin paths. We also give a closed enumerative formula for the number of MVP parking functions whose outcome is the complete bipartite permutation. Finally, we give a new interpretation of the MVP outcome map in terms of an algorithmic process on recurrent configurations of the Abelian sandpile model.
Thomas Selig, Haoyue Zhu
2023-09-21T05:31:08Z
http://arxiv.org/abs/2309.11788v2
# New combinatorial perspectives on MVP parking functions and their outcome map ###### Abstract. In parking problems, a given number of cars enter a one-way street sequentially, and try to park according to a specified preferred spot in the street. Various models are possible depending on the chosen rule for _collisions_, when two cars have the same preferred spot. We study a model introduced by Harris, Kamau, Mori, and Tian in recent work, called the _MVP parking problem_. In this model, priority is given to the cars arriving later in the sequence. When a car finds its preferred spot occupied by a previous car, it "bumps" that car out of the spot and parks there. The earlier car then has to drive on, and parks in the first available spot it can find. If all cars manage to park through this procedure, we say that the list of preferences is an MVP parking function. We study the outcome map of MVP parking functions, which describes in what order the cars end up. In particular, we link the fibres of the outcome map to certain subgraphs of the inversion graph of the outcome permutation. This allows us to reinterpret and improve bounds from Harris _et al._ on the fibre sizes. We then focus on a subset of parking functions, called _Motzkin parking functions_, where every spot is preferred by at most two cars. We generalise results from Harris _et al._, and exhibit rich connections to Motzkin paths. We also give a closed enumerative formula for the number of MVP parking functions whose outcome is the complete bipartite permutation. Finally, we give a new interpretation of the MVP outcome map in terms of an algorithmic process on recurrent configurations of the Abelian sandpile model. ## 1. Introduction In this section we introduce classical and MVP parking functions, and their outcome maps. Throughout the paper, \(n\) represents a positive integer, and we denote \([n]:=\{1,\cdots,n\}\). ### Parking functions and their variations A _parking preference_ is a vector \(p=(p_{1},\cdots,p_{n})\in[n]^{n}\). We think of \(p_{i}\) as denoting the preferred parking spot of car \(i\) in a car park with \(n\) labelled spots. The car park is one-directional, with cars entering on the left in spot \(1\) and driving through to spot \(n\) (or until they park). Cars enter sequentially, in order \(1,\cdots,n\). If the spot \(p_{i}\) is unoccupied when car \(i\) enters, it simply parks there. If this is not the case, then a previous car \(j<i\) has already occupied spot \(p_{i}\). We call this a _collision_ between cars \(i\) and \(j\). In classical parking functions, such collisions are handled by giving priority to the earlier car \(j\). This means that car \(i\) is forced to drive on, and looks for the first unoccupied spot \(k>p_{i}\). If no such spot exists, then car \(i\) exits the car park, having failed to find a spot. We say that \(p\) is a _parking function_ if all cars manage to park. Parking functions were originally introduced by Konheim and Weiss [19] in their study of hashing functions. Since then, they have been a popular research topic in Mathematics and Computer Science, with rich connections to a variety of fields such as graph theory, representation theory, hyperplane arrangements, discrete geometry, and the Abelian sandpile model [6, 10, 11, 14, 25, 27]. We refer the interested reader to the excellent survey by Yan [29]. One may notice that the collision rule for parking functions has many possible variations, and indeed many variants of parking functions have been studied in the literature. 
* **Defective parking functions** [5]. In this model, \(m\) cars enter a one-way street with \(n\) parking spots, and try to park following the classical parking rules. If \(k\) cars are not able to park, we call the parking preference a _defective parking function of defect \(k\)_. These correspond to classical parking functions when taking \(m=n\) and \(k=0\).
* **Naples parking functions** [8, 9]. In this model, when a car's preferred spot is occupied, it can first reverse up to some fixed number \(k\) of spots to try to park, before driving on as in the classical parking function. These correspond to classical parking functions when \(k=0\).
* **Parking assortments and parking sequences** [1, 7, 15, 16]. In these models we have cars of different sizes, with just enough space in total for all cars to park. A car will try to park in the first available spot on or after its preference, but can only park there if there is sufficient space (i.e. the number of consecutive available spots at that location is greater than or equal to the car's size). In parking sequences, if this is not the case, the car immediately gives up and exits, whereas in parking assortments it will drive on and attempt to find a large enough space further along in the car park. Both models correspond to classical parking functions if all cars have size 1.
* **Vector parking functions**, or \(\mathbf{u}\)-parking functions [24, 30]. Given a parking preference \(p=(p_{1},p_{2},\cdots,p_{n})\) and a vector \(\mathbf{u}=(u_{1},u_{2},\cdots,u_{n})\), we say that \(p\) is a \(\mathbf{u}\)-parking function if its non-decreasing rearrangement \((a_{1},a_{2},\cdots,a_{n})\) satisfies \(a_{i}\leq u_{i}\) for all \(i\in[n]\). These correspond to classical parking functions in the case where \(\mathbf{u}=(1,2,\cdots,n)\) (see e.g. [29, Section 1.1] for this characterisation of classical parking functions).
* **Graphical parking functions** on some graph \(G\), or \(G\)-parking functions [20]. In this model, cars try to park on vertices of a graph instead of in a one-way street. Like classical parking functions, \(G\)-parking functions are also connected with the Abelian sandpile model, via a bijection to its recurrent configurations [20, Lemma 13.6].
* **Higher-dimensional parking functions**. There are various models of these. The first, called \((p,q)\)**-parking functions**, were introduced by Cori and Poulalhon [10], in connection with the Abelian sandpile model on complete bipartite graphs with one extra distinguished root vertex. Dukes [14] introduced a notion of **tiered parking functions**, which he connected to the Abelian sandpile model on complete split graphs. Both of these models involve cars of different colours or tiers, with extra conditions on where a car can park depending on its tier/colour. Motivated by previous work in this direction, Snider and Yan [23, 24] defined a notion of **multidimensional U-parking functions**, where \(\mathbf{U}=\{(u_{i,j},v_{i,j})_{1\leq i\leq p,1\leq j\leq q}\}\) is a set of multidimensional vectors. These generalise both the \((p,q)\)-parking functions of Cori and Poulalhon, and the (one-dimensional) vector parking functions discussed above (see also [18] for a beautiful connection to Goncarov polynomials).

In this paper, we are interested in another variant called _MVP parking functions_, introduced recently by Harris _et al._ [17]. In this model, if there is a collision between two cars \(j<i\), priority is given to the later car \(i\).
In other words, car \(i\) will park in its preferred spot \(p_{i}\). If that spot is already occupied by a previous car \(j\), then car \(j\) gets "bumped" out, and has to drive on. It then (re-)parks in the first available spot \(k\geq p_{i}\). Note that bumpings do not propagate: the "bumped" car \(j\) does not subsequently bump any other car. If all cars manage to park in this process, we say that \(p\) is an MVP parking function. It is in fact straightforward to check that a parking preference \(p\) is an MVP parking function if, and only if, \(p\) is a (classical) parking function. Indeed, in both MVP and classical processes, in determining whether all cars can park, the labels of the cars are unimportant: all that matters is which set of spots is occupied at any given time. We denote \(\mathrm{PF}_{n}\) or \(\mathrm{MVP}_{n}\) the set of parking functions of length \(n\). These sets are the same due to the previous observation, but it will be convenient to use different notation depending on whether we are considering the classical or MVP parking process.

### The outcome maps

While the sets of MVP and classical parking functions are the same, these two processes differ in their _outcome map_. This map describes where the cars end up. More precisely, if \(p\) is a parking function, its _outcome_ is a permutation \(\pi=\pi_{1}\cdots\pi_{n}\), where for all \(i\in[n]\), \(\pi_{i}\) is the label of the car occupying spot \(i\) when all cars have parked. The classical, resp. MVP, outcome map, denoted \(\mathcal{O}_{\mathrm{PF}_{n}}\), resp. \(\mathcal{O}_{\mathrm{MVP}_{n}}\), is then the map \(p\mapsto\pi\) describing the outcome of the classical, resp. MVP, parking process. The following example illustrates the classical and MVP parking processes, and shows how their outcomes may differ.

**Example 1.1**.: Consider the parking function \(p=(3,1,1,2)\). Under the classical parking process in Figure 1, car 1 first parks in spot 3, followed by car 2 parking in spot 1. Then car 3 wishes to park in spot 1 but cannot do so, so it drives on and parks in spot 2 (the first available spot at this point). Finally, car 4 wishes to park in spot 2. However, 2 is occupied, so car 4 drives on: 3 is also occupied (by car 1), so car 4 ends up parking in spot 4. Finally, we get the outcome \(\pi=\mathcal{O}_{\mathrm{PF}_{4}}\left(p\right)=2314\). Now consider the same parking function \(p\), but for the MVP parking process. In Figure 2, again, cars 1 and 2 park in spots 3 and 1 respectively. Now car 3 arrives, and sees car 2 in its preferred spot (spot 1). It bumps car 2 out of spot 1, forcing it to drive on. Spot 2 is available, so car 2 parks there. Finally car 4 arrives and sees car 2 in its preferred spot (spot 2). It bumps car 2, forcing it to drive on and park in the only remaining spot, which is spot 4. Finally, we get the outcome \(\pi=\mathcal{O}_{\mathrm{MVP}_{4}}\left(p\right)=3412\).

Figure 1. The classical parking process with \(p=(3,1,1,2)\) and \(\mathcal{O}_{\mathrm{PF}_{4}}\left(p\right)=2314\).

Figure 2. The MVP parking process with \(p=(3,1,1,2)\) and \(\mathcal{O}_{\mathrm{MVP}_{4}}\left(p\right)=3412\).
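Both parking processes are straightforward to simulate. The following sketch (ours, not from the paper; plain Python) implements the two outcome maps and reproduces Example 1.1:

```python
def classical_outcome(p):
    """Classical process: the earlier car keeps the contested spot."""
    n = len(p)
    spots = [None] * (n + 1)            # 1-indexed; spots[i] = car in spot i
    for car, pref in enumerate(p, start=1):
        k = pref
        while k <= n and spots[k] is not None:
            k += 1                       # drive on to the next free spot
        if k > n:
            return None                  # not a parking function
        spots[k] = car
    return spots[1:]

def mvp_outcome(p):
    """MVP process: the later car bumps the earlier one, which re-parks."""
    n = len(p)
    spots = [None] * (n + 1)
    for car, pref in enumerate(p, start=1):
        bumped = spots[pref]             # earlier occupant, if any
        spots[pref] = car                # the newcomer always takes its spot
        if bumped is not None:           # bumped car re-parks, never bumping
            k = pref + 1
            while k <= n and spots[k] is not None:
                k += 1
            if k > n:
                return None
            spots[k] = bumped
    return spots[1:]

p = (3, 1, 1, 2)
print(classical_outcome(p))  # [2, 3, 1, 4], i.e. 2314
print(mvp_outcome(p))        # [3, 4, 1, 2], i.e. 3412
```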
### Content of the paper

In this paper, our main goal is to study the _fibres_ of the MVP outcome map. That is, for a given, fixed permutation \(\pi\in S_{n}\), we are interested in the set \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\) of parking functions whose (MVP) outcome is the permutation \(\pi\). The paper is organised as follows. In Section 2, we provide an interpretation of the MVP outcome fibre set \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\) in terms of certain subgraphs, called \(1\)-subgraphs, of the corresponding permutation inversion graph \(G_{\pi}\) (see Section 2.1 for a definition of these). We describe mappings from parking functions to \(1\)-subgraphs (Definition 2.2) and from \(1\)-subgraphs to parking functions (Definition 2.3), and show that these are inverse to each other when restricted to so-called _valid_ \(1\)-subgraphs (Theorem 2.5). We use this characterisation in Section 2.3 to give improved upper (Proposition 2.12) and lower (Proposition 2.13) bounds on the fibre sizes, by providing necessary or sufficient conditions for a \(1\)-subgraph to be valid. Section 3 is dedicated to a study of a certain subset of MVP parking functions in which each parking space is preferred by at most two cars. We call these _Motzkin_ parking functions due to a rich connection to Motzkin paths (see Theorem 3.2 and Theorem 3.4). These results generalise a bijection between Motzkin paths and parking functions whose MVP outcome is the decreasing permutation \(\mathrm{dec}^{n}:=n(n-1)\cdots 1\) established by Harris _et al._ [17, Theorem 4.2]. Inspired by this, we apply the constructions of Section 2 to obtain a bijection between the decreasing fibre set \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\mathrm{dec}^{n}\right)\) and the set of non-crossing matching arc-diagrams on \(n\) vertices (Theorem 3.6). In Section 4 we study another family of MVP parking functions, namely those whose outcome permutation \(\mathrm{bipart}^{m,n}\) corresponds to the complete bipartite graph \(K_{m,n}\), i.e. \(\mathrm{bipart}^{m,n}:=(n+1)(n+2)\cdots(n+m)12\cdots n\). We give a closed formula enumerating the MVP outcome fibre in this case when \(n=2\) (Theorem 4.2). Section 5 connects MVP parking functions with recurrent configurations of the _Abelian sandpile model_ (ASM) on complete graphs (Theorem 5.4). We provide an algorithmic process (Algorithm 5.5) to track the bumpings that occur in the MVP parking process through the corresponding recurrent configuration. This then allows us to interpret the MVP outcome map by combining this process with a classical notion of "canonical toppling" in the ASM (Theorem 5.9). Finally, Section 6 summarises our results and discusses some possible future research directions.

## 2. General case

In this section we study the MVP outcome map in the general setting. We will give an interpretation of the fibres in terms of certain subgraphs of the inversion graph of the outcome permutation.

### Inversion graphs and subgraphs

Given a permutation \(\pi=\pi_{1}\cdots\pi_{n}\in S_{n}\), we say that a pair \((j,i)\) is an _inversion_ of \(\pi\) if \(j<i\) and \(\pi_{j}>\pi_{i}\). We denote \(\mathrm{Inv}\left(\pi\right)\) the set of inversions of \(\pi\). For any \(i\in[n]\) we define the set of _left-inversions_ at \(i\) in \(\pi\) by \(\mathrm{LeftInv}_{\pi}\left(i\right):=\{j\in[n];\,(j,i)\in\mathrm{Inv}\left(\pi\right)\}\). The _inversion graph_ of a permutation \(\pi\), denoted \(G_{\pi}\), is the graph with vertex set \([n]\) and edge set \(\mathrm{Inv}\left(\pi\right)\). It will be convenient to represent permutations and their inversion graphs graphically in an \(n\times n\) grid. We label rows and columns \(1,\cdots,n\) from top to bottom and left to right respectively. The graphical representation of a permutation \(\pi\) consists of placing a dot in row \(\pi_{i}\) and column \(i\), for each \(i\in[n]\).
The edges of the corresponding inversion graph are then pairs of dots where one is above and to the right of the other. We may sometimes think of edges \((j,i)\) with \(j<i\) as directed from \(j\) to \(i\) (i.e. from left to right), and refer to them as _arcs_.

**Example 2.1**.: Consider the permutation \(\pi=42315\). The inversions are the pairs of indices \((1,2)\), \((1,3)\), \((1,4)\), \((2,4)\) and \((3,4)\). Figure 3 shows the graphical representations of \(\pi\) and of its inversion graph.

Figure 3. The permutation \(\pi=42315\) and its inversion graph \(G_{\pi}\).

We will use certain subgraphs of inversion graphs to represent MVP parking functions. Here, subgraphs are considered to be vertex-spanning, so that a subgraph is simply a subset of edges of the original graph. For a permutation \(\pi\) and corresponding inversion graph \(G_{\pi}\), we define \(\operatorname{Sub}^{1}\left(G_{\pi}\right):=\{S\subseteq\operatorname{Inv}\left(\pi\right);\;\forall i\in[n],\,|\{j\in[n];\,(j,i)\in S\}|\leq 1\}\). In words, this is the set of subgraphs of \(G_{\pi}\) in which the number of incident _left-arcs_ at any vertex is at most \(1\). We refer to elements of \(\operatorname{Sub}^{1}\left(G_{\pi}\right)\) as _\(1\)-subgraphs_ of \(G_{\pi}\). Figure 4 below shows the three \(1\)-subgraphs of \(G_{231}\) in the three left-most graphs. The right-most graph (crossed-out) is not a \(1\)-subgraph, since the vertex in column \(i=3\) has two incident left-arcs.

Figure 4. The three \(1\)-subgraphs of the inversion graph \(G_{231}\); the right-most is not a \(1\)-subgraph as there are two left edges incident to \(i=3\).

### Connecting \(1\)-subgraphs to the MVP outcome map

In this section, we explain how to represent parking functions in the MVP outcome fibre of a given permutation \(\pi\) via \(1\)-subgraphs of the permutation's inversion graph, and vice versa.

**Definition 2.2**.: Let \(\pi\in S_{n}\) be a permutation. We define a map \(\Psi_{\operatorname{PF}\to\operatorname{Sub}}:\mathcal{O}_{\operatorname{MVP}_{n}}^{-1}\left(\pi\right)\to\operatorname{Sub}^{1}\left(G_{\pi}\right)\), \(p\mapsto S(p)\) as follows:

\[S(p):=\{(j,i)\in\operatorname{Inv}\left(\pi\right);\,p_{\pi_{i}}=j\}. \tag{1}\]

In words, if the car \(\pi_{i}\) that ends up in spot \(i\) initially preferred some spot \(j<i\) in the parking function \(p\) (so was eventually bumped to \(i\) in the MVP parking process), then we put an edge from \(j\) to \(i\) in \(S(p)\). Note that for this bumping to occur, the car \(\pi_{j}\) which eventually ends up in spot \(j\) must enter the car park after car \(\pi_{i}\), which exactly means that \((j,i)\) is an inversion of \(\pi\) (i.e. an edge of \(G_{\pi}\)). Moreover, since exactly one car ends up in any given spot \(i\), there is at most one left-arc incident to \(i\) in \(S(p)\) (in the case where \(p_{\pi_{i}}=i\), i.e. the car that ends up in \(i\) wanted to park there, we have no incident left-arc), so that \(S(p)\) is indeed a \(1\)-subgraph of \(G_{\pi}\), as desired.
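As a computational aside (our sketch, reusing `mvp_outcome` from the earlier snippet), Equation (1) translates directly into code. On the parking function \(p=(3,1,1,2)\) of Example 1.1, whose MVP outcome is \(3412\), it returns the single arc \((1,4)\), recording that car \(2\) preferred spot \(1\) but ended up in spot \(4\):

```python
def psi_pf_to_sub(p):
    """S(p) = {(j, i) : (j, i) inversion of pi and p_{pi_i} = j}, Eq. (1)."""
    pi = mvp_outcome(p)                  # outcome permutation, as a list
    S = set()
    for i in range(1, len(p) + 1):
        j = p[pi[i - 1] - 1]             # preferred spot of the car in spot i
        if j < i:                        # (j, i) is then an inversion of pi
            S.add((j, i))
    return S

print(psi_pf_to_sub((3, 1, 1, 2)))       # {(1, 4)}
```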
We can then define an inverse for this map, going from subgraphs back to parking functions, as follows.

**Definition 2.3**.: Let \(\pi\in S_{n}\) be a permutation. We define a map \(\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}:\mathrm{Sub}^{1}\left(G_{\pi}\right)\rightarrow\mathrm{MVP}_{n}\), \(S\mapsto p=p(S)\) as follows:

\[p_{\pi_{i}}=\begin{cases}i&\text{if}\quad\left|\{j\in[n];\,(j,i)\in S\}\right|=0,\\ j&\text{if}\quad j\text{ is the unique }j<i\text{ such that }(j,i)\in S.\end{cases} \tag{2}\]

In words, if there is no left-arc incident to \(i\) in \(S\), we set \(p_{\pi_{i}}=i\). Otherwise, since \(S\) is a \(1\)-subgraph, there is a unique left-arc \((j,i)\) incident to \(i\) in \(S\), and we set \(p_{\pi_{i}}=j\).

**Example 2.4**.: Consider the permutation \(\pi=34125\) and the \(1\)-subgraph \(S\in\mathrm{Sub}^{1}\left(G_{\pi}\right)\) consisting of the arcs \((2,3)\) and \((2,4)\) as in Figure 5. We calculate \(p:=\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(S)\) as follows. First, let us determine \(p_{1}\), the preference of car \(1\). Note that \(1=\pi_{3}\) here, so we are looking at the vertex in row \(1\), column \(3\) (labelled \(3\) in our inversion graph labelling). Here there is a left-arc incident to this vertex, whose left end-point is in column \(2\), yielding \(p_{1}=2\). Similarly, \(p_{2}=2\) also, since there is a left-arc incident to the dot in row \(2\), column \(4\), whose left end-point is also in column \(2\). However, the dot in row \(3\), column \(1\), has no incident left-arc, and neither does the dot in row \(4\), column \(2\), or the dot in row \(5\), column \(5\). We therefore set \(p_{3}=1\), \(p_{4}=2\), and \(p_{5}=5\). Finally, we get the preference \(p(S)=(p_{1},p_{2},p_{3},p_{4},p_{5})=(2,2,1,2,5)\).

Figure 5. A \(1\)-subgraph \(S\) of \(G_{34125}\); the corresponding parking function is \(\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(S)=(2,2,1,2,5)\).

Our main result of this section is the following.

**Theorem 2.5**.: _Let \(\pi\in S_{n}\) be a permutation. The maps \(\Psi_{\mathrm{PF}\rightarrow\mathrm{Sub}}:\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\rightarrow\mathrm{Sub}^{1}\left(G_{\pi}\right)\) and \(\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}:\mathrm{Sub}^{1}\left(G_{\pi}\right)\rightarrow\mathrm{MVP}_{n}\) are injective. Moreover, for any \(p\in\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\), we have \(\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(\Psi_{\mathrm{PF}\rightarrow\mathrm{Sub}}(p))=p\)._

Proof.: We first show that for any \(p\in\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\), we have \(\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(\Psi_{\mathrm{PF}\rightarrow\mathrm{Sub}}(p))=p\). This follows essentially from the constructions in Definitions 2.2 and 2.3. Let \(p\in\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\), and define \(S:=\Psi_{\mathrm{PF}\rightarrow\mathrm{Sub}}(p)\) and \(p^{\prime}:=\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(S)\). We wish to show that \(p^{\prime}=p\). Fix some spot \(i\in[n]\), and consider the car \(\pi_{i}\) which ends up in spot \(i\) in the MVP parking process for \(p\). There are two cases to consider.

**Case 1:** car \(\pi_{i}\) ended up in its preferred spot. By definition, this means that \(p_{\pi_{i}}=i\). Moreover, in this case, by construction there is no left edge incident to \(i\) in \(S\) (the car ending up in spot \(i\) originally preferred that spot). Therefore, by Definition 2.3, we have \(p^{\prime}_{\pi_{i}}=i=p_{\pi_{i}}\), as desired.

**Case 2:** car \(\pi_{i}\) had initially preferred some spot \(j<i\), and finally ended up in \(i\) after (possibly multiple) bumpings. Then by definition we have \(p_{\pi_{i}}=j\). Moreover, in this case, by construction we put an edge \((j,i)\) in \(S\). Finally, by Definition 2.3, we have \(p^{\prime}_{\pi_{i}}=j=p_{\pi_{i}}\), as desired.
The equality \(\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(\Psi_{\mathrm{PF}\rightarrow\mathrm{Sub}}(p))=p\) for all \(p\in\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\) further implies the injectivity of the map \(\Psi_{\mathrm{PF}\rightarrow\mathrm{Sub}}\) on \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\), since this map admits a left inverse. It remains to be shown that \(\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}\) is injective on its domain \(\mathrm{Sub}^{1}\left(G_{\pi}\right)\). For this, fix \(S,S^{\prime}\in\mathrm{Sub}^{1}\left(G_{\pi}\right)\) two \(1\)-subgraphs such that \(S\neq S^{\prime}\). Define \(p:=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S)\) and \(p^{\prime}:=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S^{\prime})\). We wish to show that \(p\neq p^{\prime}\). Since \(S\neq S^{\prime}\), we may assume that there exists some edge \((j,i)\) with \(j<i\) which is in \(S\) but not in \(S^{\prime}\). By construction we have \(p_{\pi_{i}}=j\) in this case. Moreover, depending on whether \(i\) has an incident left edge in \(S^{\prime}\), say to \(j^{\prime}\neq j\), or no incident left edge at all, we will have either \(p^{\prime}_{\pi_{i}}=j^{\prime}\) or \(p^{\prime}_{\pi_{i}}=i\). In both cases \(p^{\prime}_{\pi_{i}}\neq p_{\pi_{i}}\), and thus \(p^{\prime}\neq p\), as desired.

As such, for a given permutation \(\pi\), the map \(\Psi_{\mathrm{PF}\to\mathrm{Sub}}\) induces a bijection from the fibre \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\) into its image \(\Psi_{\mathrm{PF}\to\mathrm{Sub}}\left(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\right)\). The question of calculating the fibre \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\) then becomes that of calculating the image set, or, equivalently, calculating the set of \(1\)-subgraphs \(S\) of \(G_{\pi}\) such that \(\mathcal{O}_{\mathrm{MVP}_{n}}\left(\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S)\right)=\pi\).

**Definition 2.6**.: We say that a \(1\)-subgraph \(S\in\mathrm{Sub}^{1}\left(G_{\pi}\right)\) is _valid_ if \(\mathcal{O}_{\mathrm{MVP}_{n}}\left(\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S)\right)=\pi\). Otherwise, we say that \(S\) is _invalid_. We denote \(\mathrm{Valid}\left(G_{\pi}\right)\) the set of valid \(1\)-subgraphs of \(G_{\pi}\).

With this terminology, Theorem 2.5 states that the map \(\Psi_{\mathrm{PF}\to\mathrm{Sub}}\) is a bijection from the fibre set \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\) to the set \(\mathrm{Valid}\left(G_{\pi}\right)\) of valid \(1\)-subgraphs of \(G_{\pi}\). In particular, we get the following enumeration and upper bound.

**Corollary 2.7**.: _For any permutation \(\pi\in S_{n}\), we have_

\[\left|\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\right|=\left|\mathrm{Valid}\left(G_{\pi}\right)\right|\leq\left|\mathrm{Sub}^{1}\left(G_{\pi}\right)\right|=\prod_{i\in[n]}\big(1+\left|\mathrm{LeftInv}_{\pi}\left(i\right)\right|\big).\]

In fact, the upper bound of Corollary 2.7 was already established by Harris _et al._ [17, Theorem 3.1], although its formulation in terms of inversions is new (in the cited work, it is stated in terms of cars/spots). In the same paper ([17, Theorem 3.2]), the authors also determined a full characterisation of when this upper bound is tight, which we again reformulate in the subgraph context. This characterisation is in terms of permutation _patterns_.
Let \(\pi\in S_{n}\), \(\tau\in S_{k}\) be two permutations with \(k\leq n\). We say that \(\pi\) _contains_ the _pattern_ \(\tau\) if there exist indices \(i_{1}<i_{2}<\cdots<i_{k}\) such that \(\pi_{i_{1}},\pi_{i_{2}},\cdots,\pi_{i_{k}}\) appear in the same relative order as \(\tau\). If \(\pi\) does not contain the pattern \(\tau\), we say that \(\pi\) _avoids_ \(\tau\). For example, the permutation \(\pi=561243\) contains two occurrences of the pattern \(321\) (in **bold**): \(\mathbf{5}612\mathbf{43}\) and \(5\mathbf{6}12\mathbf{43}\). However, it avoids the pattern \(4321\) since it has no subsequence of \(4\) entries in decreasing order. The inversion graph \(G_{\pi}\) of a permutation \(\pi\) is acyclic if, and only if, \(\pi\) avoids the patterns \(321\) and \(3412\), which correspond to cycles of length \(3\) and \(4\) respectively (see e.g. [4]).

**Theorem 2.8**.: _Let \(\pi\in S_{n}\) be a permutation. Then all \(1\)-subgraphs of \(G_{\pi}\) are valid if, and only if, \(\pi\) avoids the patterns \(321\) and \(3412\), or equivalently if the graph \(G_{\pi}\) is acyclic. In particular, in that case, we have \(\left|\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\right|=\prod\limits_{i\in[n]}\big(1+\left|\mathrm{LeftInv}_{\pi}\left(i\right)\right|\big)\)._

**Example 2.9**.: Consider the permutation \(\pi=312\). The four \(1\)-subgraphs of \(G_{\pi}\) are shown in Figure 6. Since \(G_{\pi}\) is acyclic, all four of these are valid. Therefore to calculate the fibre set \(\mathcal{O}_{\mathrm{MVP}_{3}}^{-1}\left(312\right)\) we simply apply the map \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\) to each subgraph in turn. Finally, we get \(\mathcal{O}_{\mathrm{MVP}_{3}}^{-1}\left(312\right)=\{(2,3,1),(1,3,1),(2,1,1),(1,1,1)\}\).

Figure 6. The four \(1\)-subgraphs of the inversion graph \(G_{312}\), which are all valid.

**Example 2.10**.: Consider the permutation \(\pi=321\), and the \(1\)-subgraph \(S\) with edges \((1,2)\) and \((1,3)\) (see Figure 7 below). The corresponding parking function is \(p=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S)=(1,1,1)\). But we saw in Example 2.9 above that \(\mathcal{O}_{\mathrm{MVP}_{3}}\left(p\right)=312\neq 321\). As such, the \(1\)-subgraph \(S\) is invalid.

Figure 7. An example of an invalid \(1\)-subgraph of \(G_{321}\).

Another useful feature of the subgraph representation introduced in this section is that it allows certain statistics of parking functions to be easily read from the corresponding subgraph. Given a parking function \(p\), we define the _displacement_ of car \(i\) in \(p\) as the number of spots car \(i\) ends up from its original preference, i.e. \(|p_{i}-\pi_{i}^{-1}|\). The displacement of \(p\), denoted \(\operatorname{disp}_{\operatorname{MVP}}(p)\), is simply the sum of the displacements of all cars in \(p\). We have the following.

**Proposition 2.11**.: _Let \(p\in\operatorname{MVP}_{n}\) be a parking function, and \(S:=\Psi_{\operatorname{PF}\to\operatorname{Sub}}(p)\) the corresponding \(1\)-subgraph. Then we have \(\operatorname{disp}_{\operatorname{MVP}}(p)=\sum\limits_{(j,i)\in S}(i-j)\)._

Proof.: Given an MVP parking function \(p\), let \(S:=\Psi_{\operatorname{PF}\to\operatorname{Sub}}(p)\) be the corresponding \(1\)-subgraph. By construction, we have an edge \((j,i)\) (with \(j<i\)) if the car \(\pi_{i}\) that ends up in spot \(i\) had initially preferred spot \(j\). This means exactly that the displacement of car \(\pi_{i}\) is given by \((i-j)\). The result follows immediately from this observation.
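Combining \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\) with the MVP simulator gives a brute-force way to compute fibres and test validity. A sketch (ours; `mvp_outcome` is from the earlier snippet) that reproduces Example 2.9 and the count for \(\pi=321\):

```python
from itertools import product

def fibre(pi):
    """All MVP parking functions with outcome pi, via valid 1-subgraphs."""
    n = len(pi)
    # left-inversion lists: LeftInv(i) = {j < i : pi_j > pi_i}
    left = [[j for j in range(1, i) if pi[j - 1] > pi[i - 1]]
            for i in range(1, n + 1)]
    upper_bound = 1
    for L in left:
        upper_bound *= 1 + len(L)       # Corollary 2.7's product formula
    result = []
    # a 1-subgraph chooses, for each i, at most one incident left-arc
    for choice in product(*[[None] + L for L in left]):
        p = [0] * n
        for i, j in enumerate(choice, start=1):
            p[pi[i - 1] - 1] = i if j is None else j   # Definition 2.3
        if mvp_outcome(p) == list(pi):                 # validity check
            result.append(tuple(p))
    return result, upper_bound

print(fibre((3, 1, 2)))  # ([(2,3,1), (2,1,1), (1,3,1), (1,1,1)], 4)
print(fibre((3, 2, 1)))  # only 4 valid subgraphs, versus the bound of 6
```

For \(\pi=312\) the bound is attained, in line with Theorem 2.8, while for \(\pi=321\) (whose inversion graph is not acyclic) two of the six \(1\)-subgraphs are invalid, including the one of Example 2.10.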
### Improved bounds on the fibre sizes

In this part, we improve the upper bound from Corollary 2.7, and also give a lower bound for the fibre size. We call a \(1\)-subgraph \(S\) _\(\overrightarrow{P_{2}}\)-free_ if there is no triple \(i<j<k\) such that \((i,j)\) and \((j,k)\) are both edges in \(S\).

**Proposition 2.12**.: _Let \(\pi\in S_{n}\) be a permutation, and \(S\in\operatorname{Sub}^{1}\left(G_{\pi}\right)\) a \(1\)-subgraph of \(G_{\pi}\). If \(S\) is valid, then \(S\) is \(\overrightarrow{P_{2}}\)-free. In particular, we have \(\left|\mathcal{O}_{\operatorname{MVP}_{n}}^{-1}\left(\pi\right)\right|\leq|\{S\in\operatorname{Sub}^{1}\left(G_{\pi}\right);\,S\text{ is }\overrightarrow{P_{2}}\text{-free}\}|\)._

Proof.: We proceed by contraposition. Suppose that \(S\in\operatorname{Sub}^{1}\left(G_{\pi}\right)\) is not \(\overrightarrow{P_{2}}\)-free. Define \(p=\Psi_{\operatorname{Sub}\to\operatorname{PF}}(S)\) to be the corresponding parking function, and \(\pi^{\prime}:=\mathcal{O}_{\operatorname{MVP}_{n}}\left(p\right)\) to be its MVP outcome. We wish to show that \(\pi^{\prime}\neq\pi\). By definition, there is a triple \(i<j<k\) such that \((i,j)\) and \((j,k)\) are both edges in \(S\) (see Figure 8). Since edges are inversions in \(\pi\), this implies that \(\pi_{i}>\pi_{j}>\pi_{k}\), i.e. that car \(\pi_{k}\) arrives earlier than car \(\pi_{j}\), and car \(\pi_{j}\) arrives earlier than car \(\pi_{i}\). Without loss of generality, assume that \(i\) is the left-most vertex of such a "chain", i.e. that the vertex \(i\) has no incident left edge in \(S\). This implies by Definition 2.3 that \(p_{\pi_{i}}=i\). Also by this definition, we get \(p_{\pi_{j}}=i\) (because \((i,j)\) is an edge of \(S\)), and \(p_{\pi_{k}}=j\). In particular we have \(p_{\pi_{i}}=p_{\pi_{j}}\). Since \(\pi_{i}>\pi_{j}\), this implies that car \(\pi_{j}\) will be bumped out of its spot at the latest by car \(\pi_{i}\) (it may be bumped first by another car also preferring the same spot). However, when this bumping occurs, spot \(j\) was necessarily already occupied, either by car \(\pi_{k}\) (since \(p_{\pi_{k}}=j\)), or by a car which arrived later and bumped car \(\pi_{k}\). Hence, when car \(\pi_{j}\) is bumped out of its preferred spot, it could not finally park in spot \(j\). This implies that the car \(\pi_{j}^{\prime}\) which ends up occupying spot \(j\) in the MVP parking process for \(p\) cannot be \(\pi_{j}\), i.e. \(\pi_{j}^{\prime}\neq\pi_{j}\), and thus \(\pi^{\prime}\neq\pi\), as desired.

Figure 8. A triple \(i<j<k\) in a \(1\)-subgraph \(S\).

We now give a lower bound on the fibre sizes. We say that a \(1\)-subgraph \(S\in\mathrm{Sub}^{1}\left(G_{\pi}\right)\) is _horizontally separated_ (HS) if for any pair of edges \((j,i)\) and \((j^{\prime},i^{\prime})\) of \(S\), we either have \(i<j^{\prime}\) or \(i^{\prime}<j\). In words, there is no pair of edges in \(S\) which "overlap horizontally" in the graphical representation, end-points included. For example, in Figure 9, Cases (A) and (B) are HS with the condition \(i<j^{\prime}\) and \(i^{\prime}<j\) respectively, while Cases (C) and (D) are not HS since \(i^{\prime}>j\) and \(i>j^{\prime}\) respectively.

Figure 9. Examples of HS and non-HS graphs: (A) and (B) are HS; (C) and (D) are not HS.

**Proposition 2.13**.: _Let \(\pi\in S_{n}\) be a permutation, and \(S\in\mathrm{Sub}^{1}\left(G_{\pi}\right)\) a \(1\)-subgraph of \(G_{\pi}\). If \(S\) is HS, then \(S\) is valid.
In particular, we have \(\left|\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\right|\geq\left|\{S\in\mathrm{Sub}^{1}\left(G_{\pi}\right);\,S\text{ is HS}\}\right|\)._

Proof.: Let \(S\in\mathrm{Sub}^{1}\left(G_{\pi}\right)\) be HS, \(p:=\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(S)\) the corresponding parking function, and \(\pi^{\prime}=\mathcal{O}_{\mathrm{MVP}_{n}}\left(p\right)\) its outcome. We wish to show that \(\pi^{\prime}=\pi\). We proceed by induction on the number \(k\geq 0\) of edges of \(S\). For \(k=0\), \(S\) is the empty subgraph with no edges. By construction, this means that \(p_{\pi_{i}}=i\) for all \(i\in[n]\). In particular, all cars have distinct preferences. As such, there are no bumpings/collisions, and every car ends up in its preferred spot in the outcome \(\pi^{\prime}\), i.e. \(\pi_{i}^{\prime}=\pi_{i}\) for all \(i\in[n]\), as desired.

Suppose now that the result is proved for any HS subgraph \(S^{\prime}\) with at most \(k-1\) edges, and that \(S\) is a HS subgraph with \(k\) edges. Let \((j,i)\), with \(j<i\) and \(\pi_{j}>\pi_{i}\), denote the right-most edge in \(S\), i.e. (given the HS property) all other edges \((j^{\prime},i^{\prime})\) with \(j^{\prime}<i^{\prime}\) must satisfy \(i^{\prime}<j\). Let \(S^{\prime}\) denote the subgraph \(S\) with the edge \((j,i)\) removed. By construction, \(S^{\prime}\) is HS. We define \(p^{\prime}:=\Psi_{\mathrm{Sub}\rightarrow\mathrm{PF}}(S^{\prime})\) to be the corresponding parking function, whose outcome is \(\pi\) by the induction hypothesis (\(S^{\prime}\) is valid). By construction, we have \(p_{\pi_{i}}=j\), \(p^{\prime}_{\pi_{i}}=i\), and \(p_{k}=p^{\prime}_{k}\) for all other values of \(k\). Moreover, by the HS assumption, cars \(\pi_{i}\) and \(\pi_{j}\) are the only cars to prefer spot \(j\) in \(p\). Now consider the MVP parking process for \(p\). Up to the arrival of car \(\pi_{j}\), the only difference compared with the process for \(p^{\prime}\) lies in the fact that car \(\pi_{i}\) occupies spot \(j\) instead of spot \(i\). Otherwise the process is identical, and in particular spot \(i\) is unoccupied at that point. When car \(\pi_{j}\) arrives, it parks in spot \(j\), and bumps \(\pi_{i}\). Similarly, after the arrival of \(\pi_{j}\), the processes in \(p\) and \(p^{\prime}\) are the same, except perhaps for what happens to car \(\pi_{i}\). It therefore suffices to show that \(\pi_{i}\) will finally end up in spot \(i\). By the preceding remark, spot \(i\) is available when car \(\pi_{j}\) arrives, so car \(\pi_{i}\) will first park in some available spot \(i^{\prime}\leq i\). But by the HS assumption on \(S\), all the vertices between \(j\) and \(i\) are isolated. This means that \(p_{\pi_{i^{\prime}}}=i^{\prime}\) for all \(i^{\prime}\in[j+1,i-1]\). In particular, all such spots \(i^{\prime}\) are either already occupied when \(\pi_{i}\) is first bumped (if \(\pi_{i^{\prime}}<\pi_{j}\)), or the arrival of car \(\pi_{i^{\prime}}\) will subsequently bump car \(\pi_{i}\) out of the spot \(i^{\prime}\) (if \(\pi_{i^{\prime}}>\pi_{j}\)). This implies that car \(\pi_{i}\) cannot end up in such a spot \(i^{\prime}\), and therefore it can only end up in \(i\), as desired.

Note that any subgraph consisting of a single arc is horizontally separated, as is the empty subgraph. As such, Proposition 2.13 implies in particular that \(\left|\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\pi\right)\right|\geq 1+\left|\mathrm{Inv}\left(\pi\right)\right|\). One might ask how tight the bounds of Propositions 2.12 and 2.13 are. This is discussed in Section 6 at the end of the paper.
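Both bounds are likewise easy to test numerically. A sketch (ours, building on the previous snippets, where `fibre` was defined) comparing them with the exact fibre size for \(\pi=321\):

```python
from itertools import product, combinations

def one_subgraphs(pi):
    """All 1-subgraphs of G_pi, as sets of inversion arcs (j, i), j < i."""
    n = len(pi)
    left = [[j for j in range(1, i) if pi[j - 1] > pi[i - 1]]
            for i in range(1, n + 1)]
    for choice in product(*[[None] + L for L in left]):
        yield {(j, i) for i, j in enumerate(choice, start=1) if j is not None}

def is_p2_free(S):
    # no directed path i -> j -> k among the arcs of S
    return not any(b == c for (a, b), (c, d) in product(S, S))

def is_hs(S):
    # every pair of arcs is horizontally separated
    return all(i < jj or ii < j for (j, i), (jj, ii) in combinations(S, 2))

def stats(pi):
    subs = list(one_subgraphs(pi))
    exact = len(fibre(pi)[0])           # uses fibre() from the earlier sketch
    return (exact,
            sum(map(is_hs, subs)),      # lower bound (Proposition 2.13)
            sum(map(is_p2_free, subs)), # upper bound (Proposition 2.12)
            len(subs))                  # upper bound (Corollary 2.7)

print(stats((3, 2, 1)))  # (4, 4, 5, 6)
```

For \(\pi=321\) the HS lower bound \(4=1+|\mathrm{Inv}(\pi)|\) is attained, and the \(\overrightarrow{P_{2}}\)-free bound (\(5\)) already improves on the bound of Corollary 2.7 (\(6\)).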
## 3. Motzkin parking functions

### Motzkin parking functions and Motzkin paths

We consider lattice paths starting from \((0,0)\) with steps \(U=(1,1)\) (upwards step), \(D=(1,-1)\) (downwards step), and \(H=(1,0)\) (horizontal step). A _Motzkin path_ is a lattice path with these steps ending at some point \((n,0)\) which never goes below the X-axis (see Figure 10). We denote \(\mathrm{Motz}_{n}\) the set of Motzkin paths ending at \((n,0)\) (i.e. with \(n\) steps). Motzkin paths are enumerated by the ubiquitous _Motzkin numbers_ and are in bijection with a number of different combinatorial objects (see e.g. [3] or [26]). Given a parking function \(p\in\mathrm{PF}_{n}\), we define a lattice path with \(n\) steps \(\Phi(p):=\phi_{1}\cdots\phi_{n}\) by:

\[\forall j\in[n],\quad\phi_{j}=\begin{cases}U&\text{if}\quad\left|\{i\in[n];\,p_{i}=j\}\right|\geq 2,\\ H&\text{if}\quad\left|\{i\in[n];\,p_{i}=j\}\right|=1,\\ D&\text{if}\quad\left|\{i\in[n];\,p_{i}=j\}\right|=0.\end{cases} \tag{3}\]

**Definition 3.1**.: Let \(p=(p_{1},\cdots,p_{n})\in\mathrm{PF}_{n}\). We say that \(p\) is a _Motzkin parking function_ if \(\forall j\in[n]\), \(\left|\{i\in[n];\,p_{i}=j\}\right|\leq 2\). We denote \(\mathrm{MotzPF}_{n}\) the set of Motzkin parking functions of length \(n\). In words, a Motzkin parking function is a parking function in which each spot is preferred by at most two cars. The terminology of Motzkin parking function comes from the following result.

**Theorem 3.2**.: _Let \(p\) be a parking preference. Then \(p\in\mathrm{MotzPF}_{n}\) if and only if \(\Phi(p)\) is a Motzkin path._

Proof.: Let \(p\in\mathrm{MotzPF}_{n}\), meaning that each spot is preferred by at most \(2\) cars in \(p\). Denote \(\phi:=\Phi(p)=\phi_{1}\cdots\phi_{n}\) the corresponding lattice path. We wish to show that \(\phi\) is a Motzkin path. Fix some index \(k\in[n]\) such that \(\phi_{k}=U\). This means that spot \(k\) is preferred by two cars, say \(i\), \(j\), with \(i<j\) (i.e., \(p_{i}=p_{j}=k\)). Because one spot can only hold one car, this means that after bumping, car \(j\) will park on spot \(k\), and the other car \(i\) will be bumped from the current spot \(k\) to the first available spot. Denote \(k^{\prime}\) the spot where it eventually ends up once all cars have parked. We must have \(k^{\prime}>k\), and we claim that spot \(k^{\prime}\) is preferred by no cars. Seeking contradiction, suppose that \(k^{\prime}\) is preferred by a car \(i^{\prime}\), and let \(j^{\prime}\) denote the last car to arrive in the parking sequence before car \(i\) parks in \(k^{\prime}\). On the one hand, if \(i^{\prime}<j^{\prime}\), then it is impossible for \(i\) to park in \(k^{\prime}\), since it would already be occupied at that point. On the other hand, if \(i^{\prime}>j^{\prime}\), then car \(i^{\prime}\) would bump \(i\) out of spot \(k^{\prime}\), so car \(i\) cannot end up in spot \(k^{\prime}\). As such, \(k^{\prime}\) is preferred by no cars, i.e. \(\phi_{k^{\prime}}=D\). We have therefore defined a map \(k\mapsto k^{\prime}\) which maps \(U\) steps in \(\phi\) to \(D\) steps, such that \(k<k^{\prime}\) for any \(U\) step \(k\). We claim that this map is bijective by exhibiting its inverse.
If \(k^{\prime}\in[n]\) is such that \(\phi_{k^{\prime}}=D\), we simply set \(k\) to be the spot which was initially preferred by the car which ends up in spot \(k^{\prime}\). By construction, that car did not initially prefer \(k^{\prime}\) (no car did, since \(\phi_{k^{\prime}}=D\)), and therefore its initial preference must be to the left of \(k^{\prime}\), i.e. \(k<k^{\prime}\). Moreover, since that car ends up in a spot that is not its first preference, it must have been bumped out, meaning that its initial preference \(k\) must be preferred by at least two cars, i.e. \(\phi_{k}=U\). This shows that the map \(k\mapsto k^{\prime}\) is indeed bijective, which implies that \(\phi\) is a Motzkin path, as desired.

Conversely, suppose that \(p\in\mathrm{MVP}_{n}\) is such that \(\phi=\Phi(p)\) is a Motzkin path. We wish to show that \(p\) is a Motzkin parking function. Let \(\mathcal{U}:=\{k\in[n];\,\phi_{k}=U\}\), resp. \(\mathcal{D}:=\{k\in[n];\,\phi_{k}=D\}\), denote the set of \(U\), resp. \(D\), steps in \(\phi\). For any \(k\in\mathcal{U}\), consider the set \(C_{k}\) of cars that prefer spot \(k\). We wish to show that \(|C_{k}|=2\) for all \(k\). Let \(i_{k}\) denote the car that ends up in spot \(k\) after all cars have parked. By construction \(i_{k}\in C_{k}\). As above, to each element \(j\neq i_{k}\) in \(C_{k}\), we can associate injectively a parking spot \(k_{j}>k\) where car \(j\) ends up, and this spot \(k_{j}\) must be preferred by no cars, i.e. \(k_{j}\in\mathcal{D}\). This therefore describes an injection \(\bigcup\limits_{k\in\mathcal{U}}\left(C_{k}\setminus\{i_{k}\}\right)\hookrightarrow\mathcal{D}\). Finally, we get:

\[|\mathcal{D}|=|\mathcal{U}|=\sum_{k\in\mathcal{U}}1\leq\sum_{k\in\mathcal{U}}\left(|C_{k}|-1\right)\leq|\mathcal{D}|,\]

where the equality \(|\mathcal{D}|=|\mathcal{U}|\) stems from the fact that \(\phi\) is a Motzkin path, the left inequality follows from the fact that \(|C_{k}|\geq 2\) by construction of the map \(\Phi\) (since \(\phi_{k}=U\)), and the right inequality follows from the injective map described above. For the left-most and right-most terms to be equal, we must therefore have that all terms in the sums are equal, i.e. that \(|C_{k}|=2\) for all \(k\in\mathcal{U}\), as desired.

**Example 3.3**.: Consider the Motzkin parking function \(p=(2,2,1,4,3,6,4,6)\in\mathrm{MotzPF}_{8}\). The corresponding Motzkin path is \(\Phi(p)=HUHUDUDD\). Indeed, spots \(1\) and \(3\) are preferred by one car, spots \(2\), \(4\) and \(6\) by two cars, and spots \(5\), \(7\) and \(8\) by no cars. We can check that \(\Phi(p)\), illustrated on Figure 10, is indeed a Motzkin path.

Figure 10. The Motzkin path \(\Phi(p)=HUHUDUDD\) corresponding to the Motzkin parking function \(p=(2,2,1,4,3,6,4,6)\).

The definition of the map \(\Phi\) in Equation (3) only depends on the number of cars which prefer each spot, and not on the labels of the cars in question. It is therefore natural to define an equivalence relation \(\sim\) on \(\mathrm{MotzPF}_{n}\) by \(p\sim p^{\prime}\) if \(p^{\prime}\) is obtained by permuting the preferences in \(p\). For example, the parking functions \((2,1,1,4)\) and \((1,4,1,2)\) are equivalent. We write \(\mathrm{MotzPF}_{n}/\!\sim\) for the set of equivalence classes of Motzkin parking functions. The above observation implies that \(\Phi\) is constant on the equivalence classes of \(\sim\), so that with slight abuse of notation, we can consider \(\Phi\) to be defined on the set \(\mathrm{MotzPF}_{n}/\!\sim\). We then have the following.
**Theorem 3.4**.: _The map \(\Phi:\mathrm{MotzPF}_{n}/\!\sim\;\rightarrow\mathrm{Motz}_{n}\) is a bijection._

Proof.: As discussed above, the map is well defined since \(\Phi\) is constant on the equivalence classes of \(\mathrm{MotzPF}_{n}\), and by Theorem 3.2 its image is in \(\mathrm{Motz}_{n}\). To show that it is a bijection, we exhibit its inverse. Given a Motzkin path \(\phi=\phi_{1}\cdots\phi_{n}\), we construct a parking function using the following algorithm.

1. Initialise \(i=k=1\).
2. While \(k\leq n\), do the following.
   1. If \(\phi_{k}=U\), set \(p_{i}=p_{i+1}=k\) and \(i=i+2\).
   2. If \(\phi_{k}=H\), set \(p_{i}=k\) and \(i=i+1\).
   3. If \(\phi_{k}=D\), do nothing (leave \(i\) unchanged).
   In all cases set \(k=k+1\).
3. Output \(p=(p_{1},\cdots,p_{n})\).

Since \(\phi\) has the same number of \(U\) and \(D\) steps, it is straightforward to check that this algorithm does indeed output a parking preference \(p\). Moreover, by construction of the map \(\Phi\), we have \(\Phi(p)=\phi\), i.e. this construction defines an inverse for the map \(\Phi\). It therefore remains to show that \(p\) is indeed a parking function. Note that \(p\) is non-decreasing by construction, i.e. \(p_{1}\leq\cdots\leq p_{n}\). As such, it is sufficient to show that for all \(i\in[n]\), we have \(p_{i}\leq i\) (see e.g. [29, Section 1.1]). For this, we first describe the construction of \(p\) from \(\phi\) in a slightly alternate form. We start from the empty sequence \(S\), and initialise \(s=1\) for the "current spot". Looking at the steps of the Motzkin path \(\phi\) from left to right, we put the value \(s\) into the sequence \(S\) twice if we encounter a \(U\) step, once if we encounter an \(H\) step, and zero times if we encounter a \(D\) step, then increase \(s\) by one, and repeat (moving to the next step of the Motzkin path). Because in any prefix \(\phi_{1}\cdots\phi_{k}\) of \(\phi\) the number of \(U\) steps must be greater than or equal to the number of \(D\) steps, this equivalent formulation implies that after \(k\) iterations of the above algorithm, the length of the sequence \(S\) must be greater than or equal to \(k\). These iterations correspond exactly to placing the values \(1,2,\cdots,k\) into the sequence \(S\) (since the current spot \(s\) increases by one at each iteration). In other words, we have \(|\{i\in[n];\,p_{i}\leq k\}|\geq k\) for all \(k\in[n]\). We claim that this implies that for any \(i\in[n]\), we have \(p_{i}\leq i\). Otherwise, suppose that \(p_{i}>i\) for some \(i\). Since \(p\) is non-decreasing, this means that \(\{j\in[n];\,p_{j}\leq i\}\subseteq[i-1]\). But by the above, the left-hand set should have cardinality at least \(i\), which yields the desired contradiction. Therefore we do indeed have \(p_{i}\leq i\) for all \(i\in[n]\), and so \(p\) is a parking function, as desired. This completes the proof.
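Both directions of this bijection are short to implement. A sketch (ours) that reproduces Example 3.3 and returns the non-decreasing representative of its equivalence class:

```python
def phi(p):
    """Equation (3): one step U/H/D per spot, from preference counts."""
    n = len(p)
    counts = [0] * (n + 1)
    for pref in p:
        counts[pref] += 1
    return "".join("U" if c >= 2 else "H" if c == 1 else "D"
                   for c in counts[1:])

def phi_inverse(path):
    """The algorithm above: the non-decreasing representative of the class."""
    p = []
    for k, step in enumerate(path, start=1):
        if step == "U":
            p += [k, k]     # spot k preferred by two cars
        elif step == "H":
            p += [k]        # spot k preferred by one car
    return tuple(p)         # D steps contribute no preferences

print(phi((2, 2, 1, 4, 3, 6, 4, 6)))   # 'HUHUDUDD', as in Example 3.3
print(phi_inverse("HUHUDUDD"))         # (1, 2, 2, 3, 4, 4, 6, 6)
```

The output of `phi_inverse` is equivalent under \(\sim\) to the \(p\) of Example 3.3, as expected.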
Theorem 3.4 can be viewed as a generalisation of the bijection between Motzkin paths and parking functions whose MVP outcome is the _decreasing permutation_ \(\operatorname{dec}^{n}:=n(n-1)\cdots 1\), which was established by Harris _et al._ [17]. We re-state that result in our context of Motzkin parking functions and their equivalence classes.

**Theorem 3.5** ([17, Theorem 4.2]).: _For any \(p\in\operatorname{MotzPF}_{n}\), there exists a unique \(p^{\prime}\in\operatorname{MotzPF}_{n}\) such that \(p\sim p^{\prime}\) and \(\mathcal{O}_{\operatorname{MVP}_{n}}(p^{\prime})=\operatorname{dec}^{n}\). In particular, \(\Phi\) induces a bijection from the decreasing fibre \(\mathcal{O}_{\operatorname{MVP}_{n}}^{-1}\left(\operatorname{dec}^{n}\right)\) to the set \(\operatorname{Motz}_{n}\) of Motzkin paths of length \(n\)._

### Non-crossing arc diagrams

Theorem 3.5 implies in particular that the decreasing fibres \(\mathcal{O}_{\operatorname{MVP}_{n}}^{-1}\left(\operatorname{dec}^{n}\right)\) are enumerated by the Motzkin numbers. In this part we give a new bijective explanation of this fact by using our subgraph representation from Section 2. Note that the inversion graph of the decreasing permutation \(\operatorname{dec}^{n}\) is just the complete graph \(K_{n}\) on \(n\) vertices, since all pairs are inversions in \(\operatorname{dec}^{n}\). We can therefore in a sense forget the geometry of the inversion graph, and it will be convenient to think of subgraphs of \(G_{\operatorname{dec}^{n}}\) as _arc diagrams_. An arc diagram is simply a subset of edges of \(G_{\operatorname{dec}^{n}}\), i.e. a set of (some) pairs \((j,i)\in[n]^{2}\) with \(j<i\). In this context, \(\operatorname{Sub}^{1}\left(G_{\operatorname{dec}^{n}}\right)\) is the set of arc diagrams on \([n]\) such that for any \(i\in[n]\) there is at most one arc \((j,i)\) with \(j<i\). We say that an arc diagram \(\Delta\in\operatorname{Sub}^{1}\left(G_{\operatorname{dec}^{n}}\right)\) is a _non-crossing matching_ if it satisfies the two following conditions.

1. **Matching condition**: for every vertex \(i\in[n]\), there is at most one arc incident to \(i\) in \(\Delta\).
2. **Non-crossing condition**: no two arcs of \(\Delta\) "cross", that is, there are no four vertices \(i<j<k<\ell\) such that \((i,k)\) and \((j,\ell)\) are both arcs in \(\Delta\).

We denote \(\mathrm{NonCross}_{n}\) the set of non-crossing matchings on \([n]\). It is well known that non-crossing matchings are enumerated by the Motzkin numbers. For a simple bijection between \(\mathrm{NonCross}_{n}\) and \(\mathrm{Motz}_{n}\), simply map \(\Delta\in\mathrm{NonCross}_{n}\) to \(\phi=\phi_{1}\cdots\phi_{n}\in\mathrm{Motz}_{n}\) by setting \(\phi_{i}=U\) if \(i\) is incident to a right-arc \((i,j)\) in \(\Delta\) with \(i<j\), \(\phi_{i}=D\) if \(i\) is incident to a left-arc \((j,i)\) in \(\Delta\) with \(j<i\), and \(\phi_{i}=H\) if \(i\) is an isolated vertex in \(\Delta\). Our main result of this section is the following, which gives an alternate proof of the enumerative consequence of Theorem 3.5.

**Theorem 3.6**.: _Let \(n\geq 1\). The map \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\) is a bijection from the set of non-crossing matchings \(\mathrm{NonCross}_{n}\) to the decreasing fibre \(\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\mathrm{dec}^{n}\right)\)._

**Example 3.7**.: Consider the non-crossing matching \(\Delta\) in Figure 11 below. We wish to compute the corresponding MVP parking function \(p:=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(\Delta)\). To get the parking preference \(p_{i}\) of car \(i\), we look at the vertex \(n+1-i\) (equivalently, the \(i\)-th vertex from the right). If there is a left-arc incident to that vertex, we set \(p_{i}\) to be the label of the left end-point \(j\) of that arc. Otherwise we set \(p_{i}=n+1-i\). We get the parking function \(p=(11,7,8,8,7,1,3,4,3,2,1)\). One can check that we do indeed have \(\mathcal{O}_{\mathrm{MVP}_{11}}\left(p\right)=\mathrm{dec}^{11}\), as desired.

Figure 11. An example of a non-crossing matching \(\Delta\) on \(11\) vertices. The corresponding parking function is \(p:=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(\Delta)=(11,7,8,8,7,1,3,4,3,2,1)\).
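Such checks can be mechanised. The following sketch simulates the MVP rule as it is used throughout (an arriving car always claims its preferred spot, and the bumped occupant drives on to the first free spot to its right); the function name is ours, and the code verifies Example 3.7.

```python
def mvp_outcome(p):
    """Sketch of the MVP parking process: an arriving car takes its
    preferred spot; the previous occupant, if any, is bumped and drives on
    to the first unoccupied spot to its right.  Returns the outcome
    permutation (car in spot 1, ..., car in spot n), or None if a bumped
    car runs off the end, i.e. p is not a parking function."""
    n = len(p)
    spots = [None] * (n + 1)  # spots[k] = car parked in spot k (1-indexed)
    for car, pref in enumerate(p, start=1):
        bumped, spots[pref] = spots[pref], car
        if bumped is not None:
            k = pref + 1
            while k <= n and spots[k] is not None:
                k += 1
            if k > n:
                return None
            spots[k] = bumped
    return tuple(spots[1:])

# Example 3.7: the outcome is the decreasing permutation dec^11.
p = (11, 7, 8, 8, 7, 1, 3, 4, 3, 2, 1)
assert mvp_outcome(p) == tuple(range(11, 0, -1))
```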
Proof of Theorem 3.6.: Note that we already know by Theorem 2.5 that the map is injective, so it suffices to show that \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\left(\mathrm{NonCross}_{n}\right)=\mathcal{O}_{\mathrm{MVP}_{n}}^{-1}\left(\mathrm{dec}^{n}\right)\), or equivalently, that \(\mathrm{Valid}\left(G_{\mathrm{dec}^{n}}\right)=\mathrm{NonCross}_{n}\).

We first show the inclusion \(\mathrm{Valid}\left(G_{\mathrm{dec}^{n}}\right)\subseteq\mathrm{NonCross}_{n}\). Let \(\Delta\in\mathrm{Valid}\left(G_{\mathrm{dec}^{n}}\right)\) be a valid \(1\)-arc diagram, and denote \(p=(p_{1},\cdots,p_{n}):=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(\Delta)\) the corresponding MVP parking function, satisfying \(\mathcal{O}_{\mathrm{MVP}_{n}}\left(p\right)=\mathrm{dec}^{n}\). To simplify notation, for \(i\in[n]\) we denote \(\bar{i}:=n+1-i\) the car that ends up in spot \(i\). We want to show that \(\Delta\) is a non-crossing matching. For this, we need to prove two conditions: the matching condition and the non-crossing condition.

**Matching condition.** Seeking a contradiction, suppose there is a vertex which is incident to two arcs. There are three cases to consider here.

Case (A): some vertex \(k\) is incident to two left-arcs. That is, there exist \(i<j<k\) such that \((i,k)\) and \((j,k)\) are arcs of \(\Delta\) (see Figure 12). This directly contradicts the condition that \(\Delta\) is a \(1\)-arc diagram (in other words, that only one car ends up in spot \(k\)), so there is nothing to show here.

Figure 12. Case where \(k\) is incident to two left-arcs.

Case (B): some vertex \(j\) is incident to both a left-arc and a right-arc. That is, there exist \(i<j<k\) such that \((i,j)\) and \((j,k)\) are arcs of \(\Delta\) (see Figure 13). In this case, \(\Delta\) is not \(\overrightarrow{P_{2}}\)-free, so by Proposition 2.12 \(\Delta\) cannot be valid, which is a contradiction.

Case (C): some vertex \(i\) is incident to two right-arcs. That is, there exist \(i<j<k\) such that \((i,j)\) and \((i,k)\) are arcs of \(\Delta\) (see Figure 14).

Figure 14. Case where \(i\) is incident to two right-arcs.

By Case (B), \(i\) cannot be incident to a left-arc in \(\Delta\). Therefore, in the parking function, we have \(p_{\bar{i}}=p_{\bar{j}}=p_{\bar{k}}=i\). Without loss of generality, we may assume that \(j\) and \(k\) are the first two vertices incident to \(i\), i.e. that there is no arc \((i,\ell)\) for \(\ell<k\) and \(\ell\neq j\). This is equivalent to cars \(\bar{k}\), \(\bar{j}\) and \(\bar{i}\) being the last three cars to prefer spot \(i\). Because \(\bar{k}<\bar{j}<\bar{i}\), car \(\bar{k}\) arrives and parks on spot \(i\) first, and then car \(\bar{j}\) parks on spot \(i\), bumping car \(\bar{k}\). Since car \(\bar{i}\) will subsequently bump car \(\bar{j}\) from spot \(i\) (no other car prefers spot \(i\) in between by the above assumption), and car \(\bar{j}\) needs to end up parking in spot \(j\), this implies that spot \(j\) must be free when car \(\bar{i}\) arrives. In particular, this implies that after being bumped from spot \(i\) by car \(\bar{j}\), car \(\bar{k}\) must park in some spot \(\ell\) with \(i<\ell<j\). But car \(\bar{k}\) should end up in spot \(k\).
For this to occur, it must therefore be bumped by another car, say \(\bar{k}^{\prime}\). According to the above, such a car must arrive after \(\bar{i}\), i.e. \(k^{\prime}<i\): spot \(j\) must still be free when \(\bar{i}\) arrives, but occupied when \(\bar{k}^{\prime}\) arrives, so that \(\bar{k}\) ends up in \(k\), "skipping" over spot \(j\). But by construction, the preferred spot of any car \(\bar{k}^{\prime}\) with \(k^{\prime}<i\) must be less than \(i\), which contradicts the fact that car \(\bar{k}^{\prime}\) would bump \(\bar{k}\) from a spot \(\ell>i\). This concludes the proof of the matching condition.

**Non-crossing condition.** We now show that if \(\Delta\) is a valid \(1\)-arc diagram, then \(\Delta\) is non-crossing. Seeking a contradiction, suppose that \(\Delta\) has an edge-crossing. That is, there exist \(i<j<k<\ell\) such that the edges \((i,k)\) and \((j,\ell)\) are both in the diagram \(\Delta\) (see Figure 15).

Figure 15. Case where the diagram \(\Delta\) contains an edge-crossing.

By construction, and applying Case (B) from the matching condition above, we have that \(p_{\bar{i}}=p_{\bar{k}}=i\) and \(p_{\bar{j}}=p_{\bar{\ell}}=j\). Moreover, fixing \(i\) and \(k\), we can take \(j\) to be the right-most vertex whose right-arc crosses the arc \((i,k)\), which also implies that \(k\) is the left-most vertex whose left-arc crosses the arc \((j,\ell)\). Therefore we can assume without loss of generality that for any \(m\) with \(j<m<k\), we have \(j<p_{\bar{m}}<k\). In other words, the cars arriving between cars \(\bar{k}\) and \(\bar{j}\) occupy exactly the spots between \(j\) and \(k\). Because \(i<j<k<\ell\) and \(\bar{i}>\bar{j}>\bar{k}>\bar{\ell}\), car \(\bar{\ell}\) first arrives and parks on spot \(j\) (since \(p_{\bar{\ell}}=j\)). Then car \(\bar{k}\) arrives and parks on spot \(i\) according to the preference \(p_{\bar{k}}=i\), which does not influence car \(\bar{\ell}\). However, when car \(\bar{j}\) arrives, it prefers spot \(j\) again, bumping car \(\bar{\ell}\) which was previously occupying spot \(j\). But by the assumption above, all spots between \(j\) and \(k\) have been occupied at this point (by cars arriving between \(\bar{k}\) and \(\bar{j}\)). Therefore, after being bumped by car \(\bar{j}\), car \(\bar{\ell}\) parks in some spot \(\ell^{\prime}\) with \(\ell^{\prime}\geq k\). In particular, this implies that after car \(\bar{j}\) has arrived, the spot \(k\) must be occupied (either by \(\bar{\ell}\) or by a previous car). As such, there is no way for car \(\bar{k}\) to subsequently occupy spot \(k\) after being bumped by car \(\bar{i}\), which yields the desired contradiction. This concludes the proof of the inclusion \(\operatorname{Valid}\left(G_{\operatorname{dec}^{n}}\right)\subseteq\operatorname{NonCross}_{n}\).

Before proving the converse, we first introduce some additional notation. We say that a non-crossing matching \(\Delta\in\operatorname{NonCross}_{n}\) is _prime_ if it contains the arc \((1,n)\). A non-crossing matching \(\Delta\) that is not prime can be uniquely decomposed into a disjoint union of _prime factors_: these are the non-crossing matchings induced on subsets \(\{i,\cdots,j\}\) such that \((i,j)\) is an arc of \(\Delta\) and there is no arc \((i^{\prime},j^{\prime})\) of \(\Delta\) with \(i^{\prime}<i\) and \(j<j^{\prime}\), together with the vertices of \(\Delta\) covered by no arc.
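A small sketch of this decomposition follows; the arcs are given as pairs, and the inner arcs in the example are illustrative choices of ours, consistent only with the factor intervals of Figure 16 below.

```python
def prime_factors(n, arcs):
    """Decompose a non-crossing matching on [n] into its prime factors:
    each outermost arc (i, j) contributes the factor {i, ..., j}, and each
    vertex covered by no arc is a factor on its own."""
    factors, k = [], 1
    while k <= n:
        covering = [(i, j) for (i, j) in arcs if i <= k <= j]
        if covering:
            i, j = min(covering)  # outermost arc through k (non-crossing)
            factors.append(list(range(i, j + 1)))
            k = j + 1
        else:
            factors.append([k])   # isolated, uncovered vertex
            k += 1
    return factors

# Factor intervals as in Figure 16 (inner arcs chosen for illustration):
arcs = [(1, 6), (2, 5), (7, 10), (8, 9)]
assert prime_factors(11, arcs) == [list(range(1, 7)), list(range(7, 11)), [11]]
```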
Figure 16 shows an example of the prime decomposition of a non-crossing matching, with different colours corresponding to the three different prime factors.

Figure 16. An example of the prime decomposition of a non-crossing matching. The prime factors are the matchings induced on the intervals \([1,6]\) (in blue), \([7,10]\) (in red), and the isolated vertex \(11\) (in green).

We are now equipped to show the converse inclusion \(\operatorname{NonCross}_{n}\subseteq\operatorname{Valid}\left(G_{\operatorname{dec}^{n}}\right)\). That is, we show that for any non-crossing matching \(\Delta\in\operatorname{NonCross}_{n}\), we have \(p=\Psi_{\operatorname{Sub}\to\operatorname{PF}}(\Delta)=(p_{1},\cdots,p_{n})\in\mathcal{O}_{\operatorname{MVP}_{n}}^{-1}\left(\operatorname{dec}^{n}\right)\) (i.e. \(\Delta\) is valid). We proceed by induction on \(n\geq 1\). For \(n=1\), the only non-crossing matching consists of a single isolated vertex \(1\), whose corresponding parking function is \(p=(1)\). We then have \(\mathcal{O}_{\operatorname{MVP}_{1}}\left(p\right)=1=\operatorname{dec}^{1}\), as desired.

For the induction step, fix \(n>1\) and suppose that the result holds for all \(k<n\). Let \(\Delta\in\operatorname{NonCross}_{n}\) be a non-crossing matching, and \(p:=\Psi_{\operatorname{Sub}\to\operatorname{PF}}(\Delta)=(p_{1},\cdots,p_{n})\) the corresponding parking function. If \(\Delta\) is not prime (does not contain the arc \((1,n)\)), then the result follows immediately by applying the induction hypothesis to each prime factor of \(\Delta\). Therefore it remains to consider the case where \((1,n)\) is an arc of \(\Delta\). This means that \(p_{\overline{1}}=p_{\overline{n}}=1\) by construction, i.e. cars \(1\) and \(n\) both prefer spot \(1\). Define \(\Delta^{\prime}\in\operatorname{NonCross}_{n-1}\) to be the non-crossing matching with vertex set \([n-1]\) obtained by deleting the vertex \(n\) and the arc \((1,n)\) from \(\Delta\), and let \(p^{\prime}:=\Psi_{\operatorname{Sub}\to\operatorname{PF}}(\Delta^{\prime})\) be the corresponding parking function. Note that we have \(p^{\prime}_{i}=p_{i}\) for all \(i\in[n-1]\).

We apply the induction hypothesis to \(\Delta^{\prime}\), which means that \(\mathcal{O}_{\operatorname{MVP}_{n-1}}\left(p^{\prime}\right)=\operatorname{dec}^{n-1}\). In other words, following the MVP parking process for \(p^{\prime}\), car \(\overline{i}\) ends up in spot \(i\) for all \(i\in[n-1]\). Now consider the MVP parking process for \(p\). Car \(1=\overline{n}\) is the first to enter, and parks in its preferred spot \(1\). Cars \(\overline{n-1},\cdots,\overline{2}\) then enter in that order. Since none of these cars prefers spot \(1\), car \(1\) does not move during this process. Moreover, since \(p_{i}=p^{\prime}_{i}\) for all \(i\in[n-1]\), the parking process of these cars for \(p\) follows exactly that for \(p^{\prime}\). In particular, car \(\overline{i}\) ends up in spot \(i\) for all \(i\in\{2,\cdots,n-1\}\). Finally, car \(n=\overline{1}\) enters the car park. It parks in its preferred spot \(1\), as desired. When it does so, it bumps car \(1\). By the above, spots \(2,\cdots,n-1\) have all been occupied at this point, so car \(1=\overline{n}\) parks in the (only) remaining spot \(n\), as desired. This concludes the proof.

**Remark 3.8**.: In the proof of the matching condition for a valid \(1\)-arc diagram \(\Delta\), we can ask which of the three cases (A), (B), (C) are still forbidden for a general permutation \(\pi\neq\operatorname{dec}^{n}\). Case (A) obviously still holds by definition of \(1\)-subgraphs. Case (B) also still holds in the sense of Proposition 2.12.
That is, there can be no three spots \(i<j<k\) with cars \(\pi_{i}>\pi_{j}>\pi_{k}\) such that \(p_{\pi_{k}}=j\) and \(p_{\pi_{j}}=p_{\pi_{i}}=i\). However, Case (C) no longer holds. Indeed, consider the parking preference \(p=(1,1,1,2)\). In this case \(\pi=\mathcal{O}_{\mathrm{MVP}_{4}}\left(p\right)=3421\). Figure 17 illustrates the corresponding subgraph \(S\in\mathrm{Sub}^{1}\left(G_{3421}\right)\), which has edges \((1,3)\) and \((1,4)\) with \(\pi_{4}<\pi_{3}<\pi_{1}\). The key difference is the presence of the car \(k^{\prime}=4\) which arrives last and bumps car \(1\) from spot \(2\) to spot \(4\). In the decreasing permutation case this could not occur.

Figure 17. The subgraph corresponding to the MVP parking function \((1,1,1,2)\).

**Remark 3.9**.: Similarly, we can see that the non-crossing condition does not hold for general valid \(1\)-subgraphs. Indeed, car \(\bar{\ell}\) could first be bumped to a spot between \(j\) and \(k\) and be subsequently bumped into spot \(\ell\) by a car arriving after \(\bar{i}\). For example, consider the parking preference \(p=(2,1,2,1,3)\), whose outcome is \(\pi=\mathcal{O}_{\mathrm{MVP}_{5}}\left(p\right)=43521\). Figure 18 illustrates the corresponding valid \(1\)-subgraph, which has edges \((1,4)\) and \((2,5)\) with \(\pi_{5}<\pi_{4}<\pi_{2}<\pi_{1}\) (corresponding to crossing edges in an arc diagram representation).

Figure 18. The subgraph corresponding to the MVP parking function \((2,1,2,1,3)\).

## 4. The complete bipartite case

In this section we study the case where the inversion graph is the complete bipartite graph. Given \(m,n\geq 1\), the _complete bipartite permutation_ is the permutation \(\mathrm{bipart}^{m,n}:=(n+1)(n+2)\cdots(n+m)12\cdots n\). In this case, the inversion graph \(G_{\mathrm{bipart}^{m,n}}\) is the complete bipartite graph \(K_{m,n}\), whose edges are all pairs \((i,m+j)\) for \(i\in[m]\) and \(j\in[n]\).

**Example 4.1**.: Consider the permutation \(\mathrm{bipart}^{4,3}=4567123\) for \(m=4\) and \(n=3\). Figure 19 shows one example of its \(1\)-subgraphs.

In this work, we focus on the case \(n=2\), and obtain the following enumerative formula for the MVP outcome fibre.

**Theorem 4.2**.: _For any \(m\geq 0\), we have \(|\mathcal{O}_{\mathrm{MVP}_{m+2}}^{-1}(\mathrm{bipart}^{m,2})|=m+1+\lfloor\frac{(m+1)^{2}}{2}\rfloor\)._

We will prove the theorem by induction on \(m\), by seeing how many "additional" valid \(1\)-subgraphs there are in \(G_{\mathrm{bipart}^{m+1,2}}\) compared to those in \(G_{\mathrm{bipart}^{m,2}}\).
**Lemma 4.3**.: _For \(m\geq 1\), set \(m^{\prime}:=m+1\). We define a map \(\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m,2}}\right)\to\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m^{\prime},2}}\right)\), \(S\mapsto S^{\prime}\), as follows. For a \(1\)-subgraph \(S\in\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m,2}}\right)\), we define a \(1\)-subgraph \(S^{\prime}\in\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m^{\prime},2}}\right)\) by increasing vertex labels in \(S\) by one, and inserting a new isolated vertex labelled \(1\) (see Figure 20 below). Then the map \(S\mapsto S^{\prime}\) is a bijection from the set of valid \(1\)-subgraphs of \(G_{\mathrm{bipart}^{m,2}}\) to the set of valid \(1\)-subgraphs of \(G_{\mathrm{bipart}^{m^{\prime},2}}\) where the vertex labelled \(1\) has no incident edge._

**Example 4.4**.: Consider the \(1\)-subgraph \(S\in\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m,2}}\right)\) with \(m=3\) as in Figure 20, Graph (A). Using the map \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\), the corresponding parking function is \(p=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S)=(2,1,1,2,3)\). By running the MVP parking process we get \(\mathcal{O}_{\mathrm{MVP}_{m+2}}\left(p\right)=34512=\mathrm{bipart}^{m,2}\), so \(S\) is valid. Increasing vertex labels in \(S\) by one and inserting a new isolated vertex labelled \(1\), we get a new \(1\)-subgraph \(S^{\prime}\in\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m^{\prime},2}}\right)\) with \(m^{\prime}=m+1=4\) as in Figure 20, Graph (B). For \(S^{\prime}\), the corresponding parking function is \(p^{\prime}=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S^{\prime})=(3,2,1,2,3,4)\). Running the MVP parking process, we get \(\mathcal{O}_{\mathrm{MVP}_{m^{\prime}+2}}\left(p^{\prime}\right)=345612=\mathrm{bipart}^{m^{\prime},2}\), so \(S^{\prime}\) is also valid, as desired.

Figure 20. Illustrating the construction from Lemma 4.3: (A) a valid \(1\)-subgraph \(S\in\operatorname{Sub}^{1}\left(G_{\operatorname{bipart}^{m,2}}\right)\) with \(m=3\); (B) the corresponding valid \(1\)-subgraph \(S^{\prime}\in\operatorname{Sub}^{1}\left(G_{\operatorname{bipart}^{m^{\prime},2}}\right)\) with \(m^{\prime}=m+1=4\), obtained by inserting an isolated vertex in column \(1\), row \(3\), and shifting other vertices in \(S\).

Proof of Lemma 4.3.: We first show that if \(S\) is valid, so is \(S^{\prime}\). Let \(p:=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S)\), resp. \(p^{\prime}:=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S^{\prime})\), denote the parking function corresponding to \(S\), resp. \(S^{\prime}\). We consider the MVP parking process for \(p^{\prime}\). By construction, since the vertex \(1\) is isolated in \(S^{\prime}\), only the car \(\mathrm{bipart}_{1}^{m^{\prime},2}=3\) prefers the spot \(1\). This means that car \(3\) will end up in spot \(1\). Moreover, the remaining \(m+2\) cars follow the same parking process as in \(p\), with their spots increased by one. Since \(S\) is valid, i.e. \(\mathcal{O}_{\mathrm{MVP}_{m+2}}\left(p\right)=\mathrm{bipart}^{m,2}\), this implies that \(\mathcal{O}_{\mathrm{MVP}_{m^{\prime}+2}}\left(p^{\prime}\right)=\mathrm{bipart}^{m^{\prime},2}\), i.e. \(S^{\prime}\) is valid, as desired.

We now claim that if \(S^{\prime}\in\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m^{\prime},2}}\right)\) is a valid \(1\)-subgraph where the vertex \(1\) is isolated, then deleting that vertex and decreasing labels of remaining vertices by one yields a \(1\)-subgraph \(S\in\operatorname{Sub}^{1}\left(G_{\operatorname{bipart}^{m,2}}\right)\) which is also valid. Since the operations of inserting and deleting an isolated vertex (with suitable re-labelling of others) are clearly inverses of each other, this suffices to complete the proof.

Suppose therefore that \(S^{\prime}\in\operatorname{Sub}^{1}\left(G_{\operatorname{bipart}^{m^{\prime},2}}\right)\) is a valid \(1\)-subgraph where the vertex \(1\) is isolated. Let \(S\) be the \(1\)-subgraph of \(G_{\operatorname{bipart}^{m,2}}\) obtained by deleting this vertex, and decreasing labels of other vertices by \(1\). As above, denote by \(p\) and \(p^{\prime}\) the associated parking functions. By construction, we have \(p^{\prime}_{3}=1\), and this is the only car preferring spot \(1\). Therefore, in \(p\), the parking process follows that of \(p^{\prime}\) where we have removed car \(3\) and spot \(1\) (with suitable re-labelling of remaining cars and spots). As such, since \(S^{\prime}\) is valid, it follows that \(S\) is also valid, as desired.
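Example 4.4 can also be checked mechanically. The sketch below (function names ours) computes \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\) from an arc set, using the convention visible in the proofs of this section: the car \(\pi_{k}\) that ends up in spot \(k\) prefers spot \(j\) if \((j,k)\) is an edge, and spot \(k\) otherwise. Validity is then tested with the `mvp_outcome` sketch from Section 3; the arcs \((2,4)\) and \((1,5)\) are our reading of Graph (A) of Figure 20.

```python
def subgraph_to_pf(pi, arcs):
    """Psi_{Sub->PF}: the car pi_k ending up in spot k prefers spot j if
    (j, k) is an edge of the subgraph, and its own spot k otherwise."""
    n = len(pi)
    pref = {k: k for k in range(1, n + 1)}
    for (j, k) in arcs:
        pref[k] = j
    p = [0] * (n + 1)
    for k in range(1, n + 1):
        p[pi[k - 1]] = pref[k]
    return tuple(p[1:])

def is_valid(pi, arcs):
    """A 1-subgraph is valid when the MVP outcome of its parking function
    is the permutation pi itself."""
    return mvp_outcome(subgraph_to_pf(pi, arcs)) == pi

pi = (3, 4, 5, 1, 2)  # bipart^{3,2}
assert subgraph_to_pf(pi, [(2, 4), (1, 5)]) == (2, 1, 1, 2, 3)
assert is_valid(pi, [(2, 4), (1, 5)])
```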
Lemma 4.3 implies that the number of valid \(1\)-subgraphs of \(G_{\operatorname{bipart}^{m+1,2}}\) where the vertex \(1\) is isolated is equal to the number of valid \(1\)-subgraphs of \(G_{\operatorname{bipart}^{m,2}}\). We now enumerate valid \(1\)-subgraphs of \(G_{\operatorname{bipart}^{m+1,2}}\) where vertex \(1\) has a single neighbour.

**Lemma 4.5**.: _For any \(j\in[2,m]\), there are two \(1\)-subgraphs in \(G_{\operatorname{bipart}^{m,2}}\) with one edge incident to \(j\) and another incident to \(1\). These are the subgraph \(S_{1}\) with edges \((1,m+1)\) and \((j,m+2)\), and the subgraph \(S_{2}\) with edges \((1,m+2)\) and \((j,m+1)\) (see Figure 21). For any \(j\in[2,m]\), exactly one of these two \(1\)-subgraphs is valid._

Figure 21. The two \(1\)-subgraphs in \(\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m,2}}\right)\) with one edge incident to \(1\) and another incident to \(j\in[2,m]\): (A) the subgraph \(S_{1}\) with non-crossing edges \((1,m+1)\) and \((j,m+2)\); (B) the subgraph \(S_{2}\) with crossing edges \((1,m+2)\) and \((j,m+1)\).

We will see that which subgraph is valid in fact depends on the parity of \(m+j\). Lemma 4.5 implies that there are exactly \(m-1\) valid \(1\)-subgraphs of \(G_{\operatorname{bipart}^{m,2}}\) with two edges, exactly one of which is incident to \(1\). To show the lemma, we consider the two cases from Lemmas 4.6 and 4.7.

**Lemma 4.6**.: _For \(j\in[2,m]\), let \(S_{1}\) be the \(1\)-subgraph of \(G_{\operatorname{bipart}^{m,2}}\) with the two non-crossing edges \((1,m+1)\) and \((j,m+2)\) (see Figure 21, Graph (A)). Then \(S_{1}\) is valid if and only if \(m+j\) is even._

Proof.: As in Figure 21, Graph (A), there are only two edges \((1,m+1)\) and \((j,m+2)\), so car \(1=\pi_{m+1}\) and car \(3=\pi_{1}\) prefer the same spot \(1\), i.e. \(p_{1}=p_{3}=1\), and car \(2=\pi_{m+2}\) and car \(j+2=\pi_{j}\) prefer the same spot \(j\), i.e. \(p_{2}=p_{j+2}=j\). Since no vertices from spot \(1\) to spot \(m\) have left edges, the preferences of the corresponding cars are just those spots. That is, we have \(p_{\pi_{k}}=p_{k+2}=k\) for any \(k\in\{1,\cdots,m\}\). As such, using the map \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\) we get the parking function \(p=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S_{1})=(1,j,1,2,\cdots,m)\).

We now wish to determine under which conditions \(S_{1}\) is valid, i.e. under which conditions \(\mathcal{O}_{\mathrm{MVP}_{m+2}}\left(p\right)=\mathrm{bipart}^{m,2}\). For this, we run the MVP parking process on \(p\), which is illustrated on Table 1 below. Since \(p_{1}=1\) and \(p_{2}=j\), cars \(1\) and \(2\) initially park in spots \(1\) and \(j\) respectively. Then recall that for \(k\geq 3\), we have \(p_{k}=k-2\). As such, car \(3\) parks in spot \(1\) and bumps car \(1\) into spot \(2\). This is followed by car \(4\) bumping car \(1\) into spot \(3\), car \(5\) bumping it into spot \(4\), and so on. Finally, after car \(j\) parks, we see that exactly the first \(j\) spots are all occupied, with the "partial" outcome \(345\cdots j12\). Then car \(j+1\) arrives, preferring spot \(j-1\), so car \(1\) will be bumped from spot \(j-1\) to the first available spot \(j+1\), yielding the updated partial outcome \(345\cdots j(j+1)21\). This is followed by car \(j+2\) bumping car \(2\) from spot \(j\) into the first unoccupied spot \(j+2\), yielding the partial outcome \(345\cdots j(j+1)(j+2)12\).
We therefore see that each car \(k>j\) will bump either car \(1\) or car \(2\), and flip their position in the partial outcome. Since, after car \(j\) had parked, cars \(1\) and \(2\) were parked in that order (i.e. the "correct" order), this implies that we will reach the correct final outcome \(\mathrm{bipart}^{m,2}=345\cdots(m+2)12\) (i.e. \(S_{1}\) will be valid) if, and only if, the number of cars parking after car \(j\) (i.e. the number of flips in the partial outcomes) is even. Since there are \(m+2-j\) cars arriving after car \(j\), this means that \(S_{1}\) is valid if, and only if, \(m+2-j\) is even, which is equivalent to \(m+j\) being even, as desired.

\begin{table}
\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline spot: & 1 & 2 & 3 & 4 & 5 & \(\cdots\) & j-2 & j-1 & j & j+1 & j+2 & \(\cdots\) & m+1 & m+2 \\
\hline
\(p_{\pi_{m+1}}=p_{1}=1:\) & 1 & - & - & - & - & \(\cdots\) & - & - & - & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{m+2}}=p_{2}=j:\) & 1 & - & - & - & - & \(\cdots\) & - & - & 2 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{1}}=p_{3}=1:\) & 3 & 1 & - & - & - & \(\cdots\) & - & - & 2 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{2}}=p_{4}=2:\) & 3 & 4 & 1 & - & - & \(\cdots\) & - & - & 2 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{3}}=p_{5}=3:\) & 3 & 4 & 5 & 1 & - & \(\cdots\) & - & - & 2 & - & - & \(\cdots\) & - & - \\
\(\vdots\) & & & & & & & & & & & & & & \\
\(p_{\pi_{j-2}}=p_{j}=j-2:\) & 3 & 4 & 5 & 6 & 7 & \(\cdots\) & j & 1 & 2 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{j-1}}=p_{j+1}=j-1:\) & 3 & 4 & 5 & 6 & 7 & \(\cdots\) & j & j+1 & 2 & 1 & - & \(\cdots\) & - & - \\
\(p_{\pi_{j}}=p_{j+2}=j:\) & 3 & 4 & 5 & 6 & 7 & \(\cdots\) & j & j+1 & j+2 & 1 & 2 & \(\cdots\) & - & - \\
\(\vdots\) & & & & & & & & & & & & & & \\
\end{tabular}
\end{table}
Table 1. The MVP parking process with \(p=(1,j,1,2,3,4,\cdots,m)\).

**Lemma 4.7**.: _For \(j\in[2,m]\), let \(S_{2}\) be the \(1\)-subgraph of \(G_{\mathrm{bipart}^{m,2}}\) with the two crossing edges \((1,m+2)\) and \((j,m+1)\) (see Figure 21, Graph (B)). Then \(S_{2}\) is valid if and only if \(m+j\) is odd._

Proof.: As in Figure 21, Graph (B), there are only two edges \((1,m+2)\) and \((j,m+1)\), so car \(2=\pi_{m+2}\) and car \(3=\pi_{1}\) prefer the same spot \(1\), i.e. \(p_{2}=p_{3}=1\), and car \(1=\pi_{m+1}\) and car \(j+2=\pi_{j}\) prefer the same spot \(j\), i.e. \(p_{1}=p_{j+2}=j\). Since no vertices from spot \(1\) to spot \(m\) have left edges, the preferences of the corresponding cars are just those spots, i.e. \(p_{\pi_{k}}=p_{k+2}=k\) for all \(k\in\{1,\cdots,m\}\). As such, using the map \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\) we get the parking function \(p=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S_{2})=(j,1,1,2,3,\cdots,m)\).

We now wish to determine under which conditions \(S_{2}\) is valid, i.e. under which conditions \(\mathcal{O}_{\mathrm{MVP}_{m+2}}\left(p\right)=\mathrm{bipart}^{m,2}\). From here, we proceed as in the proof of Lemma 4.6. Because the proof is essentially analogous, we will be much briefer in our arguments here. We first run the MVP parking process on \(p\), which is illustrated on Table 2 below. We see that after the first \(j\) cars have all parked, the first \(j\) spots are all occupied. This time however, the partial outcome at this point is \(345\cdots j21\), i.e. the positions of cars \(1\) and \(2\) are reversed compared to the desired final outcome \(\mathrm{bipart}^{m,2}\).
As in the previous case, every car arriving subsequently will park just before cars \(1\) and \(2\), and flip their order. As such, the final outcome will have \(1\), \(2\) in that order if, and only if, an odd number of cars arrives after car \(j\) (i.e. there is an odd number of flips). Since there are \(m+2-j\) cars arriving after car \(j\), this means that \(S_{2}\) is valid if, and only if, \(m+2-j\) is odd, which is equivalent to \(m+j\) being odd, as desired.

\begin{table}
\begin{tabular}{r|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline spot: & 1 & 2 & 3 & 4 & 5 & \(\cdots\) & j-2 & j-1 & j & j+1 & j+2 & \(\cdots\) & m+1 & m+2 \\
\hline
\(p_{\pi_{m+1}}=p_{1}=j:\) & - & - & - & - & - & \(\cdots\) & - & - & 1 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{m+2}}=p_{2}=1:\) & 2 & - & - & - & - & \(\cdots\) & - & - & 1 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{1}}=p_{3}=1:\) & 3 & 2 & - & - & - & \(\cdots\) & - & - & 1 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{2}}=p_{4}=2:\) & 3 & 4 & 2 & - & - & \(\cdots\) & - & - & 1 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{3}}=p_{5}=3:\) & 3 & 4 & 5 & 2 & - & \(\cdots\) & - & - & 1 & - & - & \(\cdots\) & - & - \\
\(\vdots\) & & & & & & & & & & & & & & \\
\(p_{\pi_{j-2}}=p_{j}=j-2:\) & 3 & 4 & 5 & 6 & 7 & \(\cdots\) & j & 2 & 1 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{j-1}}=p_{j+1}=j-1:\) & 3 & 4 & 5 & 6 & 7 & \(\cdots\) & j & j+1 & 1 & 2 & - & \(\cdots\) & - & - \\
\(p_{\pi_{j}}=p_{j+2}=j:\) & 3 & 4 & 5 & 6 & 7 & \(\cdots\) & j & j+1 & j+2 & 2 & 1 & \(\cdots\) & - & - \\
\(\vdots\) & & & & & & & & & & & & & & \\
\end{tabular}
\end{table}
Table 2. The MVP parking process for \(p=(j,1,1,2,3,\cdots,m)\).

Lemmas 4.6 and 4.7 imply Lemma 4.5. In particular, this shows that there are exactly \(m-1\) valid \(1\)-subgraphs of \(G_{\mathrm{bipart}^{m,2}}\) with two edges, exactly one of which is incident to the vertex \(1\). We now characterise valid \(1\)-subgraphs with two edges which are both incident to \(1\).

**Lemma 4.8**.: _For \(m\geq 1\), let \(S\in\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m,2}}\right)\) be the \(1\)-subgraph with edges \((1,m+1)\) and \((1,m+2)\) (see Figure 22). Then \(S\) is valid if and only if \(m\) is odd._

Figure 22. The \(1\)-subgraph \(S\in\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m,2}}\right)\) with edges \((1,m+1)\) and \((1,m+2)\).

Proof.: As in Figure 22, there are two edges \((1,m+1)\) and \((1,m+2)\), so car \(1=\pi_{m+1}\) and car \(2=\pi_{m+2}\) both prefer the spot \(1\), i.e. \(p_{1}=p_{2}=1\). Since no vertices from spot \(1\) to spot \(m\) have left edges, the preferences of the corresponding cars are just those spots, i.e. \(p_{\pi_{k}}=p_{k+2}=k\) for \(1\leq k\leq m\). Hence, using the map \(\Psi_{\mathrm{Sub}\to\mathrm{PF}}\) we get the parking function \(p=\Psi_{\mathrm{Sub}\to\mathrm{PF}}(S)=(1,1,1,2,3,\cdots,m)\).

We now wish to determine under which conditions \(S\) is valid, i.e. under which conditions \(\mathcal{O}_{\mathrm{MVP}_{m+2}}\left(p\right)=\mathrm{bipart}^{m,2}\). The proof proceeds along analogous lines to those of Lemmas 4.6 and 4.7, by running the MVP parking process on \(p\), which is illustrated on Table 3 below. We see that after the first two cars park we have the partial outcome \(21\), and every subsequent car arriving will flip the order of cars \(1\) and \(2\). This implies that we will reach the correct final outcome \(\mathrm{bipart}^{m,2}\) (i.e. \(S\) will be valid) if, and only if, the number of cars parking after car \(2\) is odd (i.e. there is an odd number of flips after that point).
Since there are \(m+2-2=m\) cars arriving after car \(2\), this means that \(S\) is valid if, and only if, \(m\) is odd, as desired.

\begin{table}
\begin{tabular}{r|c|c|c|c|c|c|c|c|c}
\hline spot: & 1 & 2 & 3 & 4 & 5 & 6 & \(\cdots\) & m+1 & m+2 \\
\hline
\(p_{\pi_{m+1}}=p_{1}=1:\) & 1 & - & - & - & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{m+2}}=p_{2}=1:\) & 2 & 1 & - & - & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{1}}=p_{3}=1:\) & 3 & 1 & 2 & - & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{2}}=p_{4}=2:\) & 3 & 4 & 2 & 1 & - & - & \(\cdots\) & - & - \\
\(p_{\pi_{3}}=p_{5}=3:\) & 3 & 4 & 5 & 1 & 2 & - & \(\cdots\) & - & - \\
\(\vdots\) & & & & & & & & & \\
\end{tabular}
\end{table}
Table 3. The MVP parking process with \(p=(1,1,1,2,3,\cdots,m)\).

We are now equipped with all the necessary ingredients to prove Theorem 4.2.

Proof of Theorem 4.2.: We proceed by induction on \(m\geq 0\). For \(m=0\), we have \(\pi=\mathrm{bipart}^{0,2}=12\), which has no inversions. As such, the only valid \(1\)-subgraph is the empty graph with no edges, so \(|\mathcal{O}_{\mathrm{MVP}_{2}}^{-1}\left(12\right)|=1=0+1+\lfloor\frac{(0+1)^{2}}{2}\rfloor\), as desired. For clarity, we also explicitly consider the case \(m=1\). In that case, we have \(\pi=\mathrm{bipart}^{1,2}=312\). There are four \(1\)-subgraphs of \(G_{\pi}\), which are all valid by Theorem 2.8 (see Example 2.9). Therefore \(|\mathcal{O}_{\mathrm{MVP}_{3}}^{-1}\left(312\right)|=4=1+1+\lfloor\frac{(1+1)^{2}}{2}\rfloor\), as desired.

Suppose now that the formula holds for \(m-1\) for some \(m\geq 1\). We wish to count valid \(1\)-subgraphs of \(G_{\mathrm{bipart}^{m,2}}\). Lemma 4.3 tells us that the number of these where \(1\) is isolated is equal to \(|\mathcal{O}_{\mathrm{MVP}_{m+1}}^{-1}\left(\mathrm{bipart}^{m-1,2}\right)|\). It remains to count those where \(1\) is not isolated. There are two possibilities: the first where the subgraph has a single edge (incident to \(1\)), and the second where it has two edges.

For the first case, any \(1\)-subgraph consisting of a single edge is valid (see Proposition 2.13 and the following remarks). There are therefore two valid \(1\)-subgraphs of \(G_{\mathrm{bipart}^{m,2}}\) with a single edge which is incident to \(1\), corresponding to the edges \((1,m+1)\) or \((1,m+2)\) (see Figure 23).

Figure 23. The two subgraphs of \(\mathrm{Sub}^{1}\left(G_{\mathrm{bipart}^{m,2}}\right)\) containing a single edge incident to \(1\) and no other edges: (A) the subgraph with edge \((1,m+1)\); (B) the subgraph with edge \((1,m+2)\). Both subgraphs are valid.

It therefore remains to count the number of valid \(1\)-subgraphs of \(G_{\mathrm{bipart}^{m,2}}\) with two edges, at least one of which is incident to \(1\). As explained, Lemma 4.5 implies that there are exactly \(m-1\) such \(1\)-subgraphs with exactly one edge incident to \(1\). Finally, Lemma 4.8 says that the \(1\)-subgraph with two edges incident to \(1\) is valid if, and only if, \(m\) is odd. Bringing this together and applying the induction hypothesis, we get

\[|\mathcal{O}_{\mathrm{MVP}_{m+2}}^{-1}\left(\mathrm{bipart}^{m,2}\right)|=|\mathcal{O}_{\mathrm{MVP}_{m+1}}^{-1}\left(\mathrm{bipart}^{m-1,2}\right)|+2+(m-1)+\varepsilon_{m}=m+\lfloor\frac{m^{2}}{2}\rfloor+m+1+\varepsilon_{m},\]

where \(\varepsilon_{m}=1\) if \(m\) is odd, and \(\varepsilon_{m}=0\) if \(m\) is even. It is then straightforward to obtain the desired formula.
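Theorem 4.2 can be sanity-checked by brute force for small \(m\), reusing the `mvp_outcome` sketch from Section 3 (the helper names here are ours, and the search is exponential, so only small cases are feasible):

```python
from itertools import product

def bipart(m, n=2):
    """The permutation bipart^{m,n} = (n+1)(n+2)...(n+m) 1 2 ... n."""
    return tuple(range(n + 1, n + m + 1)) + tuple(range(1, n + 1))

def fibre_size(pi):
    """Count all parking preferences whose MVP outcome is pi."""
    N = len(pi)
    return sum(1 for p in product(range(1, N + 1), repeat=N)
               if mvp_outcome(p) == pi)

# Check the closed formula of Theorem 4.2 for m = 0, ..., 4:
for m in range(5):
    assert fibre_size(bipart(m)) == m + 1 + (m + 1) ** 2 // 2
```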
## 5. Interpreting the MVP outcome through the Abelian sandpile model

In this section we give an alternate interpretation of the MVP outcome map \(\mathcal{O}_{\mathrm{MVP}}\) in terms of the so-called _Abelian sandpile model_ (ASM). The ASM is a dynamic process on a graph. It was first introduced by Bak, Tang and Wiesenfeld [2] as an example of a process exhibiting the phenomenon known as _self-organised criticality_. Later, Dhar [12] formalised and named the model. We begin by introducing the model and important related concepts. In this work, we will only be concerned with the connection between the ASM and parking functions, which occurs on the complete graph \(K_{n}\) (this is the graph with vertex set \([n]\) and one edge between any pair of (distinct) vertices). As such, we restrict ourselves to this setting here, essentially ignoring the geometry of the underlying graph.

### The Abelian sandpile model on complete graphs

A (sandpile) _configuration_ is a vector \(c=(c_{1},\cdots,c_{n})\in\mathbb{Z}_{+}^{n}\) which assigns a non-negative integer to each vertex. We think of \(c_{i}\) as denoting the number of grains of sand at vertex \(i\). We denote \(\mathrm{Config}_{n}\) the set of sandpile configurations with \(n\) vertices, called \(n\)-configurations for short. For \(i\in[n]\), let \(\alpha^{i}\in\mathrm{Config}_{n}\) be the configuration such that \(\alpha^{i}_{i}=1\) and \(\alpha^{i}_{j}=0\) for all \(j\neq i\).

Given a configuration \(c\) and a vertex \(i\in[n]\), if \(c_{i}<n\), then the vertex \(i\) is said to be _stable_ in the configuration \(c\). Otherwise the vertex \(i\) is _unstable_. If all vertices in a configuration are stable, the configuration is stable, and we denote the set of all stable \(n\)-configurations \(\operatorname{Stable}_{n}\). Unstable vertices topple as follows. For a configuration \(c\in\operatorname{Config}_{n}\) and a vertex \(i\in[n]\) which is unstable in \(c\), we define the _toppling operator_ at vertex \(i\), denoted \(\operatorname{Topp}_{i}\), by:

\[\operatorname{Topp}_{i}(c):=c-n\cdot\alpha^{i}+\sum_{j\neq i}\alpha^{j}, \tag{4}\]

where the addition operator on configurations denotes pointwise addition at each vertex. In words, the toppling of a vertex \(i\) sends one grain of sand from \(i\) to each of the remaining \(n-1\) vertices, and one additional grain exits the system. Other vertices may become unstable after performing this toppling once, and we topple these also. Because whenever we topple an unstable vertex one grain of sand exits the system, it is straightforward to show that, starting from an unstable configuration \(c\) and successively toppling unstable vertices, we will eventually reach a stable configuration \(c^{\prime}\). We say that a sequence \(S:=v_{1},\cdots,v_{k}\) of vertices is a _toppling sequence_ for \(c\) if, for any \(j<k\), the vertex \(v_{j+1}\) is unstable in the configuration \(\operatorname{Topp}_{v_{j}}\cdots\operatorname{Topp}_{v_{1}}(c)\), and \(\operatorname{Topp}_{v_{k}}\cdots\operatorname{Topp}_{v_{1}}(c)\in\operatorname{Stable}_{n}\). In words, starting from the configuration \(c\), we can topple vertices of \(S\) in order, and obtain a stable configuration after toppling them all.
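On \(K_{n}\), the toppling rule (4) and stabilisation take only a few lines of code; a minimal sketch (function name ours) is:

```python
def stabilise(c):
    """Stabilise a sandpile configuration on the complete graph K_n: while
    some vertex holds at least n grains, topple it, i.e. send one grain to
    each of the other n - 1 vertices (one grain leaves the system)."""
    c = list(c)
    n = len(c)
    while True:
        unstable = [i for i in range(n) if c[i] >= n]
        if not unstable:
            return tuple(c)
        i = unstable[0]  # the order of topplings does not matter (see below)
        c[i] -= n
        for j in range(n):
            if j != i:
                c[j] += 1
```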
Dhar showed (see e.g. [13, Section 5.2]) that all toppling sequences are equivalent up to permutation of the vertices, and that the stable configuration \(c^{\prime}\) reached after toppling all vertices in a toppling sequence \(S\) does not depend on the order of \(S\). We call this \(c^{\prime}\) the _stabilisation_ of \(c\), and denote it \(\operatorname{Stab}(c)\).

We now define a Markov chain on the set \(\operatorname{Stable}_{n}\) of stable configurations. Fix a probability distribution \(\mu=(\mu_{i})_{i\in[n]}\) on \([n]\) such that \(\mu_{i}>0\) for all \(i\in[n]\). At each step of the Markov chain we add a grain at the vertex \(i\) with probability \(\mu_{i}\) and stabilise the resulting configuration. The _recurrent_ configurations are those that appear infinitely often in the long-time running of this Markov chain. We denote \(\operatorname{Rec}_{n}\) the set of recurrent \(n\)-configurations. The study of recurrent configurations has been of central importance in ASM research (see e.g. [22] and references therein). Here we recall a classical characterisation of recurrent configurations: the so-called _burning algorithm_ due to Dhar [13, Section 6.1], which provides a simple algorithmic process to check whether a given configuration is recurrent.

**Theorem 5.1**.: _Let \(n\geq 1\), and \(c\in\operatorname{Stable}_{n}\) be a stable configuration. We define \(\tilde{c}:=c+\sum\limits_{i\in[n]}\alpha^{i}\) to be the configuration obtained by adding one grain to each vertex in \(c\). Then \(c\) is recurrent if, and only if, there exists a toppling sequence \(S\) for \(\tilde{c}\) in which each vertex of \([n]\) appears exactly once. Moreover, in this case, we have \(\operatorname{Stab}\left(\tilde{c}\right)=c\)._

There is a natural partial order on the set \(\operatorname{Rec}_{n}\) of recurrent configurations. For two configurations \(c,c^{\prime}\in\operatorname{Rec}_{n}\), we define \(c\preceq c^{\prime}\) if, and only if, \(c_{i}\leq c^{\prime}_{i}\) for all \(i\in[n]\). A _minimal recurrent configuration_ is a recurrent configuration which is minimal for this partial order. In words, a minimal recurrent configuration is a recurrent configuration where the removal of one grain of sand from any vertex would cause the configuration to no longer be recurrent. We denote \(\operatorname{MinRec}_{n}\) the set of minimal recurrent \(n\)-configurations. The following result appears in various parts of the literature (see e.g. [21]).

**Proposition 5.2**.: _Let \(n\geq 1\), and \(c\in\operatorname{Config}_{n}\) be a configuration. Then \(c\) is minimal recurrent if, and only if, \(c\) is a permutation of the set \(\{0,\cdots,n-1\}\)._

Given a minimal recurrent \(n\)-configuration \(c\in\mathrm{MinRec}_{n}\), we define the _canonical toppling_ of \(c\) to be the permutation \(\pi=\pi_{1}\cdots\pi_{n}\in S_{n}\) where for each \(i\), \(\pi_{i}\) is the unique index \(j\) such that \(c_{j}=n-i\). We denote this permutation \(\mathrm{CanonTopp}\left(c\right)\).

**Example 5.3**.: Consider the minimal recurrent configuration \(c=(2,4,3,0,1)\in\mathrm{MinRec}_{5}\). We calculate \(\pi:=\mathrm{CanonTopp}\left(c\right)\). By construction, \(\pi_{1}\) is the vertex with \(5-1=4\) grains, i.e. \(\pi_{1}=2\), \(\pi_{2}\) is the vertex with \(3\) grains, i.e. \(\pi_{2}=3\), and so on. Finally we get \(\mathrm{CanonTopp}\left(c\right)=23154\).
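Theorem 5.1 translates into a short recurrence test on \(K_{n}\); a sketch (function name ours) follows. Consistent with Proposition 5.2, the minimal recurrent configuration of Example 5.3 passes the test.

```python
def is_recurrent(c):
    """Dhar's burning test on K_n (Theorem 5.1): add one grain to every
    vertex and check that each vertex can be toppled exactly once."""
    n = len(c)
    c = [x + 1 for x in c]  # the configuration c-tilde
    toppled = set()
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if i not in toppled and c[i] >= n:
                c[i] -= n
                for j in range(n):
                    if j != i:
                        c[j] += 1
                toppled.add(i)
                progress = True
    return len(toppled) == n

assert is_recurrent((2, 4, 3, 0, 1))      # Example 5.3, minimal recurrent
assert not is_recurrent((0, 0, 2))        # a stable but non-recurrent case
```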
The terminology _canonical toppling_ comes from the following observation. For \(c\in\mathrm{Rec}_{n}\), Dhar's burning criterion says that there must be a toppling sequence \(S=\pi_{1},\cdots,\pi_{n}\) for \(\tilde{c}\) in which each vertex of \([n]\) appears exactly once, i.e. \(S\) can be thought of as a permutation of \([n]\). Given Proposition 5.2, it is straightforward to see that \(\mathrm{CanonTopp}\left(c\right)\) is in fact the only possible toppling sequence for \(\tilde{c}\) if \(c\) is minimal recurrent.

### The connection between ASM and MVP parking functions

In this section we will link the ASM to the MVP outcome map. We first recall the bijection between recurrent configurations and parking functions from the seminal work by Cori and Rossin [11].

**Theorem 5.4**.: _Let \(n\geq 1\). For a configuration \(c=(c_{1},\cdots,c_{n})\in\mathrm{Config}_{n}\), define \(p=(p_{1},\cdots,p_{n}):=(n-c_{1},\cdots,n-c_{n})\) to be the \(n\)-complement of \(c\) (we write \(p=n-c\) for short). Then \(c\) is recurrent if, and only if, \(p\) is a (MVP) parking function. Thus the map \(c\mapsto n-c\) defines a bijection from \(\mathrm{Rec}_{n}\) to \(\mathrm{MVP}_{n}\) (\(=\mathrm{PF}_{n}\))._

Our main goal in this section is an interpretation of the MVP outcome of a given parking function \(p\) through the recurrent configuration corresponding to \(p\) via the bijection above. We begin by describing an algorithm which reduces a recurrent configuration \(c\) to a minimal recurrent configuration \(c^{\prime}\preceq c\).

**Algorithm 5.5**.: Given an input recurrent configuration \(c\), we modify \(c\) as follows. In each iteration, we look for the first pair of duplicate values in \(c\).

1. If \(c\) contains no duplicate values, return \(c\). Otherwise, define \(j:=\min\{j^{\prime}\in[n];\,c_{j^{\prime}}\in\{c_{1},\cdots,c_{j^{\prime}-1}\}\}\) to be the index of the first duplicate value encountered in \(c\), and \(i\) to be the index such that \(i<j\) and \(c_{i}=c_{j}\).
2. While there exists \(j^{\prime}\in\{1,\cdots,j\}\setminus\{i\}\) such that \(c_{j^{\prime}}=c_{i}\) (i.e. \(c_{i}\) is a duplicate value in \(\{c_{1},\cdots,c_{j}\}\)), decrease the value of \(c_{i}\) by one (\(c_{i}=c_{i}-1\)).

Repeat Steps (1) and (2) until \(c\) is returned.

**Remark 5.6**.: Algorithm 5.5 necessarily terminates, as each time we return to Step (1), \(j\) strictly increases. Moreover, by construction, the algorithm returns a configuration \(c\) such that \(c_{i}\neq c_{j}\) for all \(i\neq j\).

**Example 5.7**.: Consider the recurrent configuration \(c=(11,9,5,8,1,9,4,8,4,9,10,0)\). We wish to determine the output of Algorithm 5.5.

**Iteration 1**: Starting from index \(1\), the value \(9\) is the first duplicate value encountered (it appears at indices \(2\), \(6\) and \(10\)).
* In Step (1), we set \(j\) to be the index of the first duplicate, i.e. \(j=6\), and \(i\) to be the index where that value was previously encountered, i.e. \(i=2\).
* In Step (2), we decrement \(c_{i}=c_{2}\) until it is no longer a value encountered elsewhere up to index \(j=6\). In this case the values \(9\) and \(8\) are already present at indices \(6\) and \(4\) respectively, so we finally set \(c_{2}=7\).
After this first iteration, the configuration is \(c=(11,7,5,8,1,9,4,8,4,9,10,0)\).

**Iteration 2**: Starting from index \(1\), the value \(8\) is the first duplicate value encountered.
* In Step (1), we set \(j\) to be the index of the first duplicate, i.e. \(j=8\), and \(i\) to be the index where that value was previously encountered, i.e. \(i=4\).
* In Step (2), we decrement \(c_{i}=c_{4}\) until it is no longer a value encountered elsewhere up to index \(j=8\). In this case the values \(8\) and \(7\) are already present at indices \(8\) and \(2\) respectively, so we finally set \(c_{4}=6\). The configuration is now \(c=(11,7,5,6,1,9,4,8,4,9,10,0)\).

**Iteration 3**: Starting from index \(1\), the value \(4\) is the first duplicate value encountered.
* In Step (1), we set \(j\) to be the index of the first duplicate, i.e. \(j=9\), and \(i\) to be the index where that value was previously encountered, i.e. \(i=7\).
* In Step (2), we decrement \(c_{i}=c_{7}\) until it is no longer a value encountered elsewhere up to index \(j=9\). In this case only the value \(4\) is already present at index \(9\), so we finally set \(c_{7}=3\). The configuration is now \(c=(11,7,5,6,1,9,3,8,4,9,10,0)\).

**Iteration 4**: Starting from index \(1\), the value \(9\) is the first duplicate value encountered.
* In Step (1), we set \(j\) to be the index of the first duplicate, i.e. \(j=10\), and \(i\) to be the index where that value was previously encountered, i.e. \(i=6\).
* In Step (2), we decrement \(c_{i}=c_{6}\) until it is no longer a value encountered elsewhere up to index \(j=10\). This time the values \(9\) (index \(10\)), \(8\) (index \(8\)), \(7\) (index \(2\)), \(6\) (index \(4\)), \(5\) (index \(3\)), \(4\) (index \(9\)), and \(3\) (index \(7\)) all already appear, so we finally set \(c_{6}=2\).

At this point we have reached a configuration with no duplicate values. Hence, Algorithm 5.5 outputs the final configuration \(c=(11,7,5,6,1,2,3,8,4,9,10,0)\).
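Algorithm 5.5 is easy to transcribe; the following sketch (function name ours, with 0-indexed positions internally) reproduces Example 5.7.

```python
def minrec(c):
    """Algorithm 5.5: find the first position j whose value duplicates an
    earlier value at position i, then decrement c[i] until it clashes with
    no other value at positions up to j; repeat until no duplicates remain."""
    c = list(c)
    n = len(c)
    while True:
        dups = [j for j in range(n) if c[j] in c[:j]]
        if not dups:
            return tuple(c)
        j = dups[0]                  # Step (1): first duplicate position
        i = c.index(c[j])            # the earlier occurrence of that value
        while any(c[k] == c[i] for k in range(j + 1) if k != i):
            c[i] -= 1                # Step (2)

# Example 5.7:
assert minrec((11, 9, 5, 8, 1, 9, 4, 8, 4, 9, 10, 0)) \
       == (11, 7, 5, 6, 1, 2, 3, 8, 4, 9, 10, 0)
```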
We may notice on the above example that the final configuration output is in fact a permutation of the set \(\{0,\cdots,11\}\), i.e. a minimal recurrent \(12\)-configuration. This turns out to be true in general, and is one of the main results of this section.

**Proposition 5.8**.: _Given an input \(c\in\mathrm{Rec}_{n}\), the output of Algorithm 5.5 is a permutation of the set \(\{0,\cdots,n-1\}\), and thus a minimal recurrent \(n\)-configuration. We denote it \(\mathrm{minrec}\left(c\right)\). As such, we may view the algorithm as a map \(\mathrm{minrec}:\mathrm{Rec}_{n}\to\mathrm{MinRec}_{n}\)._

Proof.: Given that by construction the output configuration has no duplicate values, by Proposition 5.2 it is sufficient to show that the algorithm always outputs a recurrent configuration. In fact, it is sufficient to show that recurrence is preserved through any single decrement in Step (2). In particular, it suffices to show that if \(c\) is a recurrent configuration, and \(i\neq j\in[n]\) are such that \(c_{i}=c_{j}\) (i.e. a duplicated value in \(c\)), then the configuration \(c^{\prime}:=c-\alpha^{i}\), resulting from removing one single grain of sand from \(i\), is also recurrent.

To show this, we first apply Dhar's burning algorithm (Theorem 5.1) to \(c\). Since \(c\) is recurrent, there exists a toppling sequence \(S=v_{1},\cdots,v_{n}\) for \(\tilde{c}:=c+\sum\limits_{i\in[n]}\alpha^{i}\). Without loss of generality we may assume that \(j\) appears before \(i\) in \(S\): otherwise we simply swap them, and this does not affect the ability of any vertex to topple in \(S\). Furthermore, we may assume that \(i\) immediately follows \(j\) in \(S\). Indeed, if this is not the case, we move \(i\) to immediately follow \(j\), yielding a new sequence \(S^{\prime}=v_{1},\cdots,v_{k}=j,v_{k+1}=i,\cdots,v_{n}\). Since initially \(c_{i}=c_{j}\), and \(i\) and \(j\) receive the same number of grains through toppling \(v_{1},\cdots,v_{k-1}\), we see that after toppling \(v_{1},\cdots,v_{k-1}\), \(i\) and \(j\) still have the same number of grains. This implies that they are both unstable at that point (since \(j\) has to be in order for \(S\) to be a toppling sequence). Moreover, subsequent vertices receive at most one extra grain in \(S^{\prime}\) compared to \(S\), which does not affect their capacity to topple. Therefore \(S^{\prime}\) is also a toppling sequence for \(\tilde{c}\).

We claim that \(S^{\prime}\) is also a valid toppling sequence for \(\tilde{c^{\prime}}\). Indeed, given that the only difference between \(c\) and \(c^{\prime}\) is at the vertex \(i\), it is sufficient to show that, starting from the configuration \(\tilde{c^{\prime}}\), the vertex \(i\) is unstable after toppling \(v_{1},\cdots,v_{k}=j\) in \(S^{\prime}\) (the capacity of other vertices to topple will be the same in \(\tilde{c^{\prime}}\) as in \(\tilde{c}\)). But by definition we have \(c^{\prime}_{i}=c_{i}-1=c_{j}-1\). Moreover, after toppling \(v_{1},\cdots,v_{k-1}\), the vertex \(j\) must be unstable. In particular, at that point the vertex \(i\) requires at most one extra grain to in turn become unstable, and it receives that grain when vertex \(v_{k}=j\) topples. Thus \(S^{\prime}\) is also a toppling sequence for \(\tilde{c^{\prime}}\), which implies that \(c^{\prime}\) is recurrent by Theorem 5.1. This completes the proof.

We are now ready to state the main result of this section.

**Theorem 5.9**.: _Given an input \(c\in\operatorname{Rec}_{n}\), Algorithm 5.5 yields an output \(\operatorname{minrec}\left(c\right)\in\operatorname{MinRec}_{n}\). Moreover, if \(p(c)=n-c\) is the parking function corresponding to \(c\) via the bijection of Cori and Rossin (Theorem 5.4), then we have \(\mathcal{O}_{\operatorname{MVP}_{n}}\left(p(c)\right)=\operatorname{CanonTopp}\left(\operatorname{minrec}\left(c\right)\right)\)._

In words, given a parking function \(p\in\operatorname{MVP}_{n}\), we can describe its outcome by applying Algorithm 5.5 to the corresponding recurrent configuration \(c=n-p\in\operatorname{Rec}_{n}\), and taking the canonical toppling of the output minimal recurrent configuration \(\operatorname{minrec}\left(c\right)\). To prove Theorem 5.9, we need to prove two things: that the MVP outcome of a parking function \(p\) is unchanged through applying Algorithm 5.5 to the corresponding recurrent configuration \(c=n-p\) (Lemma 5.10), and that the canonical toppling does indeed give the correct MVP outcome for a minimal recurrent configuration (Lemma 5.11).

**Lemma 5.10**.: _Let \(c\in\operatorname{Rec}_{n}\) be a recurrent configuration, and \(c^{\prime}\) be the recurrent configuration obtained from applying a single iteration of Algorithm 5.5 (Steps (1) and (2)) to \(c\)._
_Let \(p=n-c\) and \(p^{\prime}=n-c^{\prime}\) be the MVP parking functions corresponding to \(c\) and \(c^{\prime}\) respectively. Then we have \(\mathcal{O}_{\operatorname{MVP}_{n}}\left(p^{\prime}\right)=\mathcal{O}_{\operatorname{MVP}_{n}}\left(p\right)\)._

Proof.: Let \(c\in\operatorname{Rec}_{n}\) be a recurrent configuration, and \(i,j\in[n]\) as in Step (1) of Algorithm 5.5. In the corresponding MVP parking function, this means that we have \(p_{i}=p_{j}=n-c_{i}=n-c_{j}\), and moreover that \(j\) is the first index satisfying this equality, i.e. car \(j\) is the first car to produce a collision in the MVP parking process. Now let \(c^{\prime}\), \(p\), \(p^{\prime}\) be as in the statement of the lemma. That is, \(c^{\prime}_{i}\) is the maximal _value_ less than or equal to \(c_{j}\) such that no other _vertex_ \(j^{\prime}\leq j\) has the same _number of grains_ (i.e. \(c^{\prime}_{i}\neq c_{j^{\prime}}\) for all \(j^{\prime}\leq j\)). In the corresponding parking function \(p^{\prime}\), this implies that \(p^{\prime}_{i}\) is the minimal _spot_ greater than or equal to \(p_{j}\) such that no other _car_ \(j^{\prime}\leq j\) has the same _preference_ (i.e. \(p^{\prime}_{i}\neq p_{j^{\prime}}\) for all \(j^{\prime}\leq j\)). In other words, in the MVP parking process for \(p\), \(p^{\prime}_{i}\) is exactly the spot that car \(i\) will move to when it is bumped out of its original preference \(p_{i}\) by car \(j\). Also note that this is the only difference between the MVP parking functions \(p\) and \(p^{\prime}\). It is then straightforward to see that the outcome of the entire MVP parking process will be the same whether car \(i\) initially prefers spot \(p_{i}\) and is first bumped to \(p^{\prime}_{i}\) by car \(j\) (as in the parking process for \(p\)), or whether car \(i\) initially prefers spot \(p^{\prime}_{i}\) directly without needing to be bumped there (as in the parking process for \(p^{\prime}\)). This implies that \(p\) and \(p^{\prime}\) have the same MVP outcome, as desired.

**Lemma 5.11**.: _If \(c\in\operatorname{MinRec}_{n}\) is a minimal recurrent configuration, and \(p=p(c):=n-c\) is the corresponding MVP parking function, then we have \(\mathcal{O}_{\operatorname{MVP}_{n}}\left(p(c)\right)=\operatorname{CanonTopp}\left(c\right)\)._

Proof.: Since \(c\) is minimal recurrent, it is a permutation of \(\{0,\cdots,n-1\}\). By construction, the first vertex recorded in the canonical toppling sequence \(\operatorname{CanonTopp}\left(c\right)\) is the unique \(k\in[n]\) such that \(c_{k}=n-1\). In terms of the parking function, this means that \(k\) is the unique index in \([n]\) such that \(p_{k}=1\), i.e. the car that ends up in spot \(1\). In other words, the first vertex recorded in \(\operatorname{CanonTopp}\left(c\right)\) is the first car recorded in \(\mathcal{O}_{\operatorname{MVP}_{n}}\left(p(c)\right)\). Iterating this reasoning on the remaining vertices/cars immediately yields the desired result.
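Combining the `minrec` sketch above with a small canonical-toppling helper (names ours), Theorem 5.9 can be checked directly on examples, reusing `mvp_outcome` from Section 3:

```python
def canon_topp(c):
    """CanonTopp(c) for minimal recurrent c: pi_i is the vertex (numbered
    from 1) holding n - i grains."""
    n = len(c)
    return tuple(c.index(n - i) + 1 for i in range(1, n + 1))

assert canon_topp((2, 4, 3, 0, 1)) == (2, 3, 1, 5, 4)   # Example 5.3

# Theorem 5.9 on the configuration of Example 5.7:
c = (11, 9, 5, 8, 1, 9, 4, 8, 4, 9, 10, 0)
p = tuple(len(c) - x for x in c)                         # p = n - c
assert mvp_outcome(p) == canon_topp(minrec(c))
```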
**Remark 5.12**.: Since MVP parking functions are also classical parking functions, we may ask how the outcome map \(\mathcal{O}_{\operatorname{PF}}\) for classical parking functions translates to the ASM through the Cori-Rossin bijection. In fact, this can be done in analogous fashion, with a minor modification to Algorithm 5.5. All that is needed is to replace the decrement of \(c_{i}\) in Step (2) with the decrement of \(c_{j}\) (in fact, in this alternate version there is no need to define the index \(i\) at all). Indeed, in the proof of Lemma 5.10 we saw that the decrement of \(c_{i}\) in Step (2) corresponds to car \(i\) being bumped out of its preferred spot by car \(j\). In the classical case, this is replaced by car \(j\) driving on to the first available spot (instead of car \(i\)), so we decrease \(c_{j}\) instead in the corresponding recurrent configuration.

## 6. Discussion and future work

In this paper, we have investigated the outcome fibres of MVP parking functions. We have represented the fibre of a given outcome permutation \(\pi\) as certain _valid_ subgraphs of the inversion graph \(G_{\pi}\) of \(\pi\). In turn, this produced new and improved upper and lower bounds on the fibre size \(\left|\mathcal{O}_{\operatorname{MVP}}^{-1}\left(\pi\right)\right|\). It remains an open problem to fully characterise which \(1\)-subgraphs of \(G_{\pi}\) are valid, or equivalently which are invalid, purely in terms of the subgraphs (without needing to check by running the MVP parking process). In [17, Theorem 3.2] it was shown that there exist invalid \(1\)-subgraphs if, and only if, the permutation \(\pi\) contains at least one of the patterns \(321\) and \(3412\).

In the case of the pattern \(321\), Proposition 2.12 gives a general characterisation of a "forbidden motif" that would always render a subgraph \(S\) invalid, in terms of the existence of a directed path \(\overrightarrow{P_{2}}\) in the subgraph \(S\). However, this is not the only forbidden motif. Indeed, Example 2.10 gives the example of a subgraph \(S\in\operatorname{Sub}^{1}\left(G_{321}\right)\) with edges \((1,2)\) and \((1,3)\) which is invalid. But this particular motif does not translate to the general case, as detailed in Remark 3.8. In the case of the pattern \(3412\), the situation is also complex. Indeed, in Section 4 we fully characterised the valid \(1\)-subgraphs of the complete bipartite graph \(K_{m,2}\). However, as we saw in Lemmas 4.6 and 4.7, which subgraphs are valid depends on some parity conditions. As such, it seems difficult to hope for a general expression of forbidden motifs in \(1\)-subgraphs of \(G_{\pi}\) when \(\pi\) contains the pattern \(3412\).

More generally, we noted in Theorem 2.8 that permutations avoiding the patterns \(321\) and \(3412\) are exactly those whose inversion graphs are acyclic. However, the arguments for this statement in both this paper and in the previous work by Harris _et al._ [17] are quite _ad hoc_, relying on a case-by-case analysis of how the MVP parking process behaves in those cases. It seems plausible that there is some sort of "meta", more general, explanation for why the appearance of a cycle in the inversion graph forces the existence of invalid \(1\)-subgraphs, but we have been unable to find such an explanation.

One might also ask how much improvement our upper and lower bounds on fibre sizes give, and how close these bounds are to the actual fibre size. In the general case this depends quite strongly on the permutation \(\pi\). Here we provide a table in the case where \(\pi=\operatorname{dec}^{n}\) is the decreasing permutation (i.e. \(G_{\pi}=K_{n}\)) for the first few values of \(n\). In this table, we calculate the total number of \(1\)-subgraphs (the original upper bound from [17, Theorem 3.1]), the number of \(\overrightarrow{P_{2}}\)-free \(1\)-subgraphs (our new upper bound), the number of valid subgraphs (i.e. the fibre size), and the number of HS \(1\)-subgraphs (our new lower bound).
We can see that, while neither bound appears particularly tight as \(n\) grows, we do get quite an improvement on the previous upper bound by imposing the \(\overrightarrow{P_{2}}\)-free condition. The somewhat attentive reader will have noticed that the number of HS 1-subgraphs of \(G_{\operatorname{dec}^{n}}\) appears to be \(2^{n-1}\). This can be established as follows. Given a HS subgraph \(S\), and an edge \(e=(i,j)\in S\) (with \(i<j\)), we define \(P_{e}:=\{i,i+1,\cdots,j-1\}\subseteq[n-1]\). It is reasonably straightforward to check that the map \(S\mapsto\bigcup\limits_{e\in S}P_{e}\) is a bijection between HS 1-subgraphs of \(G_{\operatorname{dec}^{n}}\) and subsets of \([n-1]\). The highly attentive and eagle-eyed reader will have recognised that the \(\overrightarrow{P_{2}}\)-free numbers \(1,2,5,\cdots\) correspond to the well-studied _Bell numbers_ given by Sequence A000110 in the OEIS [28]. This can be established by giving a bijection between set partitions of \([n]\), which are counted by the Bell numbers (see the OEIS entry), and \(\overrightarrow{P_{2}}\)-free subgraphs of \(G_{\operatorname{dec}^{n}}\), \(\mathcal{P}\mapsto S\), as follows. For each part \(P\) in a set partition \(\mathcal{P}\) of \([n]\), we put edges in \(S\) between the minimal element of \(P\) and all the other elements of \(P\) (if \(P\) is reduced to a single element \(i\), then \(i\) is an isolated vertex in \(S\)). By construction, this gives a \(\overrightarrow{P_{2}}\)-free 1-subgraph \(S\), and it is reasonably straightforward to check that this construction is bijective (a short computational check of these counts is given below). We should note that the above bijections on \(\overrightarrow{P_{2}}\)-free and HS 1-subgraphs can be extended to the setting of general permutations, with additional restrictions imposed by the geometry of the inversion graph. For example, in the general setting, the \(\overrightarrow{P_{2}}\)-free subgraphs map bijectively to partitions of \([n]\) where, in each part, the minimal element is incident to all other elements. \begin{table} \begin{tabular}{c|c|c|c|c} \(n\) & 1-subgraphs & \(\overrightarrow{P_{2}}\)-free & valid & HS \\ \hline 1 & 1 & 1 & 1 & 1 \\ 2 & 2 & 2 & 2 & 2 \\ 3 & 6 & 5 & 4 & 4 \\ 4 & 24 & 15 & 9 & 8 \\ 5 & 120 & 52 & 21 & 16 \\ 6 & 720 & 203 & 51 & 32 \\ 7 & 5040 & 877 & 127 & 64 \\ 8 & 40320 & 4140 & 323 & 128 \\ 9 & 362880 & 21147 & 835 & 256 \\ \end{tabular} \end{table} Table 4. Comparing our new bounds to the actual fibre size in the case of the decreasing permutation \(\operatorname{dec}^{n}\). Another avenue of research explored in this paper concerns the study of certain subsets of MVP parking functions. In Section 3 we focused on Motzkin parking functions, where each spot is preferred by at most two cars. In particular, if we restrict ourselves to MVP parking functions whose outcome is the decreasing permutation \(\operatorname{dec}^{n}:=n(n-1)\cdots 1\), we get bijective correspondences with Motzkin paths and non-crossing matching arc diagrams. Then, in Section 4, we studied the parking functions in the fibre of the permutation \(\operatorname{bipart}^{m,n}:=(n+1)(n+2)\cdots(n+m)12\cdots n\) whose corresponding inversion graph is the complete bipartite graph \(K_{m,n}\). In particular, via a careful analysis of which \(1\)-subgraphs are valid, we were able to obtain an explicit enumeration for the fibre size in the case \(n=2\). For more general values of \(n\), Table 5 gives the numbers of MVP parking functions whose outcome is \(\operatorname{bipart}^{m,n}\). 
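The counts above are easy to check by machine. The following short Python sketch is our own illustration (not from the paper): it encodes a 1-subgraph of \(G_{\operatorname{dec}^{n}}=K_{n}\) as a choice, for each vertex, of at most one neighbour to its left, an encoding consistent with the 1-subgraph totals \(n!\) in Table 4, and the \(\overrightarrow{P_{2}}\)-free counts then come out as the Bell numbers \(1,2,5,15,52,\ldots\)

```python
from itertools import product

def count_one_subgraphs_Kn(n):
    """Enumerate 1-subgraphs of K_n, encoded as a tuple `parent` where
    parent[v - 2] in {0, 1, ..., v - 1} gives vertex v's unique left
    neighbour (0 = no left edge).  Returns (total, P2-free) counts."""
    total = p2_free = 0
    for parent in product(*(range(v) for v in range(2, n + 1))):
        total += 1
        # a directed P2 means two consecutive edges: some vertex has a
        # left edge to p, and p itself also has a left edge
        if not any(parent[p - 2] for p in parent if p >= 2):
            p2_free += 1
    return total, p2_free

for n in range(1, 8):
    print(n, count_one_subgraphs_Kn(n))
# totals 1, 2, 6, 24, 120, ... (n!); P2-free 1, 2, 5, 15, 52, ... (Bell)
```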
The sequence in the second column for \(m=2\) appears to correspond to Sequence A045891 in the OEIS [28], and we leave it to enterprising readers to try to prove this. No other row or column matches any OEIS entry, and neither does the diagonal reading of the array. One may also note that there are straightforward enumerative values in the cases \(m=1\) and \(n=1\). These correspond to the cases where the inversion graph is a _star graph_, with a central vertex incident to all other vertices (and no other edges). In both cases, the graph is acyclic, so the fibre size is simply given by the product formula in Theorem 2.8, and it is not difficult to get the direct enumerations of Table 5. In fact, these cases were already shown as applications of the product formula by Harris _et al._ [17, Section 3.1]. One possible direction of future research involves other families of regular permutations, or permutations where the inversion graph is regular. So far the following cases have been considered: complete graphs, complete bipartite graphs for \(n=2\), and star graphs. Another family of regular graphs which are permutation inversion graphs are the _complete split graphs_, and we briefly discuss this case. For \(m,n\geq 1\), the complete split graph \(S_{m,n}\) consists of a _clique_ (complete subgraph) of size \(n\) together with an _independent set_ (a set of vertices with no edges between them) of size \(m\), with one edge between any vertex in the clique and any vertex in the independent set (see e.g. [14]). There are two families of permutations whose inversion graphs are complete split graphs: the permutations \(\operatorname{split}^{m,n}:=(n+1)\cdots(n+m)n(n-1)\cdots 1\) and \(\overline{\operatorname{split}}^{m,n}:=(m+n)(m+n-1)\cdots(m+1)12\cdots m\). It would be interesting to study the MVP outcome fibres of these families of permutations. Note that if \(m=1\), we recover complete graphs and the permutations \(\operatorname{dec}^{n}\), and if \(n=1\), we recover the star graphs discussed above. Similarly to the complete bipartite case in Section 4, a good starting point may therefore be to first consider the \(m=2\) case. In this case the corresponding inversion graph is simply the complete graph with one of its edges removed. We first consider the permutation \(\operatorname{split}^{2,n-2}=(n-1)n(n-2)(n-3)\cdots 21\) for \(n\geq 3\), corresponding to the edge \((1,2)\) being removed in the inversion graph. Numerical evidence from Harris _et al._ [17, Section 5] suggests that this permutation may be the one which maximises the MVP fibre size, at least when \(n\geq 6\). We conducted our own numerical experimentation to support this, and found that for \(n=7,8,9,10,11\), the permutation split\({}^{2,n-2}\) above had a larger MVP fibre size than the decreasing permutation dec\({}^{n}\). 
Indeed, if anything, the gap between the fibre sizes seems to be growing, as shown in Table 6 below. **Conjecture 6.1**.: _For \(n\geq 6\), the permutation with the largest MVP fibre size is the split permutation split\({}^{2,n-2}:=(n-1)n(n-2)(n-3)\cdots 21\)._ This may perhaps seem counter-intuitive given our subgraph representation. Indeed we might expect that the permutation with the largest fibre is the one whose inversion graph has the most edges (since more edges means more possible subgraphs). One would therefore expect this to be the decreasing permutation. However, as seen in Section 3.2, valid subgraphs in the decreasing case must be matchings. In particular, there cannot be two edges \((i,j)\) and \((i,k)\) with \(i<j,k\). Numerical experimentation suggests that for the split permutation split\({}^{2,n-2}\), we can have such combinations of edges, at least for \(i=1,2\). For example, if \(\pi=3421\), the subgraph with edges \((1,3)\) and \((1,4)\) is valid (the associated parking function is \((1,1,1,2)\)). It seems plausible that these additional valid subgraphs are more than enough to compensate for the loss of the edge \((1,2)\) compared to the decreasing case. Now let us consider the split permutation \(\overline{\text{split}}^{2,n-2}:=n(n-1)\cdots 312\), for \(n\geq 3\). The corresponding inversion graph is the complete graph with edge \((n-1,n)\) removed. It turns out that in this case, the MVP fibres are enumerated by Motzkin numbers. We sketch a proof of this by exhibiting a bijection \(\Psi:\text{Valid}\left(G_{\text{dec}^{n}}\right)\to\text{Valid}\left(G_{\overline{\text{split}}^{2,n-2}}\right)\) for \(n\geq 3\) as follows. 1. If \(S\in\text{Valid}\left(G_{\text{dec}^{n}}\right)\) does not contain the edge \((n-1,n)\), we simply set \(\Psi(S)=S\) (as edge sets). 2. If \(S\in\text{Valid}\left(G_{\text{dec}^{n}}\right)\) contains the edge \((n-1,n)\), and the vertex \(n-2\) is isolated, we set \(\Psi(S)=S\setminus\{(n-1,n)\}\cup\{(n-2,n-1),(n-2,n)\}\). 3. If \(S\in\text{Valid}\left(G_{\text{dec}^{n}}\right)\) contains the edge \((n-1,n)\), and an edge \((i,n-2)\) for some \(i\), let \(S_{i}\) denote the set of edges with end-points strictly between \(i\) and \(n-2\). For \(e=(j,k)\in S_{i}\), we define \(e^{\prime}:=(j+1,k+1)\) to be \(e\) "shifted" rightwards by one column, and \(S^{\prime}_{i}=\{e^{\prime};\,e\in S_{i}\}\). We then set \(\Psi(S)=S\setminus\left(S_{i}\cup\{(i,n-2),(n-1,n)\}\right)\cup S^{\prime}_{i}\cup\{(i,n-1),(i,n)\}\). In words, we replace the edges \((i,n-2)\) and \((n-1,n)\) with the edges \((i,n-1)\) and \((i,n)\), and shift all edges which are nested inside \((i,n-2)\) by one column to the right. **Example 6.2**.: We take \(n=8\), and consider the \(1\)-subgraph \(S\) of \(G_{\text{dec}^{8}}\) whose edges are \((2,6)\), \((3,4)\), and \((7,8)\). Here we are in Case (3) above, so in the \(1\)-subgraph \(S^{\prime}=\Psi(S)\) we replace the edges \((2,6)\) and \((7,8)\) with edges \((2,7)\) and \((2,8)\), and shift the edge \((3,4)\) to the right, i.e. it becomes \((4,5)\). This is illustrated in Figure 24 below. Since \(S\) is a non-crossing matching, we know that \(S\) is valid by Theorem 3.6. To see that \(S^{\prime}\) is valid, we obtain its corresponding parking preference \(p:=\Psi_{\text{Sub}\to\text{PF}}\left(S^{\prime}\right)=(2,2,6,4,4,3,2,1)\). We can then check that \(\mathcal{O}_{\text{MVP}_{8}}\left(p\right)=87654312=\overline{\text{split}}^{2,6}\), as desired. 
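For concreteness, here is a short Python sketch (our own, not from the paper) of the MVP parking process as described in the proof of Lemma 5.10: an arriving car always claims its preferred spot, and the displaced car drives forward to the first unoccupied spot. It reproduces the outcome claimed in Example 6.2.

```python
def mvp_outcome(p):
    """MVP parking process on n = len(p) spots: car j always claims its
    preferred spot p[j-1]; a car already parked there is bumped and
    drives forward to the first unoccupied spot.  Returns the outcome
    word (car in spot 1, ..., car in spot n), or None if p fails to park."""
    n = len(p)
    spot_of = {}                        # spot -> car parked there
    for car, pref in enumerate(p, start=1):
        bumped = spot_of.get(pref)      # incumbent, if any
        spot_of[pref] = car             # MVP rule: the newcomer parks
        if bumped is not None:          # incumbent drives forward
            s = pref + 1
            while s in spot_of:
                s += 1
            if s > n:
                return None             # p is not a parking function
            spot_of[s] = bumped
    return tuple(spot_of[s] for s in range(1, n + 1))

# Example 6.2: p = (2,2,6,4,4,3,2,1) parks to the word 87654312
print(mvp_outcome((2, 2, 6, 4, 4, 3, 2, 1)))  # (8, 7, 6, 5, 4, 3, 1, 2)
```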
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \(n\) & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline \(|\mathcal{O}_{\text{MVP}_{n}}^{-1}\left(\text{dec}^{n}\right)|\) & 4 & 9 & 21 & 51 & 127 & 323 & 835 & 2188 & 5798 \\ \hline \(|\mathcal{O}_{\text{MVP}_{n}}^{-1}\left(\text{split}^{2,n-2}\right)|\) & 3 & 8 & 20 & 51 & 131 & 341 & 897 & 2383 & 6385 \\ \end{tabular} \end{table} Table 6. Comparing fibre sizes for the decreasing and split permutations. We state the following theorem without proof. It can be shown through a careful analysis of valid \(1\)-subgraphs of \(G_{\overline{\text{split}}^{2,n-2}}\), similar to the proofs in Sections 3.2 or 4. The key is that in \(S^{\prime}\), the rightwards shift creates an empty space \(i+1\) for car \(1\) to park temporarily when it is bumped from spot \(i\) by car \(2\). **Theorem 6.3**.: _For any \(n\geq 3\), the map \(\Psi:\operatorname{Valid}\left(G_{\operatorname{dec}^{n}}\right)\to\operatorname{Valid}\left(G_{\overline{\text{split}}^{2,n-2}}\right)\) is a bijection. In particular, we have \(\left|\mathcal{O}_{\operatorname{MVP}_{n}}^{-1}\left(\overline{\text{split}}^{2,n-2}\right)\right|=\left|\mathcal{O}_{\operatorname{MVP}_{n}}^{-1}\left(\operatorname{dec}^{n}\right)\right|=\left|\operatorname{Motz}_{n}\right|\)._ Note that while the MVP fibres of \(\overline{\text{split}}^{2,n-2}\) are enumerated by Motzkin numbers, the parking functions in those fibres are not Motzkin parking functions as defined in Section 3.1. Indeed, in the parking function \(p=(2,2,6,4,4,3,2,1)\in\mathcal{O}_{\operatorname{MVP}_{8}}^{-1}\left(\overline{\text{split}}^{2,6}\right)\) from Example 6.2, the spot \(2\) is preferred by three cars (equivalently the vertex \(2\) has two right-edges in the corresponding subgraph), so \(p\) is not a Motzkin parking function. ## Acknowledgments The research leading to these results was partially supported by the National Natural Science Foundation of China (NSFC) under Grant Agreement No 12101505. The authors have no competing interests to declare that are relevant to the content of this article. The code that was used to generate the experimental data in this paper is available from the corresponding author on reasonable request.
2309.10482
On the pulsar Y-point
The pulsar magnetosphere is divided into a corotating region of closed field lines surrounded by open field lines that emanate from the two poles of the star, extend to infinity and are separated by an equatorial current sheet. The three regions meet at a magnetospheric Y-point. In steady-state solutions of the ideal force-free magnetosphere, the Y-point may lie at any distance inside the light cylinder. Time-dependent force-free simulations, however, develop closed-line regions that extend all the way to the light cylinder. On the other hand, particle (PIC) solutions consistently develop smaller closed-line regions. In order to understand this effect, we solve the pulsar equation with an improved numerical method. We show that the total electromagnetic energy stored in the ideal force-free magnetosphere manifests a subtle minimum when the closed-line region extends to only 90% of the light cylinder, and thus argue that the system will spontaneously choose this particular configuration. Furthermore, we argue that the intersection of the corotating region with the equatorial current sheet is at right angles, literally leading to a T-point.
I. Contopoulos, D. Ntotsikas, K. N. Gourgouliatos
2023-09-19T09:53:02Z
http://arxiv.org/abs/2309.10482v1
# On the pulsar Y-point ###### Abstract The pulsar magnetosphere is divided into a corotating region of closed field lines surrounded by open field lines that emanate from the two poles of the star, extend to infinity and are separated by an equatorial current sheet. The three regions meet at a magnetospheric Y-point. In steady-state solutions of the ideal force-free magnetosphere, the Y-point may lie at any distance inside the light cylinder. Time-dependent force-free simulations, however, develop closed-line regions that extend all the way to the light cylinder. On the other hand, particle (PIC) solutions consistently develop smaller closed-line regions. In order to understand this effect, we solve the pulsar equation with an improved numerical method. We show that the total electromagnetic energy stored in the ideal force-free magnetosphere manifests a subtle minimum when the closed-line region extends to only 90% of the light cylinder, and thus argue that the system will spontaneously choose this particular configuration. Furthermore, we argue that the intersection of the corotating region with the equatorial current sheet is at right angles, literally leading to a T-point. keywords: pulsars - magnetic fields ## 1 Pulsar spindown and the extent of the closed-line region Standard dipolar pulsar magnetospheres are divided into three regions: a region of untwisted closed field lines (hereafter region I), and two regions of azimuthally backward twisted open field lines (hereafter regions II and III) separated by an equatorial current sheet discontinuity (Kalapotharakos et al. (2012), Stefanou et al. (2023)). The closed-line region is separated from regions II and III by a separatrix current sheet (see figure 1). The equatorial current sheet joins the separatrix current sheet at a singular line which in a meridional magnetospheric cross section manifests itself as a Y-point. In the present discussion we will only consider axisymmetric magnetospheres, but our results may also be generalized for oblique rotators. The electromagnetic energy loss rate \(L\) of the axisymmetric rotator is found numerically to be equal to \[L\approx\frac{\Omega^{2}\Psi_{\rm open}^{2}}{6\pi^{2}c}\approx\frac{\Omega^{2}B_{\rm s}^{2}r_{\rm s}^{6}}{4cR_{\rm Y}^{2}}=\frac{1}{x_{\rm Y}^{2}}L_{\rm canonical} \tag{1}\] (e.g. Contopoulos (2005), Timokhin (2006), hereafter T06, Kalapotharakos & Contopoulos (2009)). Here, \(\Omega\) is the angular velocity of stellar rotation, \(\Psi_{\rm open}\equiv\pi R_{\rm pc}^{2}B_{\rm s}\) is the amount of open magnetic flux that originates in the two polar caps of cylindrical radius \(R_{\rm pc}\approx\sqrt{3/2}\,\sqrt{r_{\rm s}^{3}/R_{\rm Y}}\) (Spitkovsky (2006)), \(B_{\rm s}\) is the polar value of the dipole magnetic field, \(r_{\rm s}\) is the stellar radius, \(R_{\rm Y}\equiv x_{\rm Y}R_{\rm LC}\) is the distance of the Y-point beyond which magnetic field lines open up to infinity, \(R_{\rm LC}\equiv c/\Omega\) is the radius of the light cylinder, and \(L_{\rm canonical}\equiv\Omega^{2}B_{\rm s}^{2}r_{\rm s}^{6}/(4cR_{\rm LC}^{2})=\Omega^{4}B_{\rm s}^{2}r_{\rm s}^{6}/(4c^{3})\). In general, \(x_{\rm Y}\leq 1\). Notice also that \(\Omega^{4}B_{\rm s}^{2}r_{\rm s}^{6}/(6c^{3})\) is the electromagnetic energy loss rate of an orthogonal dipole rotator in vacuum. Eq. (1) is very important. It implies that the pulsar spindown rate depends strongly on the location of the Y-point. If for some reason the Y-point is located a significant distance inside the light cylinder, namely \(x_{\rm Y}\ll 1\), eq. (1) leads to a significant overestimation of the stellar magnetic field \(B_{\rm s}\) (as e.g. in Harding et al. (1999)). Steady-state Force-Free Electrodynamic (hereafter FFE) and Magneto-Hydrodynamic (hereafter MHD) solutions of the ideal force-free magnetosphere have shown that the closed-line region is free to extend up to any distance inside the light cylinder (i.e. \(x_{\rm Y}\) can have any value between \(r_{\rm s}/R_{\rm LC}\) and 1). Time-dependent solutions, however, always relax to a solution with the closed-line region extending as close to the light cylinder as numerically possible (as we will see next, several physical quantities diverge when the Y-point lies exactly on the light cylinder (Parfrey et al. (2012), Tchekhovskoy et al. (2013)). Figure 1: Schematic of magnetospheric open and closed line regions I, II, III. All three regions are separated by electric current sheets and meet at the so-called Y-point. 
If for some reason the Y-point is located a significant distance inside the light cylinder, namely \(x_{\rm Y}\ll 1\), eq. (1) leads to a significant overestimation of the stellar magnetic field \(B_{\rm s}\) (as e.g. in Harding et al. (1999)). Steady-state Force-Free Electrodynamic (hereafter FFE) and Magneto-Hydrodynamic (hereafter MHD) solutions of the ideal force-free magnetosphere have shown that the closed-line region is free to extend up to any distance inside the light cylinder (i.e. \(x_{\rm Y}\) can have any value between \(r_{\rm s}/R_{\rm LC}\) and 1). Time-dependent solutions, however, always relax to a solution with the closed-line region extending as close to the light cylinder as numerically possible (as we will see next, several physical quantities diverge when the Y-point lies exactly on the light cylinder (Parfrey et al. (2012), Figure 1: Schematic of magnetospheric open and closed line regions I, II, III. All three regions are separated by electric current sheets and meet at the so-called Y-point. Tchekhovskoy et al. (2013)). Over the past 10 years, a new type of numerical simulations has appeared in the literature, namely global (so-called 'ab initio') PIC simulations (Philippov & Spitkovsky (2014), Philippov et al. (2015a), Philippov et al. (2015b)). These show a consistently smaller closed-line region that extends only up to a fraction of the light cylinder radius. The extent of the closed-line region affects the pulsar spindown rate, thus, it is imperative to understand the origin of this effect. It has been theorized that this may be a numerical artifact (either the simulation has not evolved long enough to relax to a steady-state, either the inertia of the PIC particles is artificially high, either scale separation is not as large in simulations as in reality, e.g. skin depth and Larmor radii at the light cylinder vs magnetospheric size). We instead will argue in the present paper that this effect may be understood by a more physical and detailed treatment of the return current sheet in the pulsar equation. We will show that the total electromagnetic energy stored in the ideal force-free magnetosphere manifests a subtle minimum when the closed-line region extends up to 93% of the light cylinder. We thus argue that the system will spontaneously choose this particular configuration which is close to the ones obtained in global PIC simulations. We will next investigate in detail the Y-point. ## 2 The Y-point is in fact a T-point We will be guided by Uzdensky (2003) (hereafter U03), but we will also take into account what we have learned about pulsar magnetospheres over the past 20 years. We will consider only the axisymmetric case. We know today that the separatrix between open and closed field lines contains an electric current sheet which closes the global magnetospheric electric current circuit. This was not yet clear at the time of U03. This implies that the azimuthal magnetic field \(B_{\phi}\) is non-zero right outside the Y-point, and zero inside the closed line region. Force-balance in a relativistic force-free magnetosphere implies that (Goldreich & Julian (1969)) \[\rho_{e}{\bf E}+{\bf J}\times{\bf B}=0\;. \tag{2}\] Here, \(\rho_{e}\equiv\nabla\cdot{\bf E}\), and \({\bf J}=\nabla\times{\bf B}\) (in steady state). U03 (see also Lyubarky 1990) integrated eq. (2) accross the separatrix current sheet. 
This yields that \[(B^{2}-E^{2})_{I}=(B^{2}-E^{2})_{II} \tag{3}\] or equivalently, \[(B_{p})_{I}^{2}=(B_{p})_{II}^{2}+\frac{(B_{\phi}^{2})_{II}}{1-x^{2}}\neq 0\;, \tag{4}\] where \(B_{p}\) denotes the poloidal magnetic field component in each region across the separatrix at the Y-point, \(E_{p}\equiv xB_{p}\) is the poloidal component of the electric field, and \(x\equiv R/R_{\rm LC}\) is the cylindrical radius in units of the radius of the light cylinder. The toroidal magnetic field component just outside the Y-point is given by \[|B_{\phi}|_{II}(x_{\rm Y})=\frac{I_{\rm pc}}{2cx_{\rm Y}R_{\rm LC}}=\frac{3}{8}\frac{B_{\rm s}r_{\rm s}^{3}}{R_{\rm LC}^{3}x_{\rm Y}^{2}}\;, \tag{5}\] where \(I_{\rm pc}\approx\rho_{e}\pi R_{\rm pc}^{2}c=\Omega B_{\rm s}R_{\rm pc}^{2}/2\) is the total electric current flowing through each of the pulsar polar caps. The magnetic field in region I must obey the pulsar equation without poloidal electric current, namely, \[(1-x^{2})\left(\frac{\partial^{2}\Psi}{\partial x^{2}}+\frac{\partial^{2}\Psi}{\partial z^{2}}\right)-\frac{1+x^{2}}{x}\frac{\partial\Psi}{\partial x}=0\;. \tag{6}\] Spatial cylindrical coordinates \(x\) and \(z\) are expressed here in units of the light cylinder radius \(R_{\rm LC}\). The magnetic field components can be written in terms of the magnetic flux function \(\Psi(x,z)\) as \[(B_{x})_{I}=-\frac{1}{2\pi R_{\rm LC}^{2}}\frac{1}{x}\frac{\partial\Psi}{\partial z}\;,\ (B_{z})_{I}=\frac{1}{2\pi R_{\rm LC}^{2}}\frac{1}{x}\frac{\partial\Psi}{\partial x}\;,\ (B_{\phi})_{I}=0\;. \tag{7}\] Following U03, we will introduce polar coordinates \((r,\theta)\) around the Y-point, such that \[x=x_{\rm Y}-r\cos\theta\;,\ z=r\sin\theta\;. \tag{8}\] In these coordinates, we can rewrite the pulsar equation in region I as \[(1-x_{\rm Y}^{2})\left(\frac{\partial^{2}\Psi}{\partial r^{2}}+\frac{1}{r}\frac{\partial\Psi}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}\Psi}{\partial\theta^{2}}\right)-\frac{1+x_{\rm Y}^{2}}{x_{\rm Y}}\left(\cos\theta\frac{\partial\Psi}{\partial r}-\frac{\sin\theta}{r}\frac{\partial\Psi}{\partial\theta}\right)=0\;. \tag{9}\] Very close to the Y-point, we will make the self-similar Ansatz that \[\Psi_{I}\equiv r^{\alpha}f(\theta) \tag{10}\] and therefore, \[(B_{r})_{I}=\frac{1}{2\pi R_{\rm LC}^{2}}\,r^{\alpha-1}f^{\prime}(\theta)\;,\ (B_{\theta})_{I}=-\frac{1}{2\pi R_{\rm LC}^{2}}\,\alpha r^{\alpha-1}f(\theta)\;. \tag{11}\] We will also assume that \(\Psi=0\) along the separatrix. Obviously, in order for \(B_{p}\) to be finite in region I all the way down to \(r\to 0\) as required by eq. (4), \(\alpha\) must be equal to 1. In the limit \(r\to 0\), eq. (9) then becomes \[f^{\prime\prime}(\theta)=-f(\theta)\;. \tag{12}\] Since at \(\theta=\pi\) the field crosses the equator vertically, and thus \(B_{r}(\theta=\pi)=0\), this yields that \(f^{\prime}(\theta=\pi)=0\), which yields the solution \[f(\theta)\propto\cos(\theta)\;. \tag{13}\] Obviously, \(\Psi=0\) where \(f=0\), thus the separatrix lies at \[\theta_{t}=\pi/2\;. \tag{14}\] In other words, technically, the Y-point is a T-point as can be seen in figure 2\({}^{1}\). (Footnote 1: Notice that Gruzinov (2005) assumed instead that \(\alpha=0.5\), from which he derived \(\theta_{t}=77.3^{\circ}\).) Obviously, just outside a T-point, \[(B_{p}^{2})_{II}(x_{\rm Y})=0 \tag{15}\] and eq. (4) then yields that \[|B_{p}|_{I}(x_{\rm Y})=\frac{|B_{\phi}|_{II}(x_{\rm Y})}{\sqrt{1-x_{\rm Y}^{2}}}=\frac{3B_{\rm s}}{8}\frac{r_{\rm s}^{3}}{R_{\rm LC}^{3}}\frac{1}{x_{\rm Y}^{2}}\frac{1}{\sqrt{1-x_{\rm Y}^{2}}}\;. \tag{16}\]
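The leading-order behaviour of the self-similar Ansatz is easy to verify symbolically. The following sympy sketch is our own check, not part of the paper: it substitutes \(\Psi=rf(\theta)\) (i.e. \(\alpha=1\)) into the left-hand side of eq. (9), confirms that the singular \(O(1/r)\) part forces \(f^{\prime\prime}+f=0\), and checks that \(f(\theta)=\cos\theta\) satisfies \(f^{\prime}(\pi)=0\) and vanishes at \(\theta_{t}=\pi/2\).

```python
import sympy as sp

r, theta, xY = sp.symbols('r theta x_Y', positive=True)
f = sp.Function('f')
Psi = r * f(theta)              # self-similar Ansatz (10) with alpha = 1

# left-hand side of eq. (9)
lhs = ((1 - xY**2) * (Psi.diff(r, 2) + Psi.diff(r) / r
                      + Psi.diff(theta, 2) / r**2)
       - (1 + xY**2) / xY * (sp.cos(theta) * Psi.diff(r)
                             - sp.sin(theta) / r * Psi.diff(theta)))

# keep only the singular O(1/r) part: multiply by r and set r -> 0
singular = sp.expand(r * lhs).subs(r, 0)
print(sp.simplify(singular))    # proportional to f(theta) + f''(theta)

# f = cos(theta) solves f'' + f = 0, obeys f'(pi) = 0,
# and vanishes at the separatrix angle theta_t = pi/2
g = sp.cos(theta)
print(g.diff(theta, 2) + g,                # 0
      g.diff(theta).subs(theta, sp.pi),    # 0
      g.subs(theta, sp.pi / 2))            # 0
```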
Global PIC simulations of Hu & Beloborodov (2022) and Hakobyan et al. (2023) show a different picture around the Y-point. In particular, instead of it being a T-point, it seems to locally protrude outwards. We believe the answer is that due to the presence of an electric current sheet immediately outside the closed-line region, there is a point exactly along the equator where \(B_{\phi}=0\). The above analysis obviously breaks down around that point, and the closed line region protrudes outwards like a 'hernia'. The height of the protruding region is equal to the thickness of the equatorial current sheet. This effect is seen clearly in the high-resolution PIC simulations of Hu & Beloborodov (2022) which resolve in detail the equatorial current sheet. It is interesting that this effect has been seen before in Gourgouliatos & Lynden-Bell (2018) (figure 2 top left) and Ntotsikas & Gourgouliatos (2023). There is a simple explanation why only these solutions of the pulsar equation show this effect: these are the only solutions of the pulsar equation known in the literature where the return current is placed _inside_ the last open field line \(\Psi=\Psi_{\rm open}\), in the regions of open field lines II and III. When solving the pulsar equation, the distribution of the poloidal current along the open magnetic field lines is determined from the condition of smooth crossing of the light cylinder singular surface (Contopoulos et al. (1999), hereafter CKF). This procedure, however, does not take into account the return current along the separatrix which must be specifically dealt with. Mathematically, the return current corresponds to an infinitely abrupt jump of the magnetospheric electric current \(I(\Psi)\) from \(I_{\rm pc}\) to zero. In practice, it may be viewed as half a Gaussian distribution of height \(I_{\rm pc}\) and width \(\delta\Psi\ll\Psi_{\rm open}\). In that narrow region, the force-free conditions implied by the pulsar equation do not apply, thus this narrow region is problematic in the context of the pulsar equation. CKF first implemented a current distribution along closed lines \(\Psi_{\rm open}\leq\Psi\leq\Psi_{\rm open}+\delta\Psi\), with \(\delta\Psi\ll\Psi_{\rm open}\). Obviously, the specified electric current does not cross the light cylinder. It is only a mathematical approximation of the return current distribution that allows us to solve the pulsar equation. Gruzinov (2005) and T06 followed a similar approach. The CKF prescription guaranteed that the last closed line without poloidal electric current crosses the equator vertically (see e.g. figure 4 of T06 and the right panel in figure 3) and does not form the protrusion observed in recent simulations. Gourgouliatos & Lynden-Bell (2018) were the first to place it _inside_ the open line region, \[\Psi_{\rm open}-\delta\Psi\leq\Psi\leq\Psi_{\rm open}. \tag{17}\] In these solutions, the poloidal electric current right outside the Y-point on the equator is equal to zero, hence the equatorial field protrusion has nothing to do with inertia. In fact, the region of open poloidal flux over which the return current sheet flows is equal to \(\delta\Psi\approx 0.01\Psi_{\rm open}\) in Gourgouliatos & Lynden-Bell (2018) and \(\delta\Psi=0.005\Psi_{\rm open}\) in Ntotsikas & Gourgouliatos (2023), hence the corresponding protrusions are correspondingly thinner. As we have spread the current sheet into a narrow layer, we find that the regularisation condition is not completely fulfilled in this region. 
Because of this, some features may appear just outside the light cylinder, in the form of magnetic islands. These are negligible since they affect the solution only in a thin layer, i.e. one that corresponds to \(\delta\Psi=0.005\Psi_{\rm open}\), but may become significant for solutions with larger \(\delta\Psi\). We obtained new high resolution solutions of the pulsar equation with the return current imposed over the last open field lines above the separatrix between open and closed field lines. We have used an elliptic solver (utilizing the Successive Overrelaxation Method) in the computational domain \(0<x<2\) and \(0<z<2\) with a resolution of 800 points in \(x\) equally spaced inside the light cylinder, 800 points in \(x\) equally spaced outside the light cylinder, and 800 equally spaced points in \(z\). We find that the solution converges after \(2\times 10^{8}\) iterations for the simulations where \(R_{Y}=0.93\) and \(R_{Y}=0.83\) (in principle for any simulation where the current sheet is placed at \(R_{Y}\leq 0.98\)), while for the simulations with the current sheet placed at \(R_{Y}=0.99\) and \(R_{Y}=1.0\) the solution converges after \(10^{9}\) iterations. Our resolution is higher than CKF but lower than T06. The distribution of the magnetospheric electric current \(I(\Psi)\) was iteratively adjusted by the condition of smooth crossing of the light cylinder as described in Gourgouliatos & Lynden-Bell (2018). To account for the return current flowing on the separatrix between the open and closed field lines, we approximated the \(\delta\)-function return current in the open-line region of eq. (17) by a narrow Gaussian of height \(I_{\rm pc}\) and width \(\delta\Psi=0.005\Psi_{\rm open}\) centered at \(\Psi=\Psi_{\rm open}\). We found that indeed, the Y-point is clearly a T-point for all values of \(x_{\rm Y}\) (figure 4), unlike the solutions shown in figure 4 of T06 which develop clear Y-points. We understand this discrepancy by comparison with the right panel of figure 3, where a separatrix without a jump in \(B_{\phi}\) crosses the equator at a nonzero angle, while the innermost closed field line where the imposed return current flows crosses the equator vertically. Unfortunately, as we argued above, such a current closure configuration is unphysical. It is interesting that time-dependent force-free and PIC runs yield oblique Y-points, not T-points. We suspect that all such runs contain a very thick current sheet where what we superficially observe as a Y-point is in fact the thick and extended protrusion shown schematically in the right panel of figure 2. Figure 3: Schematic of idealized Y-point with infinitely thin current sheets vs Y-point with current closure in the closed-line region as in CKF. The first one is viable only if there is no jump in \(B_{\phi}\) across the separatrix (i.e. \(B_{\phi}=0\) or constant everywhere around the Y-point), thus it does not apply to the pulsar magnetosphere as we understand it today. The second one is unrealistic (current closure must take place along open field lines, not along closed lines). The field crosses the equator vertically inside the current sheet, and at a non-vertical angle outside. Figure 2: Schematic of idealized T-point with infinitely thin current sheets vs realistic T-point with equatorial protrusion. In the idealized T-point, \(B_{z}\) in the closed-line region balances \(B_{\phi}\) in the open-line region. In the realistic T-point, \(B_{\phi}=0\) in the interior of the equatorial current sheet, and therefore, the closed-line region creates a protrusion in the equator where \(B_{z}\to 0\). ## 3 A subtle energy minimum According to Contopoulos (2005) and T06, the Y-point can lie anywhere inside the light cylinder (it can certainly not lie outside). This effect has been corroborated by a study of the total magnetospheric energy content as a function of the position of the Y-point (see figure 10 of T06). 
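For readers who want to experiment with this kind of calculation, the following Python sketch is a rough toy stand-in for the SOR relaxation described above; it is our own illustration, far from the production setup of the paper: coarse grid, domain kept strictly inside the light cylinder so the \((1-x^{2})\) coefficient never changes sign, vacuum-dipole Dirichlet boundary data, and no return current or iterative adjustment of \(I(\Psi)\).

```python
import numpy as np

# Toy SOR relaxation of the current-free pulsar equation (6):
#   (1 - x^2)(Psi_xx + Psi_zz) - ((1 + x^2)/x) Psi_x = 0
nx, nz, omega = 41, 41, 1.5
x = np.linspace(0.1, 0.9, nx)          # avoid axis and light cylinder
z = np.linspace(0.0, 0.8, nz)
dx, dz = x[1] - x[0], z[1] - z[0]

def psi_dip(xx, zz):
    # flux function of a point dipole (up to normalization)
    return xx**2 / (xx**2 + zz**2) ** 1.5

psi = psi_dip(*np.meshgrid(x, z, indexing="ij"))  # guess + boundary data

for sweep in range(5000):
    err = 0.0
    for i in range(1, nx - 1):
        a = 1.0 - x[i] ** 2            # multiplies the Laplacian
        b = (1.0 + x[i] ** 2) / x[i]   # multiplies Psi_x
        for j in range(1, nz - 1):
            new = (a * ((psi[i + 1, j] + psi[i - 1, j]) / dx**2
                        + (psi[i, j + 1] + psi[i, j - 1]) / dz**2)
                   - b * (psi[i + 1, j] - psi[i - 1, j]) / (2 * dx)
                   ) / (2 * a * (1 / dx**2 + 1 / dz**2))
            err = max(err, abs(new - psi[i, j]))
            psi[i, j] += omega * (new - psi[i, j])   # SOR update
    if err < 1e-7:
        break

print(f"stopped after {sweep + 1} sweeps, last max update {err:.1e}")
```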
Unfortunately, the analysis of what happens around the Y-point is subtle, and requires a more careful numerical treatment with high resolution. Eq. (16) tells us that \(|B_{p}(x_{\rm Y})|_{I}\) decreases \(\propto 1/x_{\rm Y}^{2}\) as the Y-point is moved outwards, but beyond some distance it increases again as \(x_{\rm Y}\to 1\) (see figure 5). This makes us suspect that, indeed, the electromagnetic energy of the magnetosphere increases as \(x_{\rm Y}\) moves beyond some distance and some part of the open-line region that contains normal-valued \(B_{p}\) and \(B_{\phi}\) is replaced by a local region of enhanced poloidal magnetic field \(\propto 1/\sqrt{1-x^{2}}\). We performed this detailed calculation and found a subtle local minimum of the integral \[W\equiv\int\limits_{r_{\rm s}}^{2R_{\rm LC}}\int\limits_{0}^{2R_{\rm LC}}(B^{2}+E^{2})R_{\rm LC}^{3}\ 2\pi x\ {\rm d}x\ {\rm d}z \tag{18}\] for \(x_{\rm Y}=0.93\). Notice that we have arbitrarily chosen an inner boundary of \(x_{\rm in}=r_{\rm s}\equiv 0.1R_{\rm LC}\) and \(z_{\rm in}=0\), and an outer boundary of \(x_{\rm out}=z_{\rm out}=2R_{\rm LC}\). The local energy minimum is very subtle because it requires a detailed high-resolution treatment of the region around the Y-point when the latter approaches the light cylinder. The highest-resolution to-date solution of the pulsar equation (T06) missed this effect because it explicitly did not include the equatorial region around \(z=0\) in the energy integral, where most of the increase in eq. (16) takes place. One further reason is the introduction of the separatrix return current _inside_ the closed-line region, whereas in reality it flows _outside_. This artificial effect essentially removes from the calculation of the energy integral the interesting region adjacent to the separatrix where the poloidal magnetic field of the closed-line region increases dramatically. Without a detailed treatment of the region around the Y-point, the electromagnetic energy integral in eq. (18) is found to be a decreasing function of \(x_{\rm Y}\), and it is therefore natural to conclude that the pulsar magnetosphere will attain the minimum energy configuration that corresponds to its maximum \(x_{\rm Y}\) value, namely \(x_{\rm Y}=1\). When the region around the Y-point is considered more carefully, as in the present paper, the poloidal field divergence at the tip of the closed-line region becomes much more dramatic. Let us calculate here the energy integral of eq. (18) at the tip of the closed-line region inside the Y-point. This yields \[W_{\rm Y}\equiv\int\limits^{x_{\rm Y}}\int\limits_{z=-h(x)}^{h(x)}(B^{2}+E^{2})\ 2\pi R_{\rm LC}^{3}x\ {\rm d}z\ {\rm d}x\,, \tag{19}\] where \(h(x)\) is the height of the tip of the closed-line region as \(x\to x_{\rm Y}\). If the tip of the closed-line region is a Y-point at some non-vertical angle \(\theta_{\rm Y}\) (e.g. \(\theta_{\rm Y}=77.3^{\circ}\) as calculated by Gruzinov (2005)), then \(h(x)=(x_{\rm Y}-x)\tan(\theta_{\rm Y})\), whereas if it is a T-point as first shown by U03, then \(h(x)=h_{\rm Y}=\) constant. Therefore, if \(B(x)\sim E(x)\sim B_{o}/\sqrt{1-x}\) as \(x\to x_{\rm Y}\to 1\), eq. 
(19) yields \[W_{\rm Y}=8\pi R_{\rm LC}^{3}B_{o}^{2}\tan(\theta_{\rm Y})\ \delta x=\mbox{finite for a Y-point}\,, \tag{20}\] \[\approx-8\pi R_{\rm LC}^{3}B_{o}^{2}h_{\rm Y}\ln(1-x_{\rm Y})=\mbox{infinite for a T-point as }x_{\rm Y}\to 1\;.\] In other words, the electromagnetic energy contained in the tip of the closed-line region diverges due to the divergence of \(B_{o}(x\to 1-,z=0)\). This is the reason the Y-point must lie a finite distance inside the light cylinder. Nevertheless, while the global energy argument is certainly interesting, it is not clear to us what would keep the Y-point from moving towards the light cylinder via field line reconnection. We suspect that, even if such field reconnection takes place, it will be locally favorable to form and eject plasmoids from the Y-point as seen in the Hu & Beloborodov (2022) numerical simulations. Plasmoid formation at the Y-point for various positions of the Y-point needs further investigation. Figure 4: High-resolution solutions for various values of \(x_{\rm Y}\leq 1\). The return current is imposed to flow along the last open field lines in regions II & III. It is clearly seen that the Y-point is in fact a T-point. ## 4 Conclusions In this short letter we corrected some common misconceptions about the shape and the position of the magnetospheric Y-point. We showed that the pulsar magnetosphere manifests a subtle global electromagnetic energy minimum when its closed-line region ends at about 90% of the light cylinder distance. This explains a result seen in all global PIC numerical simulations of the past decade. This subtle modification of the pulsar magnetosphere does not affect significantly its main properties, namely its electromagnetic energy loss and the resulting pulsar spin-down rate. It also does not explain the divergence of the pulsar braking index \(n\) from its canonical dipolar field value (according to eq. 1, for a fixed value of \(x_{\rm Y}\), the electromagnetic energy loss rate remains proportional to \(\Omega^{4}\), hence \(n=3\)). ## Acknowledgements We would like to thank the International Space Science Institute (ISSI) for providing financial support for the organization of the meeting of ISSI Team No 459 led by I. Contopoulos and D. Kazanas where the issues addressed in this work were first discussed. ## Data Availability Statement The data underlying this article will be shared on reasonable request to the corresponding author.
2305.00544
On the State Estimation Error of "Beam-Pointing'' Channels: The Binary Case
Sensing capabilities as an integral part of the network have been identified as a novel feature of sixth-generation (6G) wireless networks. As a key driver, millimeterwave (mmWave) communication largely boosts speed, capacities, and connectivity. In order to maximize the potential of mmWave communication, precise and fast beam acquisition (BA) is crucial, since it compensates for a high pathloss and provides a large beamforming gain. Practically, the angle-of-departure (AoD) remains almost constant over numerous consecutive time slots, the backscatter signal experiences some delay, and the hardware is restricted under the peak power constraint. This work captures these main features by a simple binary beam-pointing (BBP) channel model with in-block memory (iBM) [1], peak cost constraint, and one unit-delayed feedback. In particular, we focus on the sensing capabilities of such a model and characterize the performance of the BA process in terms of the Hamming distortion of the estimated channel state. We encode the position of the AoD and derive the minimum distortion of the BBP channel under the peak cost constraint with no communication constraint. Our previous work [2] proposed a joint communication and sensing (JCAS) algorithm, which achieves the capacity of the same channel model. Herein, we show that by employing this JCAS transmission strategy, optimal data communication and channel estimation can be accomplished simultaneously. This yields the complete characterization of the capacity-distortion tradeoff for this model.
Siyao Li, Giuseppe Caire
2023-04-30T18:08:59Z
http://arxiv.org/abs/2305.00544v1
# On the State Estimation Error of "Beam-Pointing" Channels: The Binary Case ###### Abstract Sensing capabilities as an integral part of the network have been identified as a novel feature of sixth-generation (6G) wireless networks. As a key driver, millimeter-wave (mmWave) communication largely boosts speed, capacities, and connectivity. In order to maximize the potential of mmWave communication, precise and fast beam acquisition (BA) is crucial, since it compensates for a high pathloss and provides a large beamforming gain. Practically, the angle-of-departure (AoD) remains almost constant over numerous consecutive time slots, the backscatter signal experiences some delay, and the hardware is restricted under the peak power constraint. This work captures these main features by a simple binary beam-pointing (BBP) channel model with in-block memory (iBM) [1], peak cost constraint, and one unit-delayed feedback. In particular, we focus on the sensing capabilities of such a model and characterize the performance of the BA process in terms of the Hamming distortion of the estimated channel state. We encode the position of the AoD and derive the minimum distortion of the BBP channel under the peak cost constraint with no communication constraint. Our previous work [2] proposed a joint communication and sensing (JCAS) algorithm, which achieves the capacity of the same channel model. Herein, we show that by employing this JCAS transmission strategy, optimal data communication and channel estimation can be accomplished simultaneously. This yields the complete characterization of the capacity-distortion tradeoff for this model. ## I Introduction With the evolution of 4G to 5G, the spectrum allocations have expanded towards millimeter-wave (mmWave) bands [3]. This trend will continue, and communication spectra in the sub-Terahertz region will likely be available as some of the frequency bands for 6G deployments. With the introduction of these new frequencies, the potential for very accurate sensing based on radar-like technology arises [4, 5, 6]. That is, reflections of transmitted signals are received in the network and processed to yield spatial knowledge of the physical surroundings. At these frequencies, the communication network must employ beamforming of the transmitted signals to concentrate and direct the signal energy to a specific geographical area where the intended receiver is located [7], which is strongly affected by the _initial beam acquisition_ (BA) phase [8]. In general, standard BA schemes are based on some "beam-sweeping" phase, i.e., the base station (BS) sends pilot signals in all the possible transmission directions at regular intervals, allowing the user equipment (UE) to identify the best beam index and feed this back via some hand-shaking protocol. Works have studied the BA problem in various ways (e.g., see [9, 10, 11, 12, 13, 14, 15] and references therein). However, transmission efficiency is not fully realized in these works, as they isolate the BA phase and the data communication phase. This separation is known to be sub-optimal from the information-theoretic perspective [16]. For future sensing, the main advantage of the communication network is that most of the infrastructure is already in place with transmitter/receiver (Tx/Rx) nodes. This provides full area coverage as well as a good interconnection between nodes. Hence, the sensing can be provided almost 'for free'. 
To achieve the full potential of mmWave communication, numerous recent works exploit "joint communication and sensing" (JCAS) (see e.g. [17, 18, 19, 20] and references therein), where communication can take place while the angle-of-departure (AoD) is being estimated via the backscatter signal. From physical considerations, it is clear that the AoD remains almost constant over a large number of consecutive time slots, which yields a state-dependent channel with memory. Additionally, the backscatter signal can be modeled as causal feedback. In this work, we investigate this scenario with JCAS from an information-theoretic viewpoint. Considering channels with _in-block memory_ (iBM) [1], i.e., where the state remains constant for blocks of \(L\) time slots and changes in an independent and identically distributed (i.i.d.) fashion from block to block, the state estimation error is very hard to evaluate since it requires an optimization over length-\(L\) sequences of conditional input distributions. _Contribution:_ In this paper, we consider a binary beam-pointing (BBP) channel with iBM and one-unit delayed feedback under the peak cost constraint. In our previous work [2], we characterized the capacity of this BBP channel. Herein, we are interested in the ability of the BS to "locate" the target AoD, quantified by the distortion error achieved by the BS state estimator at the end of each block. We refer to "distortion" as the (average) error with which the transmitter is able to determine the channel state (i.e., the AoD of the receiver) at the end of each block, and characterize the minimum distortion under the peak cost constraint. It is interesting to see that this minimum distortion can be obtained by deploying the capacity-achieving transmission strategy in [2]. We therefore obtain a complete capacity-distortion region of the considered channel model, revealing that for this model the tradeoff is "trivial" in the sense that the optimal communication rate and the minimum state estimation distortion can be achieved at the same time. This corroborates the general intuition that JCAS yields excellent sensing capabilities without compromising capacity. _Notations:_ For an integer \(n\), we let \([n]=\{1,\cdots,n\}\) and \([n_{1}:n_{2}]=\{n_{1},\cdots,n_{2}\}\) for some integers \(n_{1}<n_{2}\). \(\underline{X}\) denotes a vector and \(\underline{X}^{n}=[\underline{X}_{1},\cdots,\underline{X}_{n}]\) denotes a sequence of vectors. Let \([x]^{+}=\max(0,x)\), and let \(y^{i-1}1\) denote the realization of the sequence \(Y^{i}\) where \(Y^{i-1}=y^{i-1}\) and \(Y_{i}=1\). Let \(\beta_{k}^{i}\) denote the binary sequence of length \(i\) whose first 1 appears at index \(k\) (i.e., \(\beta_{2}^{i}=\{01\underbrace{\star\cdots\star}_{i-2}\}\) where \(\star\) can be either 0 or 1). \(\mathbb{1}\{\cdot\}\) denotes an indicator function. \(|\mathcal{A}|\) represents the cardinality of a set \(\mathcal{A}\). ## II System Model We consider a BBP channel model with iBM [1] and block length \(L\). The total transmission time is \(n\) channel uses, where \(n=\ell L\) and \(\ell\) is the number of blocks. Note that when \(L=1\), the channel becomes a memoryless channel with independent states. The channel state \(\underline{S}\in\{0,1\}^{M}\) is an \(M\)-dimensional "one-hot" binary vector where the single "1" appearing at index \(m\) indicates the (unknown) target receiver AoD, among \(M\) possible quantized angles. 
This index \(m\) is a random variable uniformly distributed over \([M]\) and is referred to as the _transmission direction_, i.e., the quantized AoD of the UE with respect to the BS array. The state remains constant for blocks of \(L\) channel uses and the transmitter (BS) receives binary causal noiseless feedback. This channel state information is assumed to be perfectly known at the receiver (CSIR) but unknown at the transmitter. The transmitter decides on the transmission direction estimate \(\hat{\underline{S}}\) (i.e., a one-hot vector) based on the channel input and feedback at the end of each block. Furthermore, \(\underline{S}\in\mathcal{S}\) is i) independent of the channel input, ii) remains constant for an interval of \(L\) channel uses, and iii) i.i.d. according to \(P_{\underline{S}}\) across the blocks. The channel input \(\underline{X}_{i,j}\in\mathcal{X}:=\{0,1\}^{M}\) is also an \(M\)-dimensional binary vector with a peak Hamming weight cost constraint, modeling the fact that sending in multiple directions costs transmit power. The channel output \(Y_{i,j}\in\mathcal{Y}:=\{0,1\}\) at channel use \(j\) of block \(i\), \[Y_{i,j}=\underline{S}_{i}^{T}\underline{X}_{i,j}, \tag{1}\] is binary, given by the inner product of the state and input vectors. The causal feedback is noiseless, i.e., it coincides with the output from the previous channel use. Notice that \(Y_{i,j}=1\) if the single "1" in \(\underline{S}_{i}\) coincides with a "1" in \(\underline{X}_{i,j}\) and zero otherwise. The joint probability distribution of the considered model is \[P_{W\underline{X}^{n}\underline{S}^{\ell}Y^{n}}(w,\underline{x}^{n},\underline{s}^{\ell},y^{n})=P_{W}(w)\times\prod_{i=1}^{\ell}\left(P_{\underline{S}}(\underline{s}_{i})\prod_{j=1}^{L}P_{Y|\underline{X}\,\underline{S}}(y_{i,j}|\underline{x}_{i,j},\underline{s}_{i})P(\underline{x}_{i,j}|w,y_{i}^{j-1})\right) \tag{2}\] where we denote \(y_{i}^{j-1}=[y_{i,1},\cdots,y_{i,j-1}]\). **Definition 1**.: The estimate of the state sequence \(\underline{S}^{\ell}\) in the presence of the input \(\underline{X}^{n}\) and feedback \(Y^{n}\) is defined as \[\hat{\underline{S}}^{\ell}\triangleq q(\underline{X}^{n},Y^{n}), \tag{3}\] where \(q:\mathcal{X}^{n}\times\mathcal{Y}^{n}\rightarrow\hat{\mathcal{S}}^{\ell}\) is a state estimation function and \(\hat{\mathcal{S}}\) is the reproduction alphabet. The average per-block distortion is defined as \[\Delta^{(\ell)}\triangleq\frac{1}{\ell}\sum_{i=1}^{\ell}\mathbb{E}[d(\underline{S}_{i},\hat{\underline{S}}_{i})], \tag{4}\] where \(\hat{\underline{S}}_{i}\) is the \(i\)-th component of \(\hat{\underline{S}}^{\ell}\) in (3), and \(d:\mathcal{S}\times\hat{\mathcal{S}}\rightarrow\mathbb{R}_{+}\) is a state estimation error measure with \(\max_{(\underline{s},\underline{\hat{s}})\in\mathcal{S}\times\hat{\mathcal{S}}}d(\underline{s},\underline{\hat{s}})<\infty\). 
**Lemma 1**.: _Define the function_ \[\hat{\underline{s}}^{*}(\underline{x}_{i}^{L},y_{i}^{L})\triangleq\arg\min_{\underline{s}_{i}^{\prime}\in\hat{\mathcal{S}}}\sum_{\underline{s}_{i}\in\mathcal{S}}P_{\underline{S}_{i}|\underline{X}_{i}^{L}Y_{i}^{L}}(\underline{s}_{i}|\underline{x}_{i}^{L},y_{i}^{L})d(\underline{s}_{i},\underline{s}_{i}^{\prime})\] _where_ \[P_{\underline{S}_{i}|\underline{X}_{i}^{L}Y_{i}^{L}}(\underline{s}_{i}|\underline{x}_{i}^{L},y_{i}^{L})=\frac{P_{\underline{S}_{i}}(\underline{s}_{i})P_{Y_{i}^{L}|\underline{S}_{i},\underline{X}_{i}^{L}}(y_{i}^{L}|\underline{s}_{i},\underline{x}_{i}^{L})}{\sum_{\underline{s}_{i}\in\mathcal{S}}P_{\underline{S}_{i}}(\underline{s}_{i})P_{Y_{i}^{L}|\underline{S}_{i},\underline{X}_{i}^{L}}(y_{i}^{L}|\underline{s}_{i},\underline{x}_{i}^{L})}\] _and \(P_{Y_{i}^{L}|\underline{S}_{i},\underline{X}_{i}^{L}}(y_{i}^{L}|\underline{s}_{i},\underline{x}_{i}^{L})=\prod_{j=1}^{L}P_{Y|\underline{S},\underline{X}}(y_{i,j}|\underline{s}_{i},\underline{x}_{i,j})\). Irrespective of the choice of encoding and decoding functions, the distortion \(\Delta^{(\ell)}\) in (4) is minimized by the estimator_ \[q^{*}(\underline{x}^{n},y^{n})=(\hat{\underline{s}}^{*}(\underline{x}_{1}^{L},y_{1}^{L}),\hat{\underline{s}}^{*}(\underline{x}_{2}^{L},y_{2}^{L}),\cdots,\hat{\underline{s}}^{*}(\underline{x}_{\ell}^{L},y_{\ell}^{L}))\] _where \(\hat{\underline{s}}^{*}(\underline{x}_{i}^{L},y_{i}^{L})\) is the state estimate of the \(i\)-th block, \(i\in[\ell]\)._ Proof:: By (3), we have \[\mathbb{E}[d(\underline{S}_{i},\hat{\underline{S}}_{i})]=\mathbb{E}_{\underline{X}^{n},Y^{n}}\Big{[}\mathbb{E}[d(\underline{S}_{i},\hat{\underline{S}}_{i})|\underline{X}^{n},Y^{n}]\Big{]}=\sum_{\underline{x}^{n},y^{n}}P_{\underline{X}^{n},Y^{n}}(\underline{x}^{n},y^{n})\sum_{\hat{\underline{s}}_{i}\in\hat{\mathcal{S}}}P_{\hat{\underline{S}}_{i}|\underline{X}^{n},Y^{n}}(\hat{\underline{s}}_{i}|\underline{x}^{n},y^{n})\times\sum_{\underline{s}_{i}\in\mathcal{S}}P_{\underline{S}_{i}|\underline{X}_{i}^{L}Y_{i}^{L}}(\underline{s}_{i}|\underline{x}_{i}^{L},y_{i}^{L})d(\underline{s}_{i},\hat{\underline{s}}_{i}) \tag{5a}\] \[\geq\sum_{\underline{x}^{n},y^{n}}P_{\underline{X}^{n},Y^{n}}(\underline{x}^{n},y^{n})\min_{\hat{\underline{s}}_{i}\in\hat{\mathcal{S}}}\sum_{\underline{s}_{i}\in\mathcal{S}}P_{\underline{S}_{i}|\underline{X}_{i}^{L}Y_{i}^{L}}(\underline{s}_{i}|\underline{x}_{i}^{L},y_{i}^{L})d(\underline{s}_{i},\hat{\underline{s}}_{i})=\mathbb{E}[d(\underline{S}_{i},\hat{\underline{s}}^{*}(\underline{X}_{i}^{L},Y_{i}^{L}))], \tag{5b}\] where (5a) holds by the Markov chain \[(\underline{X}_{1}^{L},\cdots,\underline{X}_{i-1}^{L},Y_{1}^{L},\cdots,Y_{i-1}^{L},\hat{\underline{S}}_{i})-(\underline{X}_{i}^{L},Y_{i}^{L})-\underline{S}_{i}.\] Summing over all \(i=1,\cdots,\ell\), we have \[\Delta^{(\ell)}=\frac{1}{\ell}\sum_{i=1}^{\ell}\mathbb{E}[d(\underline{S}_{i},\hat{\underline{S}}_{i})]\geq\frac{1}{\ell}\sum_{i=1}^{\ell}\mathbb{E}[d(\underline{S}_{i},\hat{\underline{s}}^{*}(\underline{X}_{i}^{L},Y_{i}^{L}))],\] which leads to the desired conclusion. Lemma 1 allows us to define the conditional estimation cost \[c(\underline{x}_{i}^{L})\triangleq\mathbb{E}[d(\underline{S}_{i},\hat{\underline{s}}^{*}(\underline{X}_{i}^{L},Y_{i}^{L}))|\underline{X}_{i}^{L}=\underline{x}_{i}^{L}],\] such that, for any encoding function, \[\Delta^{(\ell)}=\frac{1}{\ell}\sum_{i=1}^{\ell}\mathbb{E}[c(\underline{X}_{i}^{L})]. \tag{6}\]
**Definition 2**.: Define the minimum distortion \(D(B_{\text{peak}})\) under the peak input cost constraint \(B_{\text{peak}}\) as \[\min_{P_{\underline{X}^{L}}}\frac{1}{\ell}\sum_{i=1}^{\ell}\sum_{\underline{x}^{L}}P_{\underline{X}^{L}}(\underline{x}^{L})c(\underline{x}^{L}), \tag{7}\] where \(P_{\underline{X}^{L}}(\underline{x}^{L})\) satisfies the peak cost constraint, i.e., \(b(\underline{X}_{i,j})\leq B_{\text{peak}},\forall i\in[\ell],j\in[L]\), where \(b(\cdot):\mathcal{X}\rightarrow\mathbb{R}_{+}\) is an input cost function. Since the channel state \(\underline{S}\) is i.i.d. over the blocks, without loss of generality, we consider only the first block and ignore the block index \(i\). The same derivation/strategy can be applied to the other blocks identically. We consider \(b(\cdot)\) to be the Hamming weight function (number of ones). This is physically motivated by the fact that, assuming constant transmission power per direction, the total transmission power is proportional to the number of directions in which \(\underline{X}_{i,j}\) sends a "1". The estimation distortion function \(d(\underline{s},\underline{\hat{s}})\) is characterized by the Hamming distance, that is, \[d(\underline{s},\underline{\hat{s}})=\begin{cases}0,&\underline{s}=\underline{\hat{s}}\\ 2,&\underline{s},\underline{\hat{s}}\in\mathcal{S}\text{ and }\underline{s}\neq\underline{\hat{s}}\end{cases}, \tag{8}\] since \(\underline{s}\) and \(\underline{\hat{s}}\) are both one-hot vectors. ## III Main Results In this section, we first derive the minimum distortion of the BBP channel model under a peak cost constraint \(B_{\text{peak}}\) with no communication constraint, i.e., \(D(B_{\text{peak}})\) in (7). Then, we provide a sensing strategy that achieves the minimum distortion. Notice that this strategy can simultaneously achieve the capacity of this BBP channel by our previous result [2]. ### _Minimum Distortion_ Let \(\mathcal{B}_{y^{j}}(\underline{x}^{j})\) denote the set of beam indices containing the transmission direction at channel use \(j\) when the channel input is \(\underline{X}^{j}=\underline{x}^{j}\) and the feedback is \(Y^{j}=y^{j}\), for all possible transmission strategies. Then, we can simplify the distortion in (6) as follows. We initialize \(\mathcal{B}_{y^{0}}(\underline{x}^{0})=\left[M\right]\). The state estimation decision is made based on \(\mathcal{B}_{y^{L}}(\underline{x}^{L})\). 
For this BBP channel with iBM and noiseless feedback, we have \[P_{\hat{\underline{S}}|\underline{X}^{L}Y^{L}}(\underline{s}|\underline{x}^{L},y^{L})=P_{\underline{S}|\underline{X}^{L}Y^{L}}(\underline{s}|\underline{x}^{L},y^{L})=\frac{P_{\underline{S},\underline{X}^{L},Y^{L}}(\underline{s},\underline{x}^{L},y^{L})}{P_{\underline{X}^{L},Y^{L}}(\underline{x}^{L},y^{L})}.\] The joint distribution for \(L\) channel uses is \[P_{\underline{X}^{L},Y^{L}}(\underline{x}^{L},y^{L})=\sum_{\underline{s}}P_{\underline{X}^{L},Y^{L},\underline{S}}(\underline{x}^{L},y^{L},\underline{s})=\sum_{\underline{s}}P_{\underline{S}}(\underline{s})\prod_{j=1}^{L}\mathbb{1}_{\{\underline{s}^{T}\underline{x}_{j}=y_{j}\}}P_{\underline{X}_{j}|\underline{X}^{j-1},Y^{j-1}}(\underline{x}_{j}|\underline{x}^{j-1},y^{j-1})=\frac{|\mathcal{B}_{y^{L}}(\underline{x}^{L})|}{M}\prod_{j=1}^{L}P_{\underline{X}_{j}|\underline{X}^{j-1},Y^{j-1}}(\underline{x}_{j}|\underline{x}^{j-1},y^{j-1}), \tag{9a}\] \[=\frac{|\mathcal{B}_{y^{L}}(\underline{x}^{L})|}{M}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1}), \tag{9b}\] where (9a) holds since \(\prod_{j=1}^{L}\mathbb{1}_{\{\underline{s}^{T}\underline{x}_{j}=y_{j}\}}=1\) only for the beam indices belonging to \(\mathcal{B}_{y^{L}}(\underline{x}^{L})\), and (9b) holds since we define \[P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1})\triangleq\prod_{j=1}^{L}P_{\underline{X}_{j}|\underline{X}^{j-1},Y^{j-1}}(\underline{x}_{j}|\underline{x}^{j-1},y^{j-1}). \tag{10}\] Therefore, \[P_{\hat{\underline{S}}|\underline{X}^{L}Y^{L}}(\underline{s}|\underline{x}^{L},y^{L})=P_{\underline{S}|\underline{X}^{L}Y^{L}}(\underline{s}|\underline{x}^{L},y^{L})=\begin{cases}\frac{1}{|\mathcal{B}_{y^{L}}(\underline{x}^{L})|},&\forall\underline{s}\in\mathcal{B}_{y^{L}}(\underline{x}^{L})\\ 0,&\text{otherwise}\end{cases}, \tag{11}\] where \(|\mathcal{B}_{y^{L}}(\underline{x}^{L})|\geq 1\), i.e., it is uniform over the restricted set \(\mathcal{B}_{y^{L}}(\underline{x}^{L})\) and zero elsewhere. The distortion can be simplified as \[\mathbb{E}[d(\underline{S},\hat{\underline{S}})]=\mathbb{E}_{\underline{X}^{L},Y^{L}}\left[\mathbb{E}[d(\underline{S},\hat{\underline{S}})|\underline{X}^{L},Y^{L}]\right]=\sum_{\underline{x}^{L},y^{L}}P_{\underline{X}^{L}Y^{L}}(\underline{x}^{L},y^{L})\sum_{\hat{\underline{s}}\in\mathcal{S}}P_{\hat{\underline{S}}|\underline{X}^{L}Y^{L}}(\hat{\underline{s}}|\underline{x}^{L},y^{L})\times\sum_{\underline{s}\in\mathcal{S}}P_{\underline{S}|\underline{X}^{L}Y^{L}}(\underline{s}|\underline{x}^{L},y^{L})d(\underline{s},\hat{\underline{s}})=\sum_{\underline{x}^{L},y^{L}}P_{\underline{X}^{L}Y^{L}}(\underline{x}^{L},y^{L})\frac{2[|\mathcal{B}_{y^{L}}(\underline{x}^{L})|-1]^{+}}{|\mathcal{B}_{y^{L}}(\underline{x}^{L})|} \tag{12a}\] \[=\sum_{\underline{x}^{L}}\sum_{y^{L}}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1})\frac{|\mathcal{B}_{y^{L}}(\underline{x}^{L})|}{M}\times\frac{2[|\mathcal{B}_{y^{L}}(\underline{x}^{L})|-1]^{+}}{|\mathcal{B}_{y^{L}}(\underline{x}^{L})|}=\sum_{\underline{x}^{L}}\sum_{y^{L}}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1})\frac{2[|\mathcal{B}_{y^{L}}(\underline{x}^{L})|-1]^{+}}{M} \tag{12b}\] where (12a) follows from (8) and (11), and (12b) follows from (10). Sending back a \(Y_{k}=1\) indicates that the transmission direction is detected within the small set of ones in \(\underline{X}_{k}\). 
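Equation (11) is easy to illustrate by brute force. The following short Python sketch is our own; \(M\), \(L\) and the (open-loop) inputs are arbitrary choices. It enumerates the \(M\) equally likely one-hot states for a fixed input sequence and groups them by the feedback they produce; the posterior is then uniform on each group \(\mathcal{B}_{y^{L}}(\underline{x}^{L})\).

```python
import numpy as np

M, L = 4, 2
x_seq = [np.array([1, 1, 0, 0]),    # probe beams {1, 2} at channel use 1
         np.array([1, 0, 0, 0])]    # probe beam {1} at channel use 2

beams = {}                          # feedback y^L -> consistent directions
for m in range(M):
    s = np.zeros(M, dtype=int)
    s[m] = 1                        # one-hot state, direction m + 1
    y = tuple(int(s @ xj) for xj in x_seq)   # noiseless outputs, eq. (1)
    beams.setdefault(y, []).append(m + 1)

for y, B in sorted(beams.items()):
    print(f"y^L = {y}: B = {B}, posterior = 1/{len(B)} on each element")
# y^L = (0, 0): B = [3, 4]  -> posterior 1/2 each
# y^L = (1, 0): B = [2]     -> posterior 1
# y^L = (1, 1): B = [1]     -> posterior 1
```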
Recall that \(\beta_{k}^{L}\) denotes the set containing all possible \(L\)-length binary sequences whose first non-zero element appears at index \(k\). Let \[c_{k}\triangleq M\sum_{y^{L}\in\beta_{k}^{L}}P_{Y^{L}}(y^{L}), \tag{13a}\] which is independent of the transmission strategy. Then, \[P_{Y^{L}}(0^{L})=1-\sum_{k=1}^{L}\sum_{y^{L}\in\beta_{k}^{L}}P_{Y^{L}}(y^{L})=1-\sum_{k=1}^{L}\frac{c_{k}}{M}. \tag{13b}\] Further, by (13a), we have \[c_{k}\leq MP_{Y_{k}}(y_{k}=1)\leq M\frac{B_{\text{peak}}}{M}=B_{\text{peak}}, \tag{13c}\] where (13c) holds by the peak input cost constraint. Following this notation, we next provide the minimum distortion under the peak cost constraint.

**Theorem 1**.: _The minimum distortion \(D(B_{\text{peak}})\) defined in (7) of the BBP model with iBM under peak cost constraint \(B_{\text{peak}}\) is_ \[D(B_{\text{peak}})=\sum_{j=1}^{L}\frac{2[c_{j}-2^{L-j}]^{+}}{M}+\frac{2[M-\sum_{j=1}^{L}c_{j}-1]^{+}}{M}, \tag{14}\] _where \(c_{1}=\min(\frac{M}{2},B_{\text{peak}})\), and_ \[c_{j}=\min\Big(\frac{M-\sum_{k=1}^{j-1}c_{k}}{2},B_{\text{peak}}\Big),\quad 1<j\leq L. \tag{15}\]

Proof.: We first prove the lower bound in (14). By (13a), we have \[c_{j} =M\sum_{\underline{x}^{L}}\sum_{y^{L}\in\beta_{j}^{L}}P_{\underline{X}^{L},Y^{L}}(\underline{x}^{L},y^{L})\] \[=M\sum_{\underline{x}^{L}}\sum_{y^{L}\in\beta_{j}^{L}}\frac{|\mathcal{B}_{y^{L}}(\underline{x}^{L})|}{M}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1}), \tag{16}\] where (16) follows from (9b). Similarly, by (13b), we have \[M-\sum_{j=1}^{L}c_{j} =M\sum_{\underline{x}^{L}}P_{\underline{X}^{L},Y^{L}}(\underline{x}^{L},0^{L})\] \[=M\sum_{\underline{x}^{L}}\frac{|\mathcal{B}_{0^{L}}(\underline{x}^{L})|}{M}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||0^{L-1}).\] Continuing with (12b), the distortion at the end of each block is at least \[D =\sum_{j=1}^{L}\sum_{y^{L}\in\beta_{j}^{L}}\sum_{\underline{x}^{L}}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1})\frac{2[|\mathcal{B}_{y^{L}}(\underline{x}^{L})|-1]^{+}}{M}+\sum_{\underline{x}^{L}}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||0^{L-1})\frac{2[|\mathcal{B}_{0^{L}}(\underline{x}^{L})|-1]^{+}}{M}\] \[\geq\sum_{j=1}^{L}\frac{2\big[\sum_{y^{L}\in\beta_{j}^{L}}\sum_{\underline{x}^{L}}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1})|\mathcal{B}_{y^{L}}(\underline{x}^{L})|-2^{L-j}\big]^{+}}{M}+\frac{2\big[\sum_{\underline{x}^{L}}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||0^{L-1})|\mathcal{B}_{0^{L}}(\underline{x}^{L})|-1\big]^{+}}{M} \tag{17a}\] \[=\sum_{j=1}^{L}\frac{2[c_{j}-2^{L-j}]^{+}}{M}+\frac{2[M-\sum_{j=1}^{L}c_{j}-1]^{+}}{M}, \tag{17b}\] where (17a) holds by the convexity of \([x-1]^{+}\) (applied over the \(2^{L-j}\) sequences \(y^{L}\in\beta_{j}^{L}\)), and (17b) holds by (16). Hence, the minimum distortion is given by (14).

Next, we show that the minimum in (14) can be obtained by choosing \(c_{j},j\in[L]\), iteratively as given in (15). Ideally, the minimum of (17b) is achieved when \(M-\sum_{j=1}^{L}c_{j}-1\geq 0\) and \(c_{j}-2^{L-j}\geq 0,\forall j\in[L]\). Hence, we have \(M-1\geq\sum_{j=1}^{L}2^{L-j},\) which gives \(L\leq\log M\). To have \(c_{j}\geq 2^{L-j}\) and \(M-\sum_{j=1}^{L}c_{j}-1\geq 0\) hold simultaneously, we can choose \(c_{1}=\frac{M}{2}\) and \(c_{j}=\frac{M-\sum_{k=1}^{j-1}c_{k}}{2}\). Meanwhile, by (13c), \(c_{j}\leq B_{\text{peak}}\) for all \(j\in[L]\). Therefore, we can choose (15) to achieve the minimum of (14).
Similarly, one can verify that the minimum distortion \(D(B_{\text{peak}})\) is achieved by choosing (15) when \(L>\log M\).

**Remark 1**.: The optimal choice of \(c_{j}\) to minimize (14) is not unique and depends on the values of \(L,M\), and \(B_{\text{peak}}\). For example, when \(L=1,B_{\text{peak}}>1\), and \(M>1\), any \(1\leq c_{1}\leq M-1\) achieves the minimum distortion. Herein, we choose \(c_{j}\) as in (15) since it also achieves the channel capacity, as proved in [2].

### _Estimation Strategy_

In order to minimize the distortion, it is critical to reduce the size of the set \(\mathcal{B}_{y^{L}}(\underline{x}^{L})\). Let \(\mathcal{B}_{i}^{e}\) denote the set of beam indices to be explored at channel use \(i\) and \(\mathcal{B}_{i}^{e,c}\) denote the complement of \(\mathcal{B}_{i}^{e}\) (i.e., \(\mathcal{B}_{i}^{e}\cup\mathcal{B}_{i}^{e,c}=[M]\)). Initially, \(\mathcal{B}_{y^{0}}=[M]\) and \(\mathcal{B}_{0}^{e}=\emptyset\). Based on the strictly causal noiseless feedback, \(|\mathcal{B}_{y^{i}}(\underline{x}^{i})|\) can be updated as \[|\mathcal{B}_{y^{i+1}}(\underline{x}^{i+1})| =y_{i+1}|\mathcal{B}_{y^{i}}(\underline{x}^{i})\cap\mathcal{B}_{i}^{e}|+(1-y_{i+1})|\mathcal{B}_{y^{i}}(\underline{x}^{i})\cap\mathcal{B}_{i}^{e,c}| \tag{18}\] \[\geq y_{i+1}|\mathcal{B}_{i}^{e}|+(1-y_{i+1})(|\mathcal{B}_{y^{i}}(\underline{x}^{i})|-|\mathcal{B}_{i}^{e}|), \tag{19}\] where (18) indicates that the size of the set of possible transmission directions is non-increasing (i.e., \(|\mathcal{B}_{y^{i+1}}(\underline{x}^{i+1})|\leq|\mathcal{B}_{y^{i}}(\underline{x}^{i})|\)), and equality in (19) holds when \(\mathcal{B}_{i}^{e}\subseteq\mathcal{B}_{y^{i}}(\underline{x}^{i})\), that is, when the transmitter selects beam indices from the set \(\mathcal{B}_{y^{i}}(\underline{x}^{i})\) recursively.

Following the ideas illustrated above, we next show that the minimum distortion in Theorem 1 can be obtained by applying the transmission strategy in [2, Algorithm 1]. Specifically, we initialize a sequence \(\{c_{1},\cdots,c_{L}\}\) iteratively solved by (15). At the beginning of channel use \(i\), we update \(\mathcal{B}_{y^{i}}\) and choose some number of beam indices randomly and uniformly from \(\mathcal{B}_{y^{i}}\) based on the causal feedback \(Y_{i-1}\). Additionally, we use \(k,k\in[L]\), to record the number of channel uses until the transmitter selects the "right" directions (i.e., \(Y_{k}=1\)). Before that, the transmitter randomly and uniformly chooses \(c_{i},i\leq k\), beam indices from \(\mathcal{B}_{y^{i-1}}\). After that, the transmitter randomly and uniformly chooses \(\frac{c_{k}}{2^{i-k}},i>k\), beam indices from \(\mathcal{B}_{y^{i-1}}\). These selected beam indices are stored in the set \(\mathcal{B}^{e}_{i}\). Recall that \(\beta^{L}_{k}\) denotes the set of binary sequences of length \(L\) whose first non-zero element appears at index \(k\).
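Before analyzing this strategy formally, we give a numerical illustration. The sketch below (our own code, not the authors' implementation of Algorithm 1) evaluates the recursion (15) and the closed form (14), and runs a Monte-Carlo simulation of the bisection-style strategy just described, assuming the resulting \(c_{j}\) are integers and that probe sets are chosen uniformly:

```python
import random

def cost_schedule(M, L, B_peak):
    """Solve (15) iteratively: c_1 = min(M/2, B_peak), then halve what remains."""
    c = []
    for _ in range(L):
        c.append(min((M - sum(c)) / 2, B_peak))
    return c

def min_distortion(M, L, B_peak):
    """Evaluate the closed form (14) for the schedule (15)."""
    c = cost_schedule(M, L, B_peak)
    D = sum(2 * max(cj - 2 ** (L - j - 1), 0) for j, cj in enumerate(c))
    return (D + 2 * max(M - sum(c) - 1, 0)) / M

def simulate(M, L, B_peak, trials=10_000):
    """Monte-Carlo check of the adaptive strategy (assumes integer c_j)."""
    c = [int(cj) for cj in cost_schedule(M, L, B_peak)]
    total = 0.0
    for _ in range(trials):
        s = random.randrange(M)          # hidden transmission direction
        candidates = list(range(M))      # current candidate set B_{y^i}(x^i)
        hit, probe_size = False, 0
        for i in range(L):
            probe_size = c[i] if not hit else max(probe_size // 2, 1)
            # taking a prefix is equivalent to a uniform subset, since s is uniform
            probe = set(candidates[:probe_size])
            y = 1 if s in probe else 0   # noiseless feedback
            candidates = [b for b in candidates if (b in probe) == (y == 1)]
            hit = hit or y == 1
        m = len(candidates)              # estimate from posterior: E[d] = 2(m-1)/m
        total += 2 * (m - 1) / m
    return total / trials

print(min_distortion(16, 2, 8), simulate(16, 2, 8))  # both give 1.5
```

For \(M=16\), \(L=2\), and \(B_{\text{peak}}=8\), both the closed form and the simulation give a distortion of 1.5; with \(L=4=\log M\), perfect bisection drives the distortion to zero.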
Based on this transmission strategy, the probabilities of the output sequences \(y^{L}\) conditioned on different channel states are identical, i.e., \(P_{Y^{L}|\underline{S}}(y^{L}|\underline{s})=P_{Y^{L}|\underline{S}}(y^{L}|\underline{s}^{\prime}),\underline{s}\neq\underline{s}^{\prime}\), and one can easily check that \[\sum_{\underline{x}^{L}}\sum_{y^{L}\in\beta^{L}_{k}}P_{\underline{X}^{L},Y^{L}}(\underline{x}^{L},y^{L})=\frac{c_{k}}{M},\ |\mathcal{B}_{\beta^{L}_{k}}(\underline{x}^{L})|=\frac{c_{k}}{2^{L-k}}, \tag{20a}\] \[\sum_{\underline{x}^{L}}P_{\underline{X}^{L},Y^{L}}(\underline{x}^{L},0^{L})=1-\frac{\sum_{k=1}^{L}c_{k}}{M},\ |\mathcal{B}_{0^{L}}(\underline{x}^{L})|=M-\sum_{k=1}^{L}c_{k}, \tag{20b}\] where \(|\mathcal{B}_{0^{L}}(\underline{x}^{L})|\geq 0\) by (15) and \(\mathcal{B}_{\beta^{L}_{k}}(\underline{x}^{L})\) denotes the set containing the possible transmission directions for any channel input sequence \(\underline{x}^{L}\) leading to \(y^{L}\in\beta^{L}_{k}\). From (12b), the distortion achieved by this strategy is \[D(B_{\text{peak}}) =\sum_{\underline{x}^{L}}\left(\sum_{k=1}^{L}\sum_{y^{L}\in\beta^{L}_{k}}P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||y^{L-1})\frac{2[|\mathcal{B}_{\beta^{L}_{k}}(\underline{x}^{L})|-1]^{+}}{M}\right. \tag{21a}\] \[\qquad\left.+P_{\underline{X}^{L}||Y^{L-1}}(\underline{x}^{L}||0^{L-1})\frac{2[|\mathcal{B}_{0^{L}}(\underline{x}^{L})|-1]^{+}}{M}\right) \tag{21b}\] \[=\sum_{k=1}^{L}2^{L-k}\frac{2[\frac{c_{k}}{2^{L-k}}-1]^{+}}{M}+\frac{2[M-\sum_{k=1}^{L}c_{k}-1]^{+}}{M} \tag{21c}\] \[=\sum_{k=1}^{L}\frac{2[c_{k}-2^{L-k}]^{+}}{M}+\frac{2[M-\sum_{k=1}^{L}c_{k}-1]^{+}}{M}. \tag{21d}\] We partition the sequences \(y^{L}\) into \(y^{L}\in\beta^{L}_{k}\) and \(y^{L}=0^{L}\) in (21a) and (21b). (21c) follows from (20a) and from the fact that there are \(2^{i-k}\) possible \(y^{i}\) sequences in \(\beta^{i}_{k}\) sharing the same probability \(P_{\underline{X}^{i},Y^{i}}(\underline{x}^{i},y^{i}\in\beta^{i}_{k})\) for \(k\leq i\) according to Algorithm 1. Finally, (21d) coincides with the lower bound in (14). Therefore, Algorithm 1 in [2] achieves the minimum distortion in (14).

## IV Conclusion

In this work, we studied a binary beam-pointing channel with in-block memory and feedback that captures the main feature of the beam alignment problem in mmWave communications and yet is sufficiently simple to be tractable from an information-theoretic viewpoint. We derived the minimum distortion of this simplified channel model under the peak cost constraint and showed that the capacity-achieving transmission strategy in [2] attains the minimum distortion simultaneously. In conclusion, we have characterized the full capacity-distortion region of this binary beam-pointing channel under the peak cost constraint. This surprising result reveals that channel estimation and data communication can be jointly optimal, which enables the efficient utilization of the available resources in time, frequency, antennas, and transmission power.
2309.03398
Automated Discovery of Wurtzite Solid Solutions with Enhanced Piezoelectric Response
While many piezoelectric materials are known, there is still great potential to improve on the figures of merit of existing materials through compositional doping, forming solid solutions. Specifically, it has been shown that doping and alloying wurtzite-structured materials can improve the piezoelectric response; however, a vast compositional space has remained unexplored. In this work, we apply a multi-level screening protocol combining machine learning, chemical intuition, and thermodynamics to systematically discover dopant combinations in wurtzite material space that improve the desired piezoelectric response. Through our protocol, we use computationally inexpensive screening calculations to consider more than 3000 possible ternary wurtzite solid solutions from 9 different wurtzite base systems: AlN, BeO, CdS, CdSe, GaN, ZnO, ZnS, ZnSe, and AgI. Finally, based on thermodynamic analysis and explicit piezoelectric response calculations, we predict 11 materials with improved piezoelectric response, due to the incorporation of electropositive dopants.
Drew Behrendt, Sayan Banerjee, Jiahao Zhang, Andrew M. Rappe
2023-09-06T23:08:54Z
http://arxiv.org/abs/2309.03398v1
# Automated Discovery of Wurtzite Solid Solutions with Enhanced Piezoelectric Response

###### Abstract

While many piezoelectric materials are known, there is still great potential to improve on the figures of merit of existing materials through compositional doping, forming solid solutions. Specifically, it has been shown that doping and alloying wurtzite-structured materials can improve the piezoelectric response; however, a vast compositional space has remained unexplored. In this work, we apply a multi-level screening protocol combining machine learning, chemical intuition, and thermodynamics to systematically discover dopant combinations in wurtzite material space that improve the desired piezoelectric response. Through our protocol, we use computationally inexpensive screening calculations to consider more than 3000 possible ternary wurtzite solid solutions from 9 different wurtzite base systems: AlN, BeO, CdS, CdSe, GaN, ZnO, ZnS, ZnSe, and AgI. Finally, based on thermodynamic analysis and explicit piezoelectric response calculations, we predict 11 materials with improved piezoelectric response, due to the incorporation of electropositive dopants.

## 1 Main Text:

Piezoelectrics are non-centrosymmetric materials that are capable of interconverting mechanical and electrical energy for a variety of applications [1]. Piezoelectrics provide the basis for microelectronic energy harvesting, acoustic wave devices, actuators, and other devices that are widely used in research, industry, and military applications [2, 3, 4]. The energy efficiency and power output of these materials increase with increasing relative piezoelectric response, but decrease with increasing dielectric constant. Perovskite ferroelectrics are prototypical piezoelectrics; however, while perovskites have a very high piezoelectric response, they also possess high dielectric constants [4, 5]. Furthermore, these materials often lose their beneficial properties at high temperatures [5]. An alternate material class is wurtzite; though wurtzites have smaller piezoelectric constants than their perovskite counterparts, they are known for their low dielectric constants and high-temperature performance [6, 7, 8]. Furthermore, these materials are highly compatible with complementary metal-oxide-semiconductor (CMOS) technology, which makes them highly attractive for use in piezoelectric applications [2, 3]. Doping has long been used to improve the electronic properties of host materials; for example, perovskite lead magnesium niobate (PMN) is alloyed with lead titanate (PT) to enhance the temperature stability and magnitude of piezoelectricity. Adding elements into common wurtzite materials, such as doping aluminum nitride with scandium or zinc oxide with magnesium, has been shown to appreciably increase the piezoelectric response and even induce ferroelectricity, which is of interest for many additional applications [7, 9, 10, 11, 12, 13]. With the recent growth of machine learning (ML) applications in materials science, many ML-based methods have been proposed to speed up materials discovery [14, 15, 16, 17, 18, 19, 20, 21, 22]. Feature selection, where one finds which input features are most correlated with a target output, is a primary challenge of current ML applications in big data. Choosing features is at the heart of this challenge, since it is inherently subjective, and the only way to find meaningful correlations is to have representative and meaningful features.
Once chosen and selected, however, features can be particularly useful in rational materials design because of their role in providing an interpretative picture of the underlying physics. Feature selection approaches have been shown to be successful for fields spanning from materials discovery [18, 23] to chemical reaction development using homogeneous [24, 25, 26, 27] and heterogeneous [28, 29, 30, 31, 22] catalysts. Many recent works have identified a "material gene" to perform advanced analysis of material structure-property relations, where the material gene corresponds to the most important feature(s) that correlate with and therefore influence a target property [32, 33, 34, 35, 36, 37, 38, 39]. The features can be broadly classified into two different types: primitive and calculated. Primitive features, or primitives, are purely based on summary statistics of atomic data such as mass, location on the periodic table, electronegativity, and charges [37]. Calculated features, referred to as proxies, involve calculations such as _ab initio_ density functional theory calculations, and these include ionic charges, bond lengths, and bond angles. It is worth noting that, to be most useful, calculated features of a material should be simpler and less time-consuming to calculate than the target property of interest. For example, calculation of the lattice parameters requires much less computational time than piezoelectric tensor calculations. Our multi-level screening protocol is based on the combination of these two types of featurization. In the current work, we have investigated nine different base wurtzite materials and their solid solutions with every relevant metal and metalloid element, over 3000 possible candidate materials. We narrow down this space to 30 solid solutions which are predicted to improve the piezoelectric response. Finally, we perform thermodynamic analysis to predict the 11 best candidates for future experimental verification. An overview of this workflow is shown in Fig. 1. The first level of screening is designed to find one screening proxy to use in place of expensive piezoelectric calculations. The second level of the screening serves to reduce the space of all examined ternary solid solutions to just a few suggested high-value materials using automated machine learning candidate selection and the cost of the suggested dopants. We consider a 2 \(\times\) 2 \(\times\) 1 wurtzite supercell consisting of 8 metal and 8 non-metal sites. For doped materials, 2 of the 8 cation sites have been replaced. The relative positions of the dopants within the unit cell have been shown to affect the piezoelectric response [7], so they are kept constant throughout this entire study to compare elemental doping effects on an equal footing. The calculations for the \(e_{33}\) component of the piezoelectric tensor are performed using the Quantum Espresso [40] software package with optimized, norm-conserving pseudopotentials generated by OPIUM [41, 42]. To do this, the system is strained along the \(c\) axis for five different values (\(-1.0\) to \(+1.0\) percent strain), and then atomic positions are allowed to relax. The polarization is then calculated using the Berry's phase method [43, 44]. The slope of polarization _vs._ strain was calculated and taken to be the \(e_{33}\) component [45] of the piezoelectric tensor for the calculated material. For doping, all combinations of metallic elements that would preserve charge neutrality are considered for each system.
For example, combinations of (+3,+3) or (+2,+4) dopants are considered for AlN, whereas for ZnO (+1,+3) or (+2,+2) combinations are screened. Five different machine learning (ML) methods, linear regression, least absolute shrinkage and selection operator (LASSO), ridge, recursive feature elimination (RFE), and random forest (RF), from Python's sklearn are employed [46]. By training multiple machine learning algorithms on each round of the active learning, we can ensure faster sampling of a larger candidate space, since the algorithms will select different types of materials from the limited initial database, and ultimately we can ensure that if any algorithm finds a candidate to be viable then it is screened. Additionally, after training these different methods, we can compare the relative effectiveness of each in predicting the material proxy from the primary features.

Figure 1: Multi-level screening workflow to explore the compositional phase space of the nine different wurtzite base materials to maximize the piezoelectric tensor component \(e_{33}\). The grey, blue, and green boxes represent data sets, methods, and deliverables, respectively.

To identify the best screening proxy, we started with a moderate dataset of additive elements in the most commonly used wurtzite, aluminum nitride (AlN), to improve the piezoelectric response (Level 1 in Fig. 1). There have been multiple studies showing the effects of co-doping into AlN. Various material descriptors, especially the lattice \(\frac{c}{a}\) ratio, have been identified as key properties that correlate with the piezoelectric response [7, 8, 10, 47, 48, 49]. In total, 53 materials (see supplementary materials) are included in our initial dataset, and the results for the piezoelectric response are summarized in Table 1. Notably, AlN doped with boron and scandium together shows a slight increase in the piezoelectric constant relative to AlN with scandium alone, which is currently attracting great interest [2, 3]. For each material in the dataset, we used 15 possible candidate proxy properties based on previous work on material genes and wurtzite characterization (supplementary material). We then used machine learning feature selection to find the material proxy that is, on average, most correlated with \(e_{33}\), and the results are shown in Figure 2. Note that the feature importance for each feature is averaged across all of the ML methods for proxy selection.

\begin{table} \begin{tabular}{|c|c|} \hline Dopants & \(e_{33}\) \\ \hline None & 1.46 \\ Sc & 1.86 \\ Sc,B & 1.90 \\ Mg,Hf & 1.84 \\ Mg,Ti & 1.60 \\ \hline \end{tabular} \end{table} Table 1: Notable piezoelectric response for dopants in AlN, from among the initial dataset of 53 materials.
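A compact sketch of this model-averaged feature ranking is given below. It is our own illustration, not the authors' code: RFE is omitted (its output is a ranking rather than a weight vector), the synthetic data merely stand in for the 53-material proxy table, and a real pipeline would also standardize the features. Importances are taken as |coefficients| for the linear models and impurity importances for the random forest:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge
from sklearn.ensemble import RandomForestRegressor

def averaged_importance(X, y):
    """Normalized feature importances averaged over several regressors
    (assumes each fitted model has at least one non-zero weight)."""
    models = [LinearRegression(), Lasso(alpha=0.01), Ridge(),
              RandomForestRegressor(n_estimators=200, random_state=0)]
    scores = []
    for model in models:
        model.fit(X, y)
        imp = np.abs(model.coef_) if hasattr(model, "coef_") else model.feature_importances_
        scores.append(imp / imp.sum())
    return np.mean(scores, axis=0)

# Synthetic stand-in for the real data: 53 materials x 15 candidate proxies,
# with proxy 0 playing the role of the lattice c/a ratio.
rng = np.random.default_rng(0)
X = rng.normal(size=(53, 15))
y = -2.0 * X[:, 0] + 0.1 * rng.normal(size=53)
print(averaged_importance(X, y).argmax())  # -> 0, the c/a-like proxy
```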
Starting from the initial 53-material dataset and base materials, we then used machine learning predictions to iteratively select candidates from the 3000 possible combinations to screen. For each atom, uncalculated descriptors were chosen to reflect the fundamental chemical characteristics including: atomic number, atomic mass, elemental melting temperature, charge, column and row on periodic table, atomic radius [50], electronegativity [51], and preferred valence. Primitive features were collected by taking the average, standard deviation, minimum, maximum, and range of the atomic descriptors for each material (a total of 50 primitives for each material). The key screening steps are: i) training Figure 2: Feature selection in the initial dataset of 53 co-doped AlN materials: (a) ML feature selection to identify the most important material proxy. (b) Scatter plot showing the correlation between lattice \(\frac{c}{a}\) and \(e_{33}\) response. Abbreviated proxies in (a) are shown in full in the supplementary material. All of the important proxies represent optimized bond lengths (\(u\) parameter), angles, lattice parameters, or simple electronic properties. a ML model using the primitives of the materials in the database in a specific iteration, ii) using the trained ML model to predict which of the unstudied candidates will have the lowest \(\frac{c}{a}\) ratio to screen next, and iii) adding the newly screened materials to the database (the iterative ML screening in Fig. 1). This process was repeated until the best 500 candidates were screened. During the protocol, it became clear that doped versions of certain base materials are much more likely to have a lower lattice ratio than others (specifically for BeO, AlN, ZnO, GaN). Therefore, in order to screen the best candidates for each base material, once solid solutions of a given base material are no longer predicted to give significant improvement to the proxy of the host, the rest of the elemental combinations of that base material were removed from the remaining active learning candidates. This ensures that we suggest new variants for all nine different base materials, even if some have a higher propensity toward high \(\frac{c}{a}\) ratio than others. Additionally, some elements are inherently unstable in the host material and the relaxation calculations fail for a variety of reasons. In this circumstance, a high value of \(\frac{c}{a}\) ratio of 2.0 was assigned to these co-dopants so that the ML predictions would avoid similar materials. After screening, we performed five-fold cross validation for each ML model. This is done to ensure that the ML models are accurately predicting the lattice ratios from the primitive features so that we can be confident that all viable candidates are screened correctly; the Figure 3: Validation of the predictive power of ML models. The low root mean square error (RMSE) for multiple models indicates that all the viable candidates are screened. results are shown in Fig. 3. We find that the random forest algorithm worked best to predict the lattice ratios. However, with the exception of the LASSO regression, all of the methods used are relatively accurate at predicting the proxy from primitives, which is evident from the low RMSE given in Fig. 3. 
All of the methods have particular difficulty with materials that are predicted to have \(\frac{c}{a}\) ratios near the average yet are calculated by DFT to possess extremely low lattice ratios; this is in part due to the fact that most of these extreme values represent materials that become unstable and deviate from the original wurtzite structure. Furthermore, because the only poor predictions are in the low \(\frac{c}{a}\) regime, we conclude that the unscreened materials predicted to have high \(\frac{c}{a}\) are unlikely to be good candidate piezoelectrics. At the end of the screening, only about 1/5 of all the screened solid solutions are found to be insulating and actually effective at reducing the lattice ratio of the parent material; these materials are then verified by subsequent \(e_{33}\) calculations in accordance with the workflow in Fig. 1. As shown in Fig. 3, the LASSO algorithm had the worst fit among the ML algorithms.

Figure 4: Importance of primary features for predicting the lattice \(\frac{c}{a}\) ratio. "At" stands for atomic, "std" for standard deviation, and the melting temperature is that of each constituent element in solid form. All features involve no calculations and are summary statistics for the constituent elements in each candidate material.

By design, LASSO tends to lead to only a few features carrying very high weight in the trained model; thus, the poorer fit indicates that many primitive features are needed in combination to accurately predict the lattice \(\frac{c}{a}\) ratio. However, determination of the most important features can still provide insight into the structure-property relations, guiding which elements are best to add to each wurtzite system. Since most of the ML algorithms trained on the primitive material features provide reliable prediction of the resulting \(\frac{c}{a}\) ratio and insight into the possible piezoelectric response, we performed feature selection to see which primitives were most important (Fig. 4). Two major atomic characteristics stand out as essential to a low \(\frac{c}{a}\) ratio and consequently improved piezoelectric response: electronegativity and mass. Materials containing highly electronegative elements as the anion, oxygen and nitrogen in particular, had characteristically lower \(\frac{c}{a}\) ratios. Additionally, materials with a high standard deviation of electronegativity among the constituent elements, i.e., containing extremely electropositive elements as well, tended to have lower \(\frac{c}{a}\) ratios. Consistent with the high importance of mean row and atomic mass, materials with a low average row and atomic mass also tended to have low \(\frac{c}{a}\) ratios. This means that the addition of small, electropositive elements to AlN, ZnO, and BeO will reliably lead to heightened piezoelectric response in wurtzites. This finding is aligned with recent evidence of Sc and B enhancing piezoelectricity in AlN, Mg in ZnO, and even the classic example of adding Mg and Nb to lead titanate. However, since the LASSO algorithm had relatively poor predictive power, we posit that these two characteristics alone are not enough to predict the \(\frac{c}{a}\) ratio, and that all (or almost all) of the primitives together provide enough information for accurate prediction of the lattice ratio and piezoelectric properties from elemental descriptors. During the verification of \(e_{33}\) using Berry's phase polarization calculations (DFT response calculations in Fig.
1), we find that while all good piezoelectrics have low \(\frac{c}{a}\), not all wurtzite solid solutions with low \(\frac{c}{a}\) are good piezoelectrics. For example, some of the greatly reduced \(\frac{c}{a}\) ratio systems unphysically distort the base material out of the wurtzite phase, leading to an unstable system. We observe that BeO in particular often becomes unstable with the addition of other elements. Furthermore, we find that the trend of lower \(\frac{c}{a}\) ratio correlating with higher piezoelectric constant is not a single linear trend for all different wurtzite materials, as suggested in previous work [8]; instead, we find a linear trend for dopants within each given base material, as shown in Fig. 5. The resulting 30 materials from this final screening that notably increased the piezoelectric coefficient of their respective base material are listed in Table 2. All of these materials are predicted to increase the \(e_{33}\) of the parent material. However, not all are good candidates for functional materials due to thermodynamic and practical concerns (Fig. 1). For each material, the best candidate doping combinations after considering such factors are highlighted in bold in Table 2. While the proxy \(\frac{c}{a}\) ratio helps to drastically reduce the computational expense of the materials screening, it is important to further validate ML-based predictions. The low \(R^{2}\) values in Fig. 5 indicate that there are complicating factors beyond the \(\frac{c}{a}\) ratio that govern the piezoelectric properties, particularly for zinc-containing materials (Fig. 5). Additional factors, such as system Born effective charges, are discussed in the supplementary material and will be investigated in the future. However, we report that, as a screening parameter, the \(\frac{c}{a}\) proxy has been effective in finding valuable candidate wurtzite solid solutions that can improve the piezoelectric response for all nine base materials.

Figure 5: Lattice ratios and piezoelectric responses for the final screened materials. Each color corresponds to a base material and its respective co-dopants.

Overall, through a multi-step screening protocol and high-throughput study, we identify previously unstudied solid-solution candidates to improve the piezoelectric response of nine chosen wurtzite base materials. During this process, we emphasize the fundamental relation of the \(\frac{c}{a}\) lattice ratio in wurtzites to the piezoelectric response and use it as a proxy to develop a computationally inexpensive protocol for piezoelectric material discovery. Furthermore, we are able to support the idea that primitives can be effectively used in machine learning methods to predict basic material properties such as lattice parameters, given a sizeable dataset. Finally, we propose the best set of candidates for further experimental verification. We hope that the present work serves as a practical example of the design of materials with improved properties through computationally efficient multi-step high-throughput studies.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Dopants & c/a & e\({}_{33}\) & Improvement & Formation Energy & Avg.
Dopant Cost \\ (Base) & & (C/m\({}^{2}\)) & & (eV) & (USD/kg) \\ \hline (AlN) & 1.604 & 1.46 & - & - & - \\ B,Y & 1.572 & 1.64 & 1.12x & 4.299 & 1,200 \\ Mg,Hf & 1.581 & 1.84 & 1.26x & 2.129 & 700 \\ Sc & 1.575 & 1.86 & 1.27x & -0.069 & 15,000 \\ **Sc,B** & **1.569** & **1.90** & **1.30x** & **2.813** & **8,700** \\ Be,Zr & 1.569 & 2.01 & 1.37x & 3.232 & 860 \\ **Be,Hf** & **1.565** & **2.12** & **1.45x** & **2.937** & **1,100** \\ \hline (BeO) & 1.612 & 0.356 & - & - & - \\ Li,Y & 1.458 & 0.791 & 2.22x & 2.338 & 75 \\ Li,Sc & 1.376 & 0.954 & 2.68x & 2.955 & 7,560 \\ Ga,Na & 1.488 & 1.01 & 2.84x & 9.267 & 140 \\ **Mg** & **1.435** & **1.06** & **2.98x** & **4.211** & **2** \\ **Al,Li** & **1.563** & **1.06** & **2.98x** & **3.593** & **60** \\ \hline (CdS) & 1.625 & 0.302 & - & - & - \\ Be,Sr & 1.577 & 0.561 & 1.86x & -1.161 & 420 \\ **B,K** & **1.452** & **0.699** & **2.22x** & **1.811** & **1,200** \\ \hline (CdSe) & 1.624 & 0.082 & - & - & - \\ Na,Sc & 1.618 & 0.264 & 3.22x & -2.627 & 7,500 \\ **Ca,Mg** & **1.621** & **0.353** & **4.30x** & **-3.764** & **5** \\ \hline (GaN) & 1.633 & 0.629 & - & - & - \\ Mg,Ti & 1.614 & 0.889 & 1.41x & -2.628 & 3 \\ Be,Zr & 1.601 & 0.946 & 1.50x & -1.750 & 850 \\ B,Sc & 1.608 & 1.01 & 1.61x & -2.029 & 8,700 \\ Sc & 1.613 & 1.08 & 1.72x & -4.935 & 15,000 \\ **B,Y** & **1.598** & **1.15** & **1.83x** & **-0.886** & **1,200** \\ \hline (ZnO) & 1.610 & 1.27 & - & - & - \\ B,Na & 1.557 & 1.31 & 1.03x & -0.346 & 1,200 \\ Mg & 1.598 & 1.40 & 1.10x & -4.369 & 3 \\ Be,Mg & 1.585 & 1.43 & 1.13x & -3.750 & 420 \\ **Be,Ca** & **1.560** & **1.64** & **1.29x** & **-3.555** & **420** \\ \hline (ZnS) & 1.642 & 0.243 & - & - & - \\ Be,Cd & 1.629 & 0.335 & 1.38x & 1.452 & 420 \\ Be,Mg & 1.627 & 0.340 & 1.40x & -0.276 & 420 \\ **Be,Ca** & **1.608** & **0.522** & **2.15x** & **-0.620** & **420** \\ \hline (ZnSe) & 1.647 & 0.001 & - & - & - \\ **Na,Sc** & **1.644** & **0.140** & **140x** & **-1.896** & **7500** \\ Be,Mg & 1.632 & 0.198 & 198x & 0.154 & 420 \\ \hline (AgI) & 1.629 & 0.098 & - & - & - \\ **Li** & **1.628** & **0.138** & **1.41x** & **-4.948** & **120** \\ \hline \end{tabular} \end{table} Table 2: Notable piezoelectric response for co-doped systems. Base materials are in parentheses and materials written in bold are of particular interest for increasing the piezoelectric constant of the base system. ## Corresponding Author: *E-mail: [email protected] ## ORCID: Drew Behrendt: 0000-0003-4701-2722; Sayan Banerjee: 0000-0002-8586-9236; Jiahao Zhang:0000-0002-8284-8122; Andrew M. Rappe: 0000-0003-4620-6496 ## Acknowledgements: D.B. and J.Z. thank the U. S. Department of Energy, Office of Science, Office of Basic Energy Sciences Energy Frontier Research Centers program under Award Number DE-SC00211118. S.B. acknowledges the Vagelos Institute for Energy Science and Technology for the graduate fellowship. Computational support was provided by the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy, Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
2310.20344
Multi-Valued Verification of Strategic Ability
Some multi-agent scenarios call for the possibility of evaluating specifications in a richer domain of truth values. Examples include runtime monitoring of a temporal property over a growing prefix of an infinite path, inconsistency analysis in distributed databases, and verification methods that use incomplete anytime algorithms, such as bounded model checking. In this paper, we present multi-valued alternating-time temporal logic (mv-ATL*), an expressive logic to specify strategic abilities in multi-agent systems. It is well known that, for branching-time logics, a general method for model-independent translation from multi-valued to two-valued model checking exists. We show that the method cannot be directly extended to mv-ATL*. We also propose two ways of overcoming the problem. Firstly, we identify constraints on formulas for which the model-independent translation can be suitably adapted. Secondly, we present a model-dependent reduction that can be applied to all formulas of mv-ATL*. We show that, in all cases, the complexity of verification increases only linearly when new truth values are added to the evaluation domain. We also consider several examples that show possible applications of mv-ATL* and motivate its use for model checking multi-agent systems.
Wojciech Jamroga, Beata Konikowska, Damian Kurpiewski, Wojciech Penczek
2023-10-31T10:33:38Z
http://arxiv.org/abs/2310.20344v1
# Multi-Valued Verification of Strategic Ability ###### Abstract Some multi-agent scenarios call for the possibility of evaluating specifications in a richer domain of truth values. Examples include runtime monitoring of a temporal property over a growing prefix of an infinite path, inconsistency analysis in distributed databases, and verification methods that use incomplete anytime algorithms, such as bounded model checking. In this paper, we present _multi-valued alternating-time temporal logic_ (mv-ATL\({}_{\rightarrow}^{*}\)), an expressive logic to specify strategic abilities in multi-agent systems. It is well known that, for branching-time logics, a general method for model-independent translation from multi-valued to two-valued model checking exists. We show that the method cannot be directly extended to mv-ATL\({}_{\rightarrow}^{*}\). We also propose two ways of overcoming the problem. Firstly, we identify constraints on formulas for which the model-independent translation can be suitably adapted. Secondly, we present a model-dependent reduction that can be applied to all formulas of mv-ATL\({}_{\rightarrow}^{*}\). We show that, in all cases, the complexity of verification increases only linearly when new truth values are added to the evaluation domain. We also consider several examples that show possible applications of mv-ATL\({}_{\rightarrow}^{*}\) and motivate its use for model checking multi-agent systems. ## 1 Introduction Alternating-time temporal logic ATL\({}^{*}\) and its less expressive variant ATL [1, 2] are probably the most popular logics that allow for reasoning about agents' abilities in strategic encounters. ATL\({}^{*}\) combines features of temporal logic and basic game theory, encapsulated in the main language construct of the ###### Abstract We consider a class of "free"-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional 
(free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional 
(free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional 
(free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-(free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free) (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-(free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free)-dimensional (free) (free)-dimensional (free) Conflicting evidence coming from different sources is another reason why one may need to deal with an imperfect model of the system. This can happen, e.g., in case of a distributed knowledge base, where some components may be outdated and/or contain unreliable information. In classical logic, the deductive explosion would make any attempt at reasoning useless. In multi-valued logics, one can assign a special truth value ("inconsistent") to situations when conflicting evidence about the value of p exists, and conduct the verification in the usual way. Thirdly, even if the model is complete and faithful, the verification procedure can be only partially conclusive. For instance, in bounded model checking [7, 8], a full transition system is given but the formula is checked on runs of length at most \(n\). Now, again, F p is clearly true if we find p to occur on every path in up to \(n\) steps. Otherwise, the output is inconclusive (because p might or might not occur in subsequent steps). The case of G p is analogous. Note that different sources of multiple truth values can easily combine. 
For example, runtime monitoring of a distributed knowledge base may require handling both the possible inconsistency of data across different locations and the uncertainty about how the system state will evolve. Such scenarios are most conveniently modeled by partially, rather than linearly, ordered sets of answers. Overall, it should be up to the designer to decide which domain of truth values will be used for verification. In this paper, we provide a general methodology, together with flexible algorithms, that can be used for any distributive lattice of interpretation.

### Technical contribution and outline of the paper

The focus of the article is on general multi-valued verification of strategic abilities in multi-agent systems. The main contributions are: 1. We propose a multi-valued variant of alternating-time temporal logic ATL\({}^{*}\), called mv-ATL\({}^{*}_{\rightarrow}\), over arbitrary lattices of logical values. We also show that the new logic is a conservative extension of the standard, 2-valued ATL\({}^{*}\). 2. We study the multi-valued model checking problem for mv-ATL\({}^{*}_{\rightarrow}\). Similarly to the previous work on temporal model checking [3, 4], we do not propose dedicated algorithms for mv-ATL\({}^{*}_{\rightarrow}\). Instead, we look for general efficient translations from the multi-valued case to the 2-valued case. In this respect, we: (a) prove that no model-independent translation exists for the whole language of mv-ATL\({}^{*}_{\rightarrow}\), (b) identify a broad subclass of formulas for which such a translation can be obtained, (c) propose a recursive model-dependent translation that works for all instances of the problem. 3. We show that all results are insensitive to the actual notion of strategy and strategic play. In particular, they easily extend to verification of strategies under imperfect information. 4. Finally, we report an implementation of the verification algorithm based on the translation to 2-valued model checking, and evaluate its performance experimentally.

Point 2 might require further explanation. Multi-valued model checking often provides a _conceptual_ approximation of the classical (two-valued) verification problem, especially in cases where the precise model is difficult to obtain. On the other hand, we show a _technical_ reduction of multi-valued model checking to the two-valued variant. Thus, typically, a designer who wants to verify properties of a complex system, expressed in (classical) ATL\({}^{*}\), would first come up with a conceptual approximation of the problem in mv-ATL\({}^{*}_{\rightarrow}\), then use the technical translation of the specification back to model checking of ATL\({}^{*}\), and finally run the latter by means of an existing tool (such as MCMAS [9]).

The structure of the paper is as follows. We begin by presenting the context of our work: the alternating-time logic ATL\({}^{*}\) in Section 2, and distributive lattices in Section 3. We also introduce our working example of drones monitoring the pollution in a city. In Section 4, we define the syntax and semantics of our multi-valued variant of ATL\({}^{*}\), the logic mv-ATL\({}^{*}_{\rightarrow}\). Section 5 discusses the model checking problem for mv-ATL\({}^{*}_{\rightarrow}\) (in general and under certain restrictions). Until that point, we assume the classical interpretation of transitions in models, i.e., the multi-valued approach is applied only to the labeling of _states_.
Section 6 extends our definitions and results to models with many-valued transitions. We also show that the idea of _may/must abstraction_ can be seen as a special case of model checking mv-ATL\({}^{*}_{\rightarrow}\). Section 7 adapts our results to verification of strategies for agents with imperfect information. In Section 8, we present an experimental evaluation of our algorithms, based on the drones scenario. Finally, we conclude in Section 9. **Previous version of the article.** The main concepts and some of the results have appeared in a preliminary form in the conference paper [10]. The present version extends it with proper proofs, a comprehensive discussion on motivation and applicability, and a generalization of the framework to models with multi-valued transitions. We also add a case study (complete with a detailed experimental evaluation) based on a newly proposed benchmark model. ### Related work Multi-valued interpretation of modal formulas has been used in multiple approaches to verification. The main idea was proposed by Fitting [11, 12] already more than 25 years ago. Further fundamental work on multi-valued modal logic includes, e.g., [13] where general properties of multi-valued abstractions were studied, and demonstrated on an example 9-valued bilattice. In the 2000s, a number of works adapted the idea to verification of distributed and multi-agent systems. A variant of CTL\({}^{*}\) for models over finite quasi-boolean lattices was proposed in [3], together with a general translation scheme that reduced multi-valued model checking of CTL\({}^{*}\) specifications to the standard 2-valued case. This was later extended to multi-valued modal \(\mu\)-calculus [14, 15, 16, 17], and to multi-valued modal \(\mu\)-calculus with knowledge [4].1 Our paper follows this line of work, and extends the techniques to strategic operators of ATL\({}^{*}\). We also enrich the language with the two-valued "implication" (or comparison) and "equivalence" operators \(\rightarrow\) and \(\cong\), which provide: (i) the notions of material implication and biconditional, useful in specifying general properties of multi-valued models; (ii) a way of model checking "threshold properties" analogous to probabilistic temporal logics behind PRISM [19]. As it turns out, the new operators require non-trivial treatment, significantly different from the previous works [3, 14, 15, 4, 16, 17]. All the above papers consider _general_ multi-valued verification, i.e., the interpretation can be based on an arbitrary finite lattice. Another line of work considers model checking over specific domains of truth values, tailored to a particular class of scenarios. Model checking methods for the special case of 3-valued temporal logics were discussed in [20, 21, 22] and, recently, in [23]. Two different methods for approximating the standard two-valued semantics of ATL\({}^{*}\) under imperfect information by using three-valued ATL\({}^{*}\) are presented and analysed in [24, 25]. However, these approaches use only Kleene's three valued logic rather than general distributive lattices. Moreover, a partial algorithm for model checking two-valued perfect recall via its approximation as three-valued bounded recall is constructed in [24]. Related approaches include runtime verification, which often uses 3-valued [5] or 4-valued interpretation [6, 14] of temporal formulas. Moreover, a 4-valued semantics has been used to evaluate database queries [26, 27]. 
A 3-valued semantics of strategic abilities in models of perfect information was considered in [28] for verification of alternating \(\mu\)-calculus, and in [29] for abilities expressed in ATL.2 In [30], another 3-valued semantics of ATL was studied, for both perfect and imperfect information. In all those papers (i.e., [28, 29, 30]), the main aim was to verify may/must abstractions of multi-agent systems. Note that, while the agenda of our paper comes close to that of [29, 30], our semantics differs from [29] even in the 3-valued case. Moreover, our multi-valued semantics of ATL is a conservative extension of standard ATL, whereas the one in [29] is not. In contrast, the ATL variant in [30] _is_ a conservative extension of the 2-valued semantics, and can in fact be considered as a very special case of our general semantics.

Footnote 2: More precisely, the semantics in [29] includes agents’ indistinguishability relations in the model, but it allows for non-uniform play, thus effectively assuming that the agents have perfect information about the current state of the system while playing.

A quite different but related strand of research concerns real-valued logics over probabilistic models for temporal [31, 32, 33] and strategic specifications [34]. We also mention the research on probabilistic model checking of temporal and strategic logics [35, 36, 19, 37, 38, 39] that evaluates specifications in the 2-valued domain but recognizes different degrees of success and the need to aggregate them over available strategies and possible paths.

In sum, there have been approaches to general multi-valued model checking of temporal and epistemic properties, but no analogous proposals for strategic properties. Furthermore, there were proposals for multi-valued verification of strategic properties over specific (and usually very simple) sets of truth values, but no framework that studies the same problem for arbitrary lattices. This paper fills the gap, and combines elements of both strands to obtain a general framework for multi-valued verification of ability in systems of interacting agents.

## 2 How to specify strategic abilities

We begin by recalling the basics of two-valued alternating-time temporal logic. We also introduce our working example that will be used throughout the paper.

### Syntax

_Alternating-time temporal logic_ [1, 2] generalizes the branching-time temporal logic CTL\({}^{*}\) by replacing path quantifiers \(\mathsf{E},\mathsf{A}\) with _strategic modalities_ \(\langle\!\langle A\rangle\!\rangle\). Informally, \(\langle\!\langle A\rangle\!\rangle\gamma\) says that a group of agents \(A\) has a collective strategy to enforce temporal property \(\gamma\). ATL\({}^{*}\) formulas can include temporal operators: \(\mathsf{X}\) ("in the next state"), \(\mathsf{G}\) ("always from now on"), \(\mathsf{F}\) ("now or sometime in the future"), and \(\mathsf{U}\) (strong "until"). Similarly to CTL\({}^{*}\) and CTL, we consider two syntactic variants of the alternating-time logic, namely ATL\({}^{*}\) and ATL. Formally, let \(\mathbb{Agt}\) be a finite set of agents, and \(Prop\) a countable set of atomic propositions. The language of ATL\({}^{*}\) is defined as follows: \[\varphi::=\mathsf{p}\mid\neg\varphi\mid\varphi\land\varphi\mid\langle\!\langle A\rangle\!\rangle\gamma,\qquad\gamma::=\varphi\mid\neg\gamma\mid\gamma\land\gamma\mid\mathsf{X}\,\gamma\mid\gamma\,\mathsf{U}\,\gamma,\] where \(A\subseteq\mathbb{Agt}\) and \(\mathsf{p}\in Prop\).
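For concreteness, the two-sorted grammar above can be mirrored directly as an abstract syntax tree. The following is a minimal, illustrative sketch in Python; all class and function names (`Prop`, `Coalition`, `F`, etc.) are our own choices for this illustration and not part of any existing tool, and the atomic proposition in the final line anticipates the drone example introduced below.

```python
from dataclasses import dataclass
from typing import FrozenSet

# State formulas (phi) and path formulas (gamma) are two mutually
# recursive sorts, mirroring the grammar above.

@dataclass(frozen=True)
class Prop:          # atomic proposition p (a state formula)
    name: str

@dataclass(frozen=True)
class Not:           # negation, usable in both sorts
    arg: object

@dataclass(frozen=True)
class And:           # conjunction, usable in both sorts
    left: object
    right: object

@dataclass(frozen=True)
class Coalition:     # <<A>> gamma: a state formula with a path argument
    agents: FrozenSet[str]
    goal: object

@dataclass(frozen=True)
class Next:          # X gamma (a path formula)
    arg: object

@dataclass(frozen=True)
class Until:         # gamma U gamma (strong "until", a path formula)
    left: object
    right: object

def F(gamma):
    """Derived operator: F gamma = top U gamma, with top = not(p and not p)."""
    p = Prop("p")
    return Until(Not(And(p, Not(p))), gamma)

# Example: <<1>> F d-pol1 from the drone scenario below.
spec = Coalition(frozenset({"1"}), F(Prop("d-pol1")))
```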
Traditionally, only the state formulas \(\varphi\) are called formulas of ATL\({}^{*}\). Derived boolean connectives and constants (\(\lor,\top,\bot\)) are defined as usual. "Sometime", "weak until", and "always from now on" are defined as \(\mathsf{F}\,\gamma\equiv\top\,\mathsf{U}\,\gamma\), \(\gamma_{1}\,\mathsf{W}\,\gamma_{2}\equiv\neg((\neg\gamma_{2})\,\mathsf{U}\,(\neg\gamma_{1}\land\neg\gamma_{2}))\), and \(\mathsf{G}\,\gamma\equiv\gamma\,\mathsf{W}\,\bot\). Also, we can use \(\llbracket A\rrbracket\gamma\equiv\neg\langle\!\langle A\rangle\!\rangle\neg\gamma\) to express that, for each strategy of \(A\), property \(\gamma\) holds on some path. ATL (without "star") is the syntactic variant in which strategic and temporal operators are combined into compound modalities: \(\varphi::=\mathsf{p}\mid\neg\varphi\mid\varphi\land\varphi\mid\langle\!\langle A\rangle\!\rangle\mathsf{X}\,\varphi\mid\langle\!\langle A\rangle\!\rangle\varphi\,\mathsf{U}\,\varphi\mid\langle\!\langle A\rangle\!\rangle\varphi\,\mathsf{W}\,\varphi\).

### Models

The semantics of ATL\({}^{*}\) is typically defined over synchronous multi-agent transition systems, i.e., models where all the agents simultaneously decide on their next actions, and the combination of their choices determines the next state; see the following definition.

**Definition 2.1**.: **(CGS)** A _concurrent game structure (CGS)_ is a tuple \(M=\langle\mathbb{Agt},St,Act,d,t,Prop,V\rangle\), which includes nonempty finite sets of: agents \(\mathbb{Agt}\), states \(St\), actions \(Act\), atomic propositions \(Prop\), and a propositional valuation \(V:St\to 2^{Prop}\). The function \(d:\mathbb{Agt}\times St\to 2^{Act}\setminus\{\emptyset\}\) defines the availability of actions. The (deterministic) transition function \(t\) assigns a successor state \(q^{\prime}=t(q,\alpha_{1},\dots,\alpha_{|\mathbb{Agt}|})\) to each state \(q\in St\) and any tuple of actions \(\alpha_{i}\in d(i,q)\), one per agent \(i\in\mathbb{Agt}\), that can be executed in \(q\).

A _pointed CGS_ is a pair \((M,q_{0})\), where \(M\) is a CGS and \(q_{0}\in St\) is the initial state of \(M\). A _path_ \(\lambda=q_{0}q_{1}q_{2}\dots\) in a CGS is an infinite sequence of states such that there is a transition between each \(q_{i},q_{i+1}\) for each \(i\geq 0\). \(\lambda[i]\) denotes the \(i\)th position on \(\lambda\) (starting from \(i=0\)) and \(\lambda[i,\infty]\) the suffix of \(\lambda\) starting with \(i\).

### Semantics

Given a CGS, we define the strategies and their outcomes as follows. A _perfect recall strategy_ (or _IR-strategy_) for agent \(a\) is a function \(s_{a}:St^{+}\to Act\) such that \(s_{a}(q_{0}q_{1}\dots q_{n})\in d(a,q_{n})\). A _memoryless strategy_ (or _Ir-strategy_) for \(a\) is a function \(s_{a}:St\to Act\) such that \(s_{a}(q)\in d(a,q)\). A _collective strategy_ for a group of agents \(A=\{a_{1},\dots,a_{r}\}\) is a tuple of individual strategies \(s_{A}=\langle s_{a_{1}},\dots,s_{a_{r}}\rangle\). Note that \(s_{A}\) only binds the agents in \(A\), while agents outside \(A\) can act as they wish. The set of such strategies is denoted by \(\Sigma_{A}^{\textit{IR}}\) (resp. \(\Sigma_{A}^{\textit{Ir}}\)). The "outcome" function \(out(q,s_{A})\) returns the set of all paths that can occur when agents \(A\) execute strategy \(s_{A}\) from state \(q\) onward.
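To make the above definitions concrete, here is a small, illustrative Python sketch of a CGS with memoryless (Ir) strategies. Since \(out(q,s_{A})\) consists of infinite paths, the sketch only enumerates their finite prefixes up to a given horizon; all names are hypothetical and chosen for readability. A perfect recall strategy would take the whole prefix instead of only its last state.

```python
from itertools import product

# An illustrative encoding of a CGS (Definition 2.1) and of memoryless
# (Ir) strategies. States and actions can be any hashable values.

class CGS:
    def __init__(self, agents, d, t):
        self.agents = agents      # list of agent identifiers
        self.d = d                # d(agent, state) -> iterable of actions
        self.t = t                # t(state, joint_action) -> successor state

    def successors(self, q):
        """All states reachable from q in one synchronous step."""
        choices = [list(self.d(a, q)) for a in self.agents]
        return {self.t(q, joint) for joint in product(*choices)}

def outcome_prefixes(model, q, s_A, coalition, horizon):
    """Finite prefixes (up to 'horizon' steps) of the paths in out(q, s_A):
    members of 'coalition' follow the memoryless strategy s_A (a dict
    agent -> function state -> action); all other agents act arbitrarily."""
    prefixes = [[q]]
    for _ in range(horizon):
        extended = []
        for path in prefixes:
            last = path[-1]
            choices = [[s_A[a](last)] if a in coalition
                       else list(model.d(a, last))
                       for a in model.agents]
            for joint in product(*choices):
                extended.append(path + [model.t(last, joint)])
        prefixes = extended
    return prefixes
```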
The semantics of perfect recall ATL\({}^{*}\) is defined as follows:

\[M,q\models\mathsf{p}\ \text{iff}\ \mathsf{p}\in V(q)\text{, for }\mathsf{p}\in Prop;\]

\[M,q\models\neg\varphi\ \text{iff}\ M,q\not\models\varphi;\]

\[M,q\models\varphi_{1}\land\varphi_{2}\ \text{iff}\ M,q\models\varphi_{1}\ \text{and}\ M,q\models\varphi_{2};\]

\[M,q\models\langle\!\langle A\rangle\!\rangle\gamma\ \text{iff there is a strategy}\ s_{A}\in\Sigma_{A}^{\textit{IR}}\ \text{such that, for each path}\ \lambda\in out(q,s_{A})\text{, we have}\ M,\lambda\models\gamma;\]

\[M,\lambda\models\varphi\ \text{iff}\ M,\lambda[0]\models\varphi;\]

\[M,\lambda\models\neg\gamma\ \text{iff}\ M,\lambda\not\models\gamma;\]

\[M,\lambda\models\gamma_{1}\land\gamma_{2}\ \text{iff}\ M,\lambda\models\gamma_{1}\ \text{and}\ M,\lambda\models\gamma_{2};\]

\[M,\lambda\models\mathsf{X}\,\gamma\ \text{iff}\ M,\lambda[1,\infty]\models\gamma;\ \text{and}\]

\[M,\lambda\models\gamma_{1}\,\mathsf{U}\,\gamma_{2}\ \text{iff there is an}\ i\in\mathbb{N}_{0}\ \text{such that}\ M,\lambda[i,\infty]\models\gamma_{2}\ \text{and}\ M,\lambda[j,\infty]\models\gamma_{1}\ \text{for all}\ 0\leq j<i,\]

where \(\mathbb{N}_{0}\) is the set of non-negative integers. The memoryless semantics of ATL\({}^{*}\) uses strategies of \(\Sigma_{A}^{\textit{Ir}}\) rather than \(\Sigma_{A}^{\textit{IR}}\).

**Example 2.2**.: **(Drones patrolling for pollution)** Consider a team of \(k\) drones monitoring air pollution in the city of Cracow, Poland. For this scenario, we use the map shown in Figure 1. To keep the resulting CGS small, the map is a grid that only includes four locations, and the drones can move between the locations in one direction only (either North or East).3 The initial location is \(0\), and the drones are supposed to reach location \(3\) (called their "target" location). Each drone has a sensor to measure the level of PM2.5 in the air, that is, the concentration of particles with a diameter of less than 2.5 microns. The drone can also communicate with the nearest sensor on the ground that reports the PM2.5 rate on the ground level, as well as the current measurements of temperature, air pressure, and humidity. Note that some measurements may be unobtainable at some locations. In particular, no measurements are available at the start location. We also assume, for the sake of simplicity, that the environment can be viewed as stationary with respect to the movement of drones, i.e., the measurements at a location do not change throughout the patrolling mission.

Figure 1: Map: drone navigation and measurements in an area of Cracow. Location colors indicate whether the PM2.5 readings are within or beyond the norm

Footnote 3: A more complex map will be used for the experiments in Section 8.

Figure 2 presents a pointed concurrent game structure \(M_{1}\) that models the above scenario for \(k=2\) drones. Every drone is a separate agent, with two actions available: \(N\) (fly North) and \(E\) (fly East). Due to a limited battery capacity, a drone can only visit \(l=2\) locations before the battery dies. Each state of the model includes information about the current locations of the drones, possibly distinguishing the locations that have been already visited by the team.
Note that in our simple scenario the only pair of locations that can be reached by the drones via two different routes is \((3,3)\). The corresponding two states are accordingly labeled \((3,3)_{1}\) and \((3,3)_{2}\). Most of the atomic propositions refer to the pollution measurements available to a drone at its current location. That is, \(\mathsf{d}\text{-}\mathsf{pol}_{i}\) indicates that the \(i\)th drone registers pollution; more precisely: the sensor of drone \(i\) reports that the level of PM2.5 exceeds the norm. Similarly, \(\mathsf{d}\text{-}\mathsf{ok}_{i}\) indicates that the sensor of drone \(i\) reports a level of PM2.5 within the norm. Moreover, \(\mathsf{g}\text{-}\mathsf{pol}_{i}\) (resp. \(\mathsf{g}\text{-}\mathsf{ok}_{i}\)) says that the ground sensor nearest to drone \(i\) reports a level of PM2.5 exceeding the norm (resp. within the norm). Proposition target indicates states where all drones have reached the target location. Finally, proposition allvisited labels the states where the team has already visited all locations in the map: in case of \(M_{1}\), the only such state is \((3,3)_{2}\). \(\Box\)

Figure 2: Model \(M_{1}\): autonomous drones monitoring pollution

**Example 2.3**.: **(Drones, ctd.)** For the model in Example 2.2, we have, for instance, \(M_{1},(0,0)\models\langle\!\langle 1\rangle\!\rangle\mathsf{F}\,\mathsf{d}\mathsf{-}\mathsf{pol}_{1}\): drone \(1\) has a strategy ensuring that its sensor will eventually register pollution. The strategy itself is simple: when in state \((0,0)\), fly North. Moreover, \(M_{1},(0,0)\models\langle\!\langle 1\rangle\!\rangle\mathsf{F}\,\mathsf{d}\mathsf{-}\mathsf{ok}_{1}\): it also has a strategy for reaching a location where it registers no pollution. This time the simplest strategy is to fly East. In fact, \(M_{1},(0,0)\models\langle\!\langle 1\rangle\!\rangle(\mathsf{F}\,\mathsf{d}\mathsf{-}\mathsf{pol}_{1}\wedge\mathsf{F}\,\mathsf{d}\mathsf{-}\mathsf{ok}_{1})\): there is a single strategy for achieving both goals. The drone needs to fly East first, and then fly North. Furthermore, no drone can assure on its own that all of the locations will eventually be visited: \(M_{1},(0,0)\models\neg\langle\!\langle 1\rangle\!\rangle\mathsf{F}\) allvisited \(\wedge\neg\langle\!\langle 2\rangle\!\rangle\mathsf{F}\) allvisited. This can only be ensured if both drones cooperate: \(M_{1},(0,0)\models\langle\!\langle 1,2\rangle\!\rangle\mathsf{F}\) allvisited. On the other hand, the drones are bound to end up at the target location, no matter what they decide to do: \(M_{1},(0,0)\models\langle\!\langle\emptyset\rangle\!\rangle\mathsf{F}\) target. \(\Box\)

## 3 Multi-valued domains of interpretation

As the formulas of our multi-valued version of \(\text{ATL}^{*}\) will be interpreted in distributive lattices of truth values [40], we recall the relevant notions and results in this section.

### Lattices of truth values

**Definition 3.1**.: A _lattice_ is a partially ordered set \(\mathsf{L}=(\mathsf{L},\leq)\), where every pair of elements \(x,y\in\mathsf{L}\) has the greatest lower bound (called _meet_ and denoted by \(x\sqcap y\)) and the least upper bound (called _join_ and denoted by \(x\sqcup y\)). Note that the meet and join of any \(x,y\in\mathsf{L}\) are uniquely determined due to the antisymmetry of \(\leq\). In what follows, we only consider finite lattices. We denote the least and the greatest elements of \(\mathsf{L}\) by \(\bot\), \(\top\), respectively.
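As a side illustration of Definition 3.1, the following Python sketch encodes a finite lattice explicitly by its order relation (assumed to be given transitively closed) and recovers meet and join by scanning the common bounds. It is deliberately naive and only meant to make the definitions executable; the class name and encoding are our own.

```python
# A naive, illustrative encoding of a finite lattice (Definition 3.1).
# 'leq_pairs' must list the order relation transitively closed.

class FiniteLattice:
    def __init__(self, elements, leq_pairs):
        self.elements = set(elements)
        self._leq = set(leq_pairs) | {(x, x) for x in elements}

    def leq(self, x, y):
        return (x, y) in self._leq

    def meet(self, x, y):
        """Greatest lower bound: the common lower bound above all others."""
        lower = [z for z in self.elements
                 if self.leq(z, x) and self.leq(z, y)]
        return max(lower, key=lambda z: sum(self.leq(w, z) for w in lower))

    def join(self, x, y):
        """Least upper bound: the common upper bound below all others."""
        upper = [z for z in self.elements
                 if self.leq(x, z) and self.leq(y, z)]
        return max(upper, key=lambda z: sum(self.leq(z, w) for w in upper))

# The total order 3 (see Example 3.3 below), with bot <= u <= top:
L3 = FiniteLattice(["bot", "u", "top"],
                   [("bot", "u"), ("u", "top"), ("bot", "top")])
assert L3.meet("u", "top") == "u" and L3.join("bot", "u") == "u"
```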
Also, we write (i) \(x_{1}<x_{2}\) iff \(x_{1}\leq x_{2}\) and \(x_{1}\neq x_{2}\), and (ii) \(x_{1}\bowtie x_{2}\) iff neither \(x_{1}\leq x_{2}\) nor \(x_{2}\leq x_{1}\). Moreover, let:

* \(\uparrow\!x=\{y\in\mathsf{L}\mid x\leq y\}\) denote the upward closure of \(x\), and
* \(\downarrow\!x=\{y\in\mathsf{L}\mid y\leq x\}\) denote the downward closure of \(x\).

A lattice \(\mathsf{L}^{\prime}=(\mathsf{L}^{\prime},\leq^{\prime})\) is a sublattice of a lattice \(\mathsf{L}=(\mathsf{L},\leq)\) if \(\mathsf{L}^{\prime}\subseteq\mathsf{L}\) and \(\leq^{\prime}=\leq\cap(\mathsf{L}^{\prime}\times\mathsf{L}^{\prime})\).

**Definition 3.2**.: A lattice \(\mathsf{L}=(\mathsf{L},\leq)\) is _distributive_ if, for any \(x,y,z\in\mathsf{L}\), the following two conditions hold: (i) \(z\sqcup(x\sqcap y)=(z\sqcup x)\sqcap(z\sqcup y)\), (ii) \(z\sqcap(x\sqcup y)=(z\sqcap x)\sqcup(z\sqcap y)\).

Note that, for any lattice \(\mathsf{L}\), the operations \(\sqcup\) and \(\sqcap\) are associative.

**Example 3.3**.: **(Some useful lattices)** Figure 3 presents five distributive lattices whose applicability is motivated by clear practical intuitions.

Figure 3: Distributive lattices and their join-irreducible elements

The total order \(\mathbf{3}\) is the most popular lattice in multi-valued verification. The intuition is simple: \(\top\) stands for absolute truth, \(\bot\) for absolute falsity, and \(u\) can be read as "unknown" or "undefined." That is, when a formula \(\varphi\) is assigned the value \(u\), this indicates that the statement represented by \(\varphi\) cannot be conclusively evaluated (its truth value -- in the classical sense -- cannot be determined, or even does not exist at the moment). The lattice is very often used in model checking approaches based on abstraction, with \(u\) assigned to formulas for which the verification has proved inconclusive.

The total order \(\mathbf{4}\) allows for representing graded uncertainty. For instance, in 4-valued approaches to runtime monitoring, \(s\) is interpreted as "still possibly true," and \(n\) stands for "not proved false yet." In evidence-based reasoning, the values can correspond to situations when there is much (respectively, little) evidence supporting \(\varphi\). The lattice can be generalized to the \(k\)-valued linear order \(\mathbf{k}\), useful in scenarios where the amount of positive/negative evidence is weakly indicative for the truth of a statement. Consider a corpus of data coming from event logs that support or reject proposition \(\mathsf{p}\). Then, the logical value of \(\mathsf{p}\) can be, e.g., defined as the difference between the amounts of positive and negative evidence. A more sophisticated, partially ordered lattice could also involve the number of conflicts and the support set size.

The partial order \(\mathbf{2}\times\mathbf{2}\) can be used to interpret statements with evidence coming from two different, possibly disagreeing sources \(A\) and \(B\). Then, the value \(a\) can be read as "true according to source \(A\), but not necessarily according to \(B\)," and analogously for \(b\). The dual interpretation is also possible, i.e., we can use \(a\) to represent "false according to source \(A\), but not necessarily according to \(B\)", and likewise for \(b\). Actually, these two interpretations correspond to two different choices of the set \(\mathcal{D}\) of so-called _designated values_, i.e., values corresponding to truth in classical logic and representing satisfaction of a formula.
Namely, the first interpretation corresponds to \(\mathcal{D}=\{a,\top\}\), and the second -- to \(\mathcal{D}=\{b,\top\}\).

Another useful lattice, \(\mathbf{2}+\mathbf{2}\times\mathbf{2}\), allows for a natural representation of both uncertainty and disagreement. It provides truth values for statements with inconsistent evidence (\(i\)) and inconclusive evidence (\(u\)). Combinations of inconsistency and uncertainty can be easily obtained by join and meet (\(u\sqcup i\), \(u\sqcap i\)).

For a multi-valued interpretation of the formulas in the drone model, we propose another lattice, denoted by \(\mathbf{2}+\mathbf{2}\times\mathbf{2}+\mathbf{2}\times\mathbf{2}\). The lattice combines two instances of \(\mathbf{2}\times\mathbf{2}\): one representing incomplete evidence of truth, and the other incomplete evidence of falsity. The idea is that a drone \(i\) will evaluate the truth of the proposition \(\mathsf{pol}_{i}\) ("according to \(i\), its current location is polluted") based on the readings from its own sensor and the nearest ground sensor. Besides that, the lattice includes explicit nodes for the maximal and minimal elements, similarly to the previous lattice. This gives us the following basic truth values and their interpretation:

* \(\top\): both readings indicate presence of pollution at the location,
* \(\top_{d}\): reading from the drone sensor indicates pollution, while the ground sensor indicates no pollution or provides no reading,
* \(\top_{g}\): reading from the ground sensor shows pollution; absent or negative reading from the drone,
* \(u\): there are no readings, neither from the ground nor from the drone,
* \(\bot_{d}\): drone sensor indicates no pollution; there is no reading from the ground,
* \(\bot_{g}\): ground sensor indicates no pollution; no reading from the drone,
* \(\bot\): no pollution (both readings are within the norm). \(\Box\)

Not every lattice is distributive, as shown in Figure 4. However, distributive lattices have a very simple characterization: a lattice is distributive iff it contains neither M5 nor N5 as a sublattice [40].

Figure 4: Non-distributive lattices M5 and N5

**Remark 3.4**.: **(Quasi-boolean lattices and De Morgan algebras)** The operations of join and meet are natural semantic counterparts of disjunction and conjunction. Some multi-valued approaches also add the _complement_ operation \(\sim\) to the lattice, as the semantic counterpart of multi-valued negation. A lattice with complement is usually called _quasi-Boolean_, and when distributive it is referred to as a _De Morgan algebra_. The most popular case is the lattice underlying 3-valued Kleene logic, used e.g. in [30, 25]. However, the choice of a generic complement to suit any lattice is problematic from the conceptual point of view. For instance, what should be the "opposite" value to \(i\) in the lattice \(\mathbf{2}+\mathbf{2}\times\mathbf{2}\)? In other words, if statement \(\varphi\) is assessed as "inconsistent", what should be the evaluation of "not \(\varphi\)"? There is no uniform answer to that question, so in a general approach paper, like ours, a good strategy is to avoid the negation as much as possible. This is why, instead of negation, we have chosen to use another non-monotonic connective: the two-valued implication representing the lattice order, which is useful in all practical cases when we want to compare the truth values of two formulas.

### Join-irreducible elements

**Definition 3.5**.: Let \(\mathsf{L}=(\mathsf{L},\leq)\) be a lattice.
An element \(\ell\in\mathsf{L}\) is called _join-irreducible_ iff \(\ell\neq\bot\) and, for any \(x,y\in\mathsf{L}\), if \(\ell=x\sqcup y\), then either \(\ell=x\) or \(\ell=y\). The set of all join-irreducible elements of \(\mathsf{L}\) is denoted by \(\mathcal{JI}(\mathsf{L})\).

It is well known [41] that every element \(x\neq\bot\) of a finite distributive lattice can be uniquely decomposed into the join of all join-irreducible elements in its downward closure, i.e.,

\[x=\bigsqcup(\mathcal{JI}(\mathsf{L})\cap\downarrow\!x) \tag{1}\]

**Example 3.6**.: The join-irreducible elements of the distributive lattices in Figure 3 are marked with black dots. All other elements can be decomposed into the join of some join-irreducible elements. \(\Box\)

We will use the characterization (1) to define translations from multi-valued to standard model checking through the following theorem.

**Theorem 3.7**.: **([4])** Let \(\mathsf{L}\) be a finite distributive lattice, and let \(\ell\in\mathcal{JI}(\mathsf{L})\). Then the _threshold function_ \(f_{\ell}:\mathsf{L}\longrightarrow\{\bot,\top\}\), defined by

\[f_{\ell}(x)=\left\{\begin{array}{ll}\top&\mbox{if }x\geq\ell\\ \bot&\mbox{otherwise}\end{array}\right.\]

preserves arbitrary bounds, i.e., for an arbitrary set of indices \(I\), we have:

\[f_{\ell}(\bigsqcap_{i\in I}x_{i})=\bigsqcap_{i\in I}f_{\ell}(x_{i})\qquad\quad\text{and}\qquad\quad f_{\ell}(\bigsqcup_{i\in I}x_{i})=\bigsqcup_{i\in I}f_{\ell}(x_{i}). \tag{2}\]

**Remark 3.8**.: The above does not hold for lattices which are not distributive. To see this, consider the element \(\ell_{1}\) of lattice **M5**, which is join-irreducible. However, \(f_{\ell_{1}}\) does not preserve the upper bounds, as \(f_{\ell_{1}}(\ell_{2}\sqcup\ell_{3})=f_{\ell_{1}}(\top)=\top\) whereas \(f_{\ell_{1}}(\ell_{2})\sqcup f_{\ell_{1}}(\ell_{3})=\bot\sqcup\bot=\bot\).

## 4 Multi-valued strategic logic \(\mathrm{mv}\text{-}\mathrm{ATL}^{*}_{\rightarrow}\)

In this section we extend the syntax and semantics of \(\mathrm{ATL}^{*}\) to allow for multi-valued reasoning. That is, we propose a variant of \(\mathrm{ATL}^{*}\) where formulas are interpreted in an arbitrary lattice \(\mathsf{L}=(\mathsf{L},\leq)\). It is sometimes useful to refer to logical values in the object language. To enable this, we assume that a natural interpretation of suitable constants is given, in the following way.

**Definition 4.1**.: **(Interpreted lattice)** Let \(\mathcal{C}\) be a countable set of symbols. An interpreted lattice over \(\mathcal{C}\) is a triple \(\mathsf{L}^{+}=(\mathsf{L},\leq,\sigma)\), where \((\mathsf{L},\leq)\) is a lattice, and \(\sigma:\mathcal{C}\rightarrow\mathsf{L}\) is an interpretation of the symbols in \(\mathcal{C}\) as truth values in \(\mathsf{L}\).

To explicitly show the connection between the named truth values and their names in \(\mathcal{C}\), for any interpreted lattice \(\mathsf{L}^{+}=(\mathsf{L},\leq,\sigma)\) and any truth value \(x\in\sigma(\mathcal{C})\), we will use the notation \(\overline{x}\) to denote an arbitrarily selected symbol \(c\in\mathcal{C}\) such that \(\sigma(c)=x\). We do not make any specific assumptions about \(\sigma\). In particular, we do not assume that \(\sigma\) must be surjective, as in many situations only some truth values in \(\mathsf{L}\) need to be referred to in formulas. However, in all the examples that follow in this paper, \(\sigma\) actually _is_ a bijection, since every truth value used there has a specific purpose.
Thus, for all our examples, it holds that \(\mathcal{C}=\{\overline{l}\mid l\in\mathsf{L}\}\), and \(\sigma(\overline{x})=x\) for all \(x\in\mathsf{L}\).

### Syntax

Since, as explained in Remark 3.4, multi-valued negation can be problematic from the conceptual viewpoint, we will instead use the binary implication operator \(\rightarrow\), corresponding to the lattice order. Our implication operator is similar to the well-known implication of many-valued Gödel-Dummett logic, and more generally to relevant implication or the residuum of the lattice meet, of which the former is just a special case. However, our implication is a two-valued operator, which makes it better suited for proof-system purposes. As it can be used for comparing truth values of formulas, it is of obvious practical importance in many applications, where we are mainly interested in ascertaining whether the truth value of a given formula \(\varphi\) is at least as great as the logical value of some other formula \(\psi\) (see Example 4.3 below for more explanations and illustration). Moreover, in case of two-valued logic, classical negation can be expressed using \(\rightarrow\) and the constant representing \(\bot\). To increase the expressive power of the language, we also allow for the use of symbols in \(\mathcal{C}\). The resulting logic is called \(\text{mv-ATL}^{*}_{\rightarrow}\) and has the following syntax:

\[\begin{array}{l}\varphi::=c\mid\mathsf{p}\mid\varphi\land\varphi\mid\varphi\lor\varphi\mid\varphi\rightarrow\varphi\mid\langle\!\langle A\rangle\!\rangle\gamma\mid\llbracket A\rrbracket\gamma,\\ \gamma::=\varphi\mid\gamma\land\gamma\mid\gamma\lor\gamma\mid\mathsf{X}\,\gamma\mid\gamma\,\mathsf{U}\,\gamma\mid\gamma\,\mathsf{W}\,\gamma,\end{array}\]

where \(\mathsf{p}\in Prop\), \(A\subseteq\mathbb{Agt}\), and \(c\in\mathcal{C}\), with \(Prop\) being a countable set of atomic propositions, and \(\mathcal{C}\) a countable set of constants. In what follows, by an _implication formula_ we mean any formula of the form \(\varphi_{1}\rightarrow\varphi_{2}\). Additionally, we define the equivalence formula \(\varphi_{1}\cong\varphi_{2}\) as an abbreviation of \((\varphi_{1}\rightarrow\varphi_{2})\land(\varphi_{2}\rightarrow\varphi_{1})\). The sublogic of \(\text{mv-ATL}^{*}_{\rightarrow}\) without the implication operator will be denoted by \(\text{mv-ATL}^{*}\).

### Semantics

The semantics of \(\text{mv-ATL}^{*}_{\rightarrow}\) is defined over concurrent game structures with multi-valued interpretation of atomic propositions.

**Definition 4.2**.: **(Multi-valued CGS)** Let \(\mathsf{L}^{+}=(\mathsf{L},\leq,\sigma)\) be an interpreted lattice. A _multi-valued concurrent game structure (mv-CGS)_ over \(\mathsf{L}^{+}\) is a tuple \(M=\langle\mathtt{Agt},St,Act,d,t,Prop,\,V,\mathsf{L}^{+}\rangle\), where \(\mathtt{Agt}\), \(St\), \(Act\), \(d\), \(t\), \(Prop\) are as before, and \(V:Prop\times St\to\mathsf{L}\) assigns, in each state, a truth value from the logical domain \(\mathsf{L}\) to every atomic proposition.

**Example 4.3**.: **(Drones ctd.)** A multi-valued model of the drone scenario is presented in Figure 5. To evaluate atomic propositions and their negations, we use the lattice \(\mathbf{2}+\mathbf{2}\times\mathbf{2}+\mathbf{2}\times\mathbf{2}\) introduced in Section 3.1. Each proposition \(\mathsf{pol}_{i},\ i=1,\ldots,k\), refers to the level of pollution from the viewpoint of drone \(i\), i.e., as given by the measurements available at the current location of the drone.
Whenever a proposition evaluates to \(\bot\), we omit that valuation from the picture. \(\Box\)

Figure 5: Multi-valued model \(M_{2}\) for the drone scenario

Logical operators can often be naturally interpreted as either maximizers or minimizers of the truth values. For example, disjunction (\(\varphi\vee\psi\)) can be understood as a maximizer ("the most that we can hope to make of either \(\varphi\) or \(\psi\)"), and conjunction as a minimizer ("the least that we can guarantee for both \(\varphi\) and \(\psi\)"). This extends to existential quantification (maximizing) and universal quantification (minimizing) over paths, strategies, moments in time, etc.

Formally, let \(M=\langle\mathtt{Agt},St,Act,d,t,Prop,\,V,\mathsf{L}^{+}\rangle\) be an mv-CGS over \(\mathsf{L}^{+}=(\mathsf{L},\leq,\sigma)\). The valuation function \([\cdot]\) is given as below. We sometimes use \(\bigsqcap_{X}\{Y\}\) as a shorthand for \(\bigsqcap\{Y\mid X\}\), and similarly for the supremum. For any \(q\in St\) and any path \(\lambda\) in \(M\), we define:

\[[c]_{M,q}=\sigma(c)\quad\text{for }c\in\mathcal{C};\]

\[[\mathsf{p}]_{M,q}=V(\mathsf{p},q)\quad\text{for }\mathsf{p}\in Prop;\]

\[[\varphi_{1}\wedge\varphi_{2}]_{M,q}=[\varphi_{1}]_{M,q}\sqcap[\varphi_{2}]_{M,q};\]

\[[\varphi_{1}\vee\varphi_{2}]_{M,q}=[\varphi_{1}]_{M,q}\sqcup[\varphi_{2}]_{M,q};\]

\[[\gamma_{1}\wedge\gamma_{2}]_{M,\lambda}=[\gamma_{1}]_{M,\lambda}\sqcap[\gamma_{2}]_{M,\lambda}\quad\text{and}\quad[\gamma_{1}\vee\gamma_{2}]_{M,\lambda}=[\gamma_{1}]_{M,\lambda}\sqcup[\gamma_{2}]_{M,\lambda};\]

\[[\varphi]_{M,\lambda}=[\varphi]_{M,\lambda[0]};\]

\[[\mathsf{X}\,\gamma]_{M,\lambda}=[\gamma]_{M,\lambda[1,\infty]};\]

\[[\gamma_{1}\,\mathsf{U}\,\gamma_{2}]_{M,\lambda}=\bigsqcup_{i\in\mathbb{N}_{0}}\big{(}[\gamma_{2}]_{M,\lambda[i,\infty]}\sqcap\bigsqcap_{0\leq j<i}[\gamma_{1}]_{M,\lambda[j,\infty]}\big{)};\]

\[[\gamma_{1}\,\mathsf{W}\,\gamma_{2}]_{M,\lambda}=\bigsqcap_{i\in\mathbb{N}_{0}}\{[\gamma_{1}]_{M,\lambda[i,\infty]}\}\sqcup\bigsqcup_{i\in\mathbb{N}_{0}}\big{(}[\gamma_{2}]_{M,\lambda[i,\infty]}\sqcap\bigsqcap_{0\leq j<i}[\gamma_{1}]_{M,\lambda[j,\infty]}\big{)};\]

\[[\langle\!\langle A\rangle\!\rangle\gamma]_{M,q}=\bigsqcup_{s_{A}\in\Sigma_{A}}\bigsqcap_{\lambda\in out(q,s_{A})}\{[\gamma]_{M,\lambda}\};\]

\[[\llbracket A\rrbracket\gamma]_{M,q}=\bigsqcap_{s_{A}\in\Sigma_{A}}\bigsqcup_{\lambda\in out(q,s_{A})}\{[\gamma]_{M,\lambda}\};\]

\[[\varphi_{1}\rightarrow\varphi_{2}]_{M,q}=\top\ \text{if}\ [\varphi_{1}]_{M,q}\leq[\varphi_{2}]_{M,q}\text{, and }\bot\text{ otherwise.}\]

It is worth noting that our implication operator differs from the well-known residuum of the lattice meet in being two-valued -- which makes it better suited for use in any proof system, and more intuitive in the specification of many requirements.

The semantics of the two "until" operators demands a more detailed explanation. The computation of \([\gamma_{1}\,\mathsf{U}\,\gamma_{2}]_{M,\lambda}\) seeks a position \(i\) on path \(\lambda\) for which the value of \(\gamma_{2}\) at \(\lambda[i]\), together with the values of \(\gamma_{1}\) at all the points preceding \(\lambda[i]\), is guaranteed maximal. The semantics of \(\gamma_{1}\,\mathsf{W}\,\gamma_{2}\) is based on the well-known unfolding \(\gamma_{1}\,\mathsf{W}\,\gamma_{2}\equiv(\mathsf{G}\,\gamma_{1})\vee(\gamma_{1}\,\mathsf{U}\,\gamma_{2})\), transformed here to a multi-valued interpretation.
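The following Python sketch shows how the clause for \(\mathsf{U}\) can be evaluated in practice. It is a hedged illustration only: the per-position suffix values of \(\gamma_{1},\gamma_{2}\) are assumed to be precomputed on a sufficiently long finite unrolling of the path (for an ultimately periodic path over a finite lattice, the running meets stabilize after finitely many steps, so such an unrolling always exists), and `lat` can be any object with `meet`/`join` and `bot`/`top` attributes, e.g., the `FiniteLattice` sketch of Section 3 extended accordingly.

```python
from functools import reduce

def big_join(lat, values):
    return reduce(lat.join, values, lat.bot)   # empty join = bot

def until_value(lat, gamma1_vals, gamma2_vals):
    """[gamma1 U gamma2]: the join over positions i of
    ([gamma2] at i  meet  the meet of [gamma1] over all j < i),
    computed over finite lists of per-position values."""
    disjuncts = []
    prefix_meet = lat.top                      # empty meet = top
    for v1, v2 in zip(gamma1_vals, gamma2_vals):
        disjuncts.append(lat.meet(v2, prefix_meet))
        prefix_meet = lat.meet(prefix_meet, v1)
    return big_join(lat, disjuncts)
```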
Note also that in case of the derived temporal operators "sometime" and "always" the semantic rules reduce to:

\[[\mathsf{F}\,\gamma]_{M,\lambda}=\bigsqcup_{i\in\mathbb{N}_{0}}[\gamma]_{M,\lambda[i,\infty]};\]

\[[\mathsf{G}\,\gamma]_{M,\lambda}=\bigsqcap_{i\in\mathbb{N}_{0}}[\gamma]_{M,\lambda[i,\infty]}.\]

Thus, for instance, the formula \(\langle\!\langle A\rangle\!\rangle\mathsf{F}\,\mathsf{pol}\) can be read as: "the maximal level of pollution readings that \(A\) can guarantee to reach." Clearly, such statements do not always submit to intuitive understanding, in particular when nested strategic operators are used. Because of that, we will stick to simple formulas in our working examples, that is, ones that are relatively easy to read.

**Example 4.4**.: **(Drones ctd.)** For the model in Figure 5, we have \([\langle\!\langle 1\rangle\!\rangle\mathsf{F}\,\mathsf{pol}_{1}]_{M_{2},(0,0)}=\top\): there is a strategy for drone 1 to surely detect pollution (the strategy being to fly North in state \((0,0)\), and then East in \((1,1)\) or \((1,2)\)). Similarly, for the other drone we have \([\langle\!\langle 2\rangle\!\rangle\mathsf{F}\,\mathsf{pol}_{2}]_{M_{2},(0,0)}=\top\) (the same strategy, but now executed by drone 2). On the other hand, \([\langle\!\langle 1\rangle\!\rangle\mathsf{G}\,\mathsf{pol}_{1}]_{M_{2},(0,0)}=u\): the maximal _guaranteed_ level of detection throughout the mission is \(u\) (obtained by the same strategy again). This means that if drone 1 wants to maximize its detection level, the best it can achieve is to keep it consistently at the level of "uncertain" or higher. Finally,

\[[\langle\!\langle 1,2\rangle\!\rangle\mathsf{F}\,(\text{target}\wedge\text{allvisited}\wedge(\mathsf{pol}_{1}\vee\mathsf{pol}_{2}))]_{M_{2},(0,0)}=\top_{d}.\]

That is, if the drones cooperate, and their goal is to reach the target, visit all the locations on the way, and at the end get the pollution detected by at least one of them, then their degree of success is \(\top_{d}\) (pollution indicated by the drone sensor but not by the ground sensor). \(\Box\)

The logical constants we have introduced are especially useful in implication formulas, as the subsequent example demonstrates.

**Example 4.5**.: **(Implication formulas)** The "implication" operator provides several interesting specification patterns. For instance, it allows for specifications that are accepted when the "strength" of a property reaches a given threshold, similarly to the probabilistic approaches of [19, 39]. As an example, the formula \(\overline{u}\rightarrow\langle\!\langle 1\rangle\!\rangle\mathsf{G}\,\mathsf{pol}_{1}\) can be used to specify that the truth value of \(\langle\!\langle 1\rangle\!\rangle\mathsf{G}\,\mathsf{pol}_{1}\) is at least \(u\) (intuitively: there is no evidence that the formula is false). It is easy to see that the formula is true in the model of Figure 5; formally: \([\overline{u}\rightarrow\langle\!\langle 1\rangle\!\rangle\mathsf{G}\,\mathsf{pol}_{1}]_{M_{2},(0,0)}=\top\). Naturally, any stronger requirement on the value of \(\langle\!\langle 1\rangle\!\rangle\mathsf{G}\,\mathsf{pol}_{1}\) evaluates to "false," e.g., \([\overline{\top}\rightarrow\langle\!\langle 1\rangle\!\rangle\mathsf{G}\,\mathsf{pol}_{1}]_{M_{2},(0,0)}=\bot\).
Moreover, the formula \(\langle\!\langle 1\rangle\!\rangle\mathsf{F}\,\mathsf{pol}_{1}\rightarrow\langle\!\langle 2\rangle\!\rangle\mathsf{F}\,\mathsf{pol}_{2}\) says that the ability of drone 2 to spot pollution is at least as good as that of drone 1 (the formula evaluates to \(\top\) in \(M_{2},(0,0)\)). Finally, \(\langle\!\langle 1\rangle\!\rangle\mathsf{F}\,(\mathsf{pol}_{1}\cong\overline{\top_{g}})\) says that the first drone has a strategy to ensure that it will reach a location where only the ground sensor indicates pollution. Clearly, the last formula evaluates to \(\bot\) in \(M_{2},(0,0)\). \(\Box\)

We note that most approaches to general multi-valued model checking of temporal specifications [3, 14, 4, 16] also allow for _multi-valued transitions_ in the models, analogous to probabilistic transitions in Markov chains and Markov Decision Processes. That is, transitions can be assigned "weights" drawn from the same algebra \(\mathsf{L}\). Similarly, most 3-valued approaches to temporal abstraction and model checking implicitly assume 3-valued transitions by distinguishing between _may_ and _must_ transitions [46, 20, 21, 22]. However, the two approaches differ in how such transitions affect the semantics of formulas with universal quantification (such as "for all paths \(\gamma\)"). In the general multi-valued approach, the "weaker" the path is, the more it decreases the value of the formula. In the 3-valued approach, "weaker" paths have less influence on the overall value. We do not engage in this discussion here, and leave a proper treatment of multi-valued transitions until Section 6.

### Truth Levels

We assume that \(\top\) is a single designated value, standing for full logical truth. In consequence, the truth and validity of formulas can be defined in a straightforward way as follows:

**Definition 4.6**.: **(Validity levels)** Let \(M\) be an mv-CGS, \(q\) a state in \(M\), and \(\varphi\) a state formula of mv-ATL\({}_{\rightarrow}^{*}\). Then:

* \(\varphi\) is _true in_ \(M,q\) (written \(M,q\models\varphi\)) iff \([\varphi]_{M,q}=\top\).
* \(\varphi\) is _valid in_ \(M\) (written \(M\models\varphi\)) iff \(\varphi\) is true in every state of \(M\).
* \(\varphi\) is _valid_ (written \(\models\varphi\)) iff \(\varphi\) is valid in every mv-CGS \(M\).
* Additionally, for a path formula \(\gamma\), we say that \(\gamma\) holds on path \(\lambda\) in an mv-CGS \(M\) (written \(M,\lambda\models\gamma\)) iff \([\gamma]_{M,\lambda}=\top\).

We now show that mv-ATL\({}^{*}_{\rightarrow}\) agrees with standard ATL\({}^{*}\) on 2-valued models, unlike the 3-valued version of ATL\({}^{*}\) from [29].

**Theorem 4.7**.: The logic mv-ATL\({}^{*}_{\rightarrow}\) is a conservative extension of ATL\({}^{*}\), i.e., every CGS \(M\) for ATL\({}^{*}\) can be identified with an mv-CGS \(M^{\prime}\) for mv-ATL\({}^{*}_{\rightarrow}\) over the lattice \(\mathbf{2}\) such that, for any ATL\({}^{*}\) formula \(\varphi\) and any state (path) \(\xi\), we have \(M^{\prime},\xi\models\varphi\) iff \(M,\xi\models\varphi\).
Proof.: For any CGS \(M=\langle\mathtt{Agt},St,Act,d,t,Prop,\,V\rangle\) for ATL\({}^{*}\), let \(M^{\prime}=\langle\mathtt{Agt},St,Act,d,t,Prop,\,V^{\prime},\mathbf{2}\rangle\), where: (i) \(\mathbf{2}=(\{\bot,\top\},\leq,\sigma)\) is an interpreted classical lattice of two truth values over \(\mathcal{C}=\{\overline{\bot},\overline{\top}\}\), with \(\sigma(\overline{l})=l\) for any \(l\in\{\bot,\top\}\); (ii) \(V^{\prime}(p,q)=\top\) if \(q\in V(p)\) and \(\bot\) otherwise. Then \(M^{\prime}\) is an mv-CGS for mv-ATL\({}^{*}_{\rightarrow}\), and an easy check shows that, for any ATL\({}^{*}\) formula \(\varphi\) and any state (path) \(\xi\), it indeed holds that \(M^{\prime},\xi\models\varphi\) iff \(M,\xi\models\varphi\).

## 5 Model checking mv-ATL\({}^{*}_{\rightarrow}\)

Given an mv-CGS \(M\), a state \(q\) in \(M\), and an mv-ATL\({}^{*}_{\rightarrow}\) formula \(\varphi\), the model checking problem consists in computing the value of \([\varphi]_{M,q}\). This can be done in two ways: either by using a dedicated algorithm, or through an efficient reduction to the "classical", 2-valued version of model checking. The latter option has many advantages. First and foremost, it allows us to benefit from the ongoing developments in 2-valued model checking, including symbolic model checking techniques, heuristics, model reduction techniques, etc. In this section, we show how model checking of mv-ATL\({}^{*}_{\rightarrow}\) can be reduced to the 2-valued variant of this problem. Since a basic result underlying such a reduction holds for distributive lattices only, throughout the section we assume that all lattices under consideration are distributive, unless stated to the contrary.

We emphasize again that, while multi-valued model checking typically provides a _conceptual_ approximation of classical verification, the results in this section are about something else. Here, we look for a _technical_ reduction from multi-valued to two-valued model checking, with the sole purpose of facilitating the verification process.

### From multi-valued model checking to classical model checking

It is well known that model checking multi-valued temporal logics can be reduced to classical, 2-valued model checking [3, 14, 15, 4]. The reduction is of one-to-many type, i.e., a single instance of multi-valued model checking translates to linearly many instances of classical model checking. The key result in this respect is [3, Theorem 1]. It proposes a method for "clustering" the truth values from lattice \(\mathsf{L}\) into a smaller lattice \(\mathsf{L}^{\prime}\) in such a way that the outcome of model checking is preserved. We will now show that the analogue of that theorem holds for mv-ATL\({}^{*}\), i.e., the sublanguage of mv-ATL\({}^{*}_{\rightarrow}\) without the \(\rightarrow\) operator.

**Definition 5.1**.:

1. By a lattice reduction triple (LRT) we mean a triple \((\mathsf{L},\mathsf{L}_{f},f)\), where \(\mathsf{L}=(\mathsf{L},\leq)\) is an arbitrary finite lattice, \(\mathsf{L}_{f}=(\mathsf{L}_{f},\leq_{f})\) its sublattice, and \(f:\mathsf{L}\to\mathsf{L}_{f}\) a homomorphism -- a mapping which preserves arbitrary bounds in \(\mathsf{L}\), i.e., such that \[f(\bigsqcap_{i\in I}x_{i})=\bigsqcap_{i\in I}f(x_{i})\qquad\quad\text{and}\qquad\quad f(\bigsqcup_{i\in I}x_{i})=\bigsqcup_{i\in I}f(x_{i})\] (3) for an arbitrary set of indices \(I\).
2.
Given an LRT \((\mathsf{L},\mathsf{L}_{f},f)\) and an mv-CGS \(M=\langle\mathbb{A}\mathrm{gt},St,Act,d,t,Prop,\mathcal{V},\mathsf{L}^{+}\rangle\) over an interpreted lattice \(\mathsf{L}^{+}=(\mathsf{L},\leq,\sigma)\), by the _reduction of \(M\) to \(\mathsf{L}_{f}\) via \(f\)_ we mean the mv-CGS \(f(M)=\langle\mathbb{A}\mathrm{gt},St,Act,d,t,Prop,\mathcal{V}_{f},(\mathsf{L}_{f},\leq_{f},\sigma_{f})\rangle\), where

1. \(\sigma_{f}(c)=f(\sigma(c))\) for any \(c\in\mathcal{C}\), and
2. \(\mathcal{V}_{f}(p,q)=f(\mathcal{V}(p,q))\) for any \(q\in St\) and \(p\in Prop\).

**Definition 5.2**.: For any LRT \((\mathsf{L},\mathsf{L}_{f},f)\) and any model \(M\) over \(\mathsf{L}\), by the _translation condition_ for the LRT and a formula \(\varphi\) we mean the relationship

\[[\varphi]_{M,\xi}\in f^{-1}(x)\qquad\text{iff}\qquad[\varphi]_{f(M),\xi}=x \tag{4}\]

holding for any state (respectively, path) \(\xi\).

The proof of the reduction theorem, formulated at the end of this subsection as Theorem 5.4, follows easily from the key result given below.

**Lemma 5.3**.: Let a state or path formula \(\varphi\) be such that

\[[\varphi]_{M,\xi}=\bigsqcup_{i\in I}\bigsqcap_{j_{i}\in J_{i}}[\varphi_{j_{i}}]_{M,\xi_{j_{i}}}\quad\text{or}\quad[\varphi]_{M,\xi}=\bigsqcap_{i\in I}\bigsqcup_{j_{i}\in J_{i}}[\varphi_{j_{i}}]_{M,\xi_{j_{i}}}\]

for any mv-CGS \(M\), any states and/or paths \(\xi,\xi_{j_{i}}\) of \(M\), any countable sets \(I,J_{i}\), and state (resp. path) formulas \(\varphi_{j_{i}}\) of mv-ATL\({}^{*}\) for \(j_{i}\in J_{i},i\in I\), such that all \(\varphi_{j_{i}}\)'s satisfy translation condition (4). Then \(\varphi\) satisfies the translation condition too.

**Proof:** We consider the case \([\varphi]_{M,\xi}=\bigsqcup_{i\in I}\bigsqcap_{j_{i}\in J_{i}}[\varphi_{j_{i}}]_{M,\xi_{j_{i}}}\); the other case follows by symmetry. As \(f\) preserves the bounds, by the assumption on \(\varphi\) we have \(f([\varphi]_{M,\xi})=\bigsqcup_{i\in I}\bigsqcap_{j_{i}\in J_{i}}f([\varphi_{j_{i}}]_{M,\xi_{j_{i}}})\). Each \(\varphi_{j_{i}}\) satisfies (4), so \(f([\varphi]_{M,\xi})=\bigsqcup_{i\in I}\bigsqcap_{j_{i}\in J_{i}}[\varphi_{j_{i}}]_{f(M),\xi_{j_{i}}}=[\varphi]_{f(M),\xi}\), whence \([\varphi]_{f(M),\xi}=x\) iff \([\varphi]_{M,\xi}\in f^{-1}(x)\), and (4) holds for \(\varphi\). \(\Box\)

Now, we can formulate the reduction theorem.

**Theorem 5.4**.: **(Reduction theorem)** Let \(\mathsf{L}=(\mathsf{L},\leq)\) be an arbitrary finite lattice, and \((\mathsf{L},\mathsf{L}_{f},f)\) an LRT. Further, let \(M=\langle\mathbb{A}\mathrm{gt},St,Act,d,t,Prop,\mathcal{V},\mathsf{L}^{+}\rangle\) be an mv-CGS over an interpreted lattice \(\mathsf{L}^{+}=(\mathsf{L},\leq,\sigma)\), and let \(f(M)=\langle\mathbb{A}\mathrm{gt},St,Act,d,t,Prop,\mathcal{V}_{f},(\mathsf{L}_{f},\leq_{f},\sigma_{f})\rangle\) be the image of \(M\) under \(f\). Then, for any state (respectively, path) formula \(\varphi\) of mv-ATL\({}^{*}\) over \(\mathsf{L}\) and any state (respectively, path) \(\xi\), the following condition is satisfied:

\[[\varphi]_{M,\xi}\in f^{-1}(x)\qquad\text{iff}\qquad[\varphi]_{f(M),\xi}=x \tag{5}\]

**Proof:** We use induction on the length of a formula. Equation (5) clearly holds for atomic propositions and constants. Assume it holds for formulas of length at most \(k\), and consider a formula \(\varphi\) of length \(k+1\). Then we have the following cases:

(a) \(\varphi=\varphi_{1}\wedge\varphi_{2}\) or \(\varphi=\varphi_{1}\vee\varphi_{2}\), where each \(\varphi_{i}\) is a state formula of length at most \(k\).
Then \([\varphi]_{M,q}=[\varphi_{1}]_{M,q}\sqcap[\varphi_{2}]_{M,q}\) or \([\varphi]_{M,q}=[\varphi_{1}]_{M,q}\sqcup[\varphi_{2}]_{M,q}\), respectively, where \(\varphi_{1},\varphi_{2}\) satisfy (5). As \([\varphi]_{M,q}\) is in one of the two dual forms prescribed by Lemma 5.3 for \(I=\{1\}\) and \(J_{1}=\{1,2\}\), by that lemma, \(\varphi\) must satisfy (5), too.

(b) \(\varphi=\gamma_{1}\wedge\gamma_{2}\) or \(\varphi=\gamma_{1}\vee\gamma_{2}\) for path formulas \(\gamma_{1},\gamma_{2}\) -- analogously to (a).

(c) \(\varphi=\mathsf{X}\,\gamma\), where \(\gamma\) is of length at most \(k\). Then \([\mathsf{X}\,\gamma]_{M,\lambda}=[\gamma]_{M,\lambda[1,\infty]}\), and as \(\gamma\) satisfies (5) by the inductive hypothesis, so obviously does \(\varphi\). The reasoning is similar to (a).

(d) \(\varphi=\gamma_{1}\,\mathsf{U}\,\gamma_{2}\), where \([\gamma_{1}\,\mathsf{U}\,\gamma_{2}]_{M,\lambda}=\bigsqcup_{i\in\mathbb{N}_{0}}\big{(}[\gamma_{2}]_{M,\lambda[i,\infty]}\sqcap\bigsqcap_{0\leq j<i}[\gamma_{1}]_{M,\lambda[j,\infty]}\big{)}\). Since the operator \(\mathsf{U}\) corresponds to a combination of finite and infinite lower and upper bounds applied to values of formulas of length at most \(k\) for which (5) holds, by Lemma 5.3, Equation (5) must hold for \(\varphi\) too.

(e) \(\varphi=\gamma_{1}\,\mathsf{W}\,\gamma_{2}\), where \([\gamma_{1}\,\mathsf{W}\,\gamma_{2}]_{M,\lambda}=\bigsqcap_{i\in\mathbb{N}_{0}}\{[\gamma_{1}]_{M,\lambda[i,\infty]}\}\sqcup\bigsqcup_{i\in\mathbb{N}_{0}}\big{(}[\gamma_{2}]_{M,\lambda[i,\infty]}\sqcap\bigsqcap_{0\leq j<i}[\gamma_{1}]_{M,\lambda[j,\infty]}\big{)}\). As in the previous case, the operator \(\mathsf{W}\) corresponds to a combination of finite and infinite lower and upper bounds applied to values of formulas of length at most \(k\) for which (5) holds, so by Lemma 5.3, Equation (5) must hold for \(\varphi\) too.

(f) \(\varphi=\langle\!\langle A\rangle\!\rangle\gamma\), where \(\gamma\) is of length at most \(k\). Then \(\gamma\) satisfies (5) by the inductive hypothesis, and as \([\langle\!\langle A\rangle\!\rangle\gamma]_{M,q}=\bigsqcup_{s_{A}\in\Sigma_{A}}\bigsqcap_{\lambda\in out(q,s_{A})}[\gamma]_{M,\lambda}\), \(\varphi\) satisfies (5) by Lemma 5.3.

(g) \(\varphi=\llbracket A\rrbracket\gamma\) -- analogously to (f). \(\Box\)

Note that the mapping \(f\) can be seen as an abstraction of truth values, similar to the well-known technique of _state abstraction_ [42, 43]. That is, we can view each value \(x\in\mathsf{L}_{f}\) as an _abstract truth value_ corresponding to the subset \(f^{-1}(x)\) of the original truth values in \(\mathsf{L}\). Clearly, those subsets partition \(\mathsf{L}\) into equivalence classes. Theorem 5.4 says that if \(f\) satisfies condition (3), then model checking in the abstract model \(f(M)\) yields the equivalence class corresponding to the output of the original model checking problem in the concrete model \(M\).

How can we use Theorem 5.4 to reduce multi-valued model checking to the 2-valued case? Recall the threshold functions \(f_{\ell}:\mathsf{L}\longrightarrow\{\bot,\top\}\), defined by

\[f_{\ell}(x)=\left\{\begin{array}{ll}\top&\mbox{if }x\geq\ell\\ \bot&\mbox{otherwise}\end{array}\right.\]

We already stated in Theorem 3.7 that those functions preserve bounds. The following is an immediate corollary of the above:

**Corollary 5.5**.: For any state (respectively, path) formula \(\varphi\) of mv-ATL\({}^{*}\) and any state (respectively, path) \(\xi\), we have

\[[\varphi]_{M,\xi}\geq\ell\qquad\text{iff}\qquad M_{f_{\ell}},\xi\models\varphi. \tag{6}\]

Note that each \(M_{f_{\ell}}\) is a classical, 2-valued model.
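To make the reduction tangible, below is a small Python sketch of the machinery behind Corollary 5.5 and the algorithm discussed next: the threshold functions \(f_{\ell}\), the projection of a multi-valued labeling to the 2-valued model \(M_{f_{\ell}}\), and the aggregation of the answers over join-irreducible elements. The 2-valued model checker `check2` is assumed to be supplied externally (e.g., by a tool such as MCMAS); `lat` follows the `FiniteLattice` sketch of Section 3, here assumed to also expose `bot` and `top`, and all remaining names are ours.

```python
from functools import reduce

def big_join(lat, values):
    return reduce(lat.join, values, lat.bot)

def threshold(lat, ell, x):
    """f_ell(x): the bound-preserving threshold function of Theorem 3.7."""
    return lat.top if lat.leq(ell, x) else lat.bot

def project_valuation(lat, ell, valuation):
    """2-valued labeling of M_{f_ell}: a proposition holds in a state
    iff its multi-valued truth value dominates ell."""
    return {(p, q): lat.leq(ell, v) for (p, q), v in valuation.items()}

def join_irreducibles(lat):
    """ell != bot is join-irreducible iff it is not the join of the
    elements strictly below it."""
    result = []
    for ell in lat.elements:
        if ell == lat.bot:
            continue
        below = [x for x in lat.elements if lat.leq(x, ell) and x != ell]
        if big_join(lat, below) != ell:
            result.append(ell)
    return result

def mv_check(lat, valuation, phi, state, check2):
    """[phi]_{M,q}: the join of all join-irreducible ell whose 2-valued
    projection satisfies phi at 'state' (Corollary 5.5 + Equation (1))."""
    sat = [ell for ell in join_irreducibles(lat)
           if check2(project_valuation(lat, ell, valuation), phi, state)]
    return big_join(lat, sat)
```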
Together with Equation (1), we have \([\varphi]_{M,\xi}=\bigsqcup\{\ell\in\mathcal{JI}(\mathsf{L})\ \mid\ [\varphi]_{M_{f_{\ell}},\xi}=\top\}\). This gives us a simple algorithm for computing \([\varphi]_{M,\xi}\), presented in Figure 6.

Figure 6: Translation-based model checking for mv-ATL\({}^{*}\)

The following is straightforward.

**Theorem 5.6**.: The one-to-many reduction from multi-valued model checking of mv-ATL\({}^{*}\) to 2-valued model checking of ATL\({}^{*}\) runs in linear time with respect to the size of the model and the number of truth values.

**Example 5.7**.: **(Testing of the drones)** Consider the pollution monitoring scenario from the previous examples. Suppose that we want to test the design of a drone patrol before its deployment in the physical environment. One way to carry out offline testing is to model-check the relevant properties of the design against a randomly generated sample of area maps. For clarity of the examples, we have used a hand-crafted map. For the map in Figure 1 and the mv-CGS \(M_{2}\) in Figure 5, we obtain the collection of classical models presented in Figure 7. Note that the projections \((M_{2})_{f_{\top}}\) and \((M_{2})_{f_{\top_{g}}}\) are in fact identical, and similarly for \((M_{2})_{f_{\bot_{d}}}\), \((M_{2})_{f_{\bot_{g}}}\), and \((M_{2})_{f_{\bot_{d}\sqcap\bot_{g}}}\).

Suppose now that we want to compute the value of \(\langle\!\langle 1,2\rangle\!\rangle\mathsf{F}\) (target \(\wedge\) allvisited \(\wedge\) (pol\({}_{\mathsf{1}}\vee\) pol\({}_{\mathsf{2}}\))) in \(M_{2},(0,0)\). The formula holds in state \((0,0)\) of the models \((M_{2})_{f_{\top_{d}}}\), \((M_{2})_{f_{\bot_{d}}}\), \((M_{2})_{f_{\bot_{g}}}\), and \((M_{2})_{f_{\bot_{d}\sqcap\bot_{g}}}\), but not in \((M_{2})_{f_{\top}}\) and \((M_{2})_{f_{\top_{g}}}\). Thus, the output of model checking is \(\top_{d}\sqcup\bot_{d}\sqcup\bot_{g}\sqcup(\bot_{d}\sqcap\bot_{g})=\top_{d}\). Moreover, to model-check \(\langle\!\langle 1\rangle\!\rangle\mathsf{F}\) pol\({}_{\mathsf{1}}\), we observe that the formula holds in all the projection models in Figure 7. Thus, its value in \(M_{2},(0,0)\) is \(\top\sqcup\top_{d}\sqcup\top_{g}\sqcup\bot_{d}\sqcup\bot_{g}\sqcup(\bot_{d}\sqcap\bot_{g})=\top\). \(\Box\)

The algorithm in Figure 6 is an example of _local_ model checking. That is, given a state (respectively, path) and a formula, it returns the truth value of the formula in that state (respectively, on that path). In two-valued modal logics, verification of state formulas is often done by means of _global model checking_ that returns the exact set of states where the input formula holds. For many logics -- including ATL and ATL\({}^{*}\) -- this provides strictly more information with no extra computational cost. The analogous problem for multi-valued modal logics would ask for a _valuation_ of the input formula, i.e., a mapping from the states of the model to the truth values of \(\varphi\). A global model checking algorithm for mv-ATL\({}^{*}\), based on the translation to two-valued model checking, is presented in the next section, in Figure 8.

Figure 8: Translation-based global model checking for mv-ATL

### Translating implication formulas: Impossibility result

Unfortunately, Theorem 5.4 cannot be extended to mv-ATL\({}_{\rightarrow}^{*}\), i.e., to the full language containing implication formulas of the form \(\varphi_{1}\rightarrow\varphi_{2}\), where \(\rightarrow\) represents the lattice order.
**Proposition 5.8**.: There are lattice reduction triples and formulas of mv-ATL\({}_{\rightarrow}^{*}\) that do not satisfy translation condition (5).

Proof.: Consider the lattice \(\mathtt{L}_{o}=(\mathtt{L}_{o},\leq)\), where \(\mathtt{L}_{o}=\{0,...,k-1,k,...,2k-1\}\), and \(\leq\) is the usual total order on \(\mathtt{L}_{o}\). Clearly, in \(\mathtt{L}_{o}\) we have \(0=\bot\) and \(2k-1=\top\) according to our lattice notation. Then \((\{0,2k-1\},\leq)\) is a sublattice of \(\mathtt{L}_{o}\), and the reduction \(f:\mathtt{L}_{o}\rightarrow\{0,2k-1\}\), given by \(f(x)=2k-1\) if \(x\geq k\), and \(f(x)=0\) if \(x<k\), preserves the bounds in \(\mathtt{L}_{o}\). Thus \((\mathtt{L}_{o},\{0,2k-1\},f)\) is a lattice reduction triple.

Now take arbitrary \(k_{1},k_{2}\) such that \(0<k_{1}<k_{2}<k\), and an mv-CGS \(M\) over \(\mathtt{L}_{o}^{+}=(\mathtt{L}_{o},\leq,\sigma)\) for an arbitrary \(\sigma:\mathcal{C}\rightarrow\mathtt{L}_{o}\) such that, for some state \(q\in St\) of \(M\) and atomic propositions \(p_{1},p_{2}\in Prop\), we have \(V(p_{i},q)=k_{i}\) for \(i=1,2\). Next, let \(\varphi=p_{2}\to p_{1}\). Since \([p_{i}]_{M,q}=k_{i}\) for \(i=1,2\) and \(k_{2}>k_{1}\), we have \(\neg([p_{2}]_{M,q}\leq[p_{1}]_{M,q})\), whence \([\varphi]_{M,q}=0\). However, for the model \(M_{1}\) obtained from \(M\) with the reduction \(f\), we get \([p_{i}]_{M_{1},q}=0\) for \(i=1,2\) (as \(k_{i}<k\), \(f(k_{i})=0\) for \(i=1,2\)), whence \([p_{2}]_{M_{1},q}\leq[p_{1}]_{M_{1},q}\), which implies \([\varphi]_{M_{1},q}=2k-1\). Yet, as \(f^{-1}(2k-1)=\{k,k+1,...,2k-1\}\), we have \([\varphi]_{M,q}=0\not\in f^{-1}(2k-1)\), which contradicts Equation (5). \(\Box\)

The above result can be generalized as follows:

**Theorem 5.9**.: If \(\mathtt{L}=(\mathtt{L},\leq)\) of Theorem 5.4 contains a chain or anti-chain of cardinality \(n\), and \(\mathtt{L}^{\prime}=(\mathtt{L}^{\prime},\leq^{\prime})\) is a sublattice of \(\mathtt{L}\) of cardinality \(n^{\prime}<n\), then there is no function \(f:\mathtt{L}\rightarrow\mathtt{L}^{\prime}\) satisfying translation condition (5) if the language under consideration contains implication formulas.

Proof.: Assume \(X=\{x_{1},x_{2},\ldots,x_{n}\}\) is a chain or anti-chain in \(\mathtt{L}\). Let \(M\) be a model such that, for some state \(q\in St\) of \(M\) and propositional variables \(p_{1},p_{2},\ldots,p_{n}\in Prop\), we have \(V(p_{i},q)=x_{i}\) for \(i=1,2,\ldots,n\). Consider any \(f:\mathtt{L}\to\mathtt{L}^{\prime}\). As \(card(\mathtt{L}^{\prime})=n^{\prime}<n\), there must be \(k,l\in\{1,\ldots,n\}\) such that \(k\neq l\) and \(f(x_{k})=f(x_{l})\). Let

\[\varphi=\left\{\begin{array}{ll}p_{l}\to p_{k}&\mbox{if $X$ is a chain and $x_{k}<x_{l}$}\\ p_{k}\to p_{l}&\mbox{otherwise}\end{array}\right.\]

Then, in each case, the value of the antecedent of \(\varphi\) is not dominated by the value of its consequent (for a chain this follows from the strict ordering of \(x_{k},x_{l}\); for an anti-chain, from the fact that \(x_{r}\nleq x_{s}\) for any distinct \(1\leq r,s\leq n\)). Hence \([\varphi]_{M,q}=\bot\). However, as \(f(x_{k})=f(x_{l})\), the values of the antecedent and the consequent coincide in the reduced model \(M_{1}=f(M)\), whence \([\varphi]_{M_{1},q}=\top\). Thus \(f\) does not satisfy Equation (5). \(\Box\)

In other words, if the size of the target lattice \(\mathtt{L}^{\prime}\) is strictly smaller than the "diameter" of the source lattice \(\mathtt{L}\) (that is, the cardinality of the longest chain or antichain in \(\mathtt{L}\)), then a "clustering" of truth values from \(\mathtt{L}\) into \(\mathtt{L}^{\prime}\) that preserves Equation (5) is impossible.
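The counterexample of Proposition 5.8 can be replayed numerically. Below is a toy Python check for \(k=3\) (so \(\mathtt{L}_{o}=\{0,\ldots,5\}\)), with \(k_{1}=1\) and \(k_{2}=2\); the variable names are ours.

```python
# Replaying the counterexample of Proposition 5.8 for k = 3.

k = 3
def f(x):                              # the bound-preserving reduction
    return 2 * k - 1 if x >= k else 0

v_p1, v_p2 = 1, 2                      # V(p1, q) and V(p2, q) in M

impl_in_M = (v_p2 <= v_p1)             # [p2 -> p1]_{M,q}:    False (= bot)
impl_in_fM = (f(v_p2) <= f(v_p1))      # [p2 -> p1]_{f(M),q}: True  (= top)

assert not impl_in_M and impl_in_fM
# [p2 -> p1]_{M,q} = 0, which is not in f^{-1}(5) = {3, 4, 5},
# so translation condition (5) fails for this implication formula.
```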
Note that the diameter of _any_ lattice with more than 2 values must be at least 3, and hence it exceeds the size of the classical 2-valued lattice \(\mathbf{2}\). The following is an immediate consequence of the above:

**Corollary 5.10**.: For any multi-valued lattice \(\mathtt{L}\) there is no reduction of \(\mathtt{L}\) to the 2-valued lattice of classical truth values that satisfies the translation condition (5) for the whole language of mv-ATL\({}_{\rightarrow}^{*}\).

### Translating implication formulas: even more impossible

We already know that there is no general translation from multi-valued to two-valued model checking for implication formulas. Now we will show that the impossibility result can be extended to any "clustering" of truth values from the original lattice \(\mathtt{L}\). To this end, we give a necessary and sufficient condition for the existence of a function \(f:\mathtt{L}\to\mathtt{L}_{f}\) that preserves bounds in \(\mathtt{L}\) and satisfies the translation condition (5) also for implication formulas. The following lemma states that, in order to obtain the analogue of Theorem 5.4 for implication formulas, the mapping \(f\) would have to preserve both the ordering and the incomparability of the elements in \(\mathtt{L}\).

**Lemma 5.11**.: Let \((\mathtt{L},\mathtt{L}_{f},f)\) be a lattice reduction triple, let \(M\) be an mv-CGS over \(\mathtt{L}\), and \(f(M)\) its reduction to \(\mathtt{L}_{f}\). Then translation condition (5) is satisfied for all implication formulas iff the following conditions hold:

* **C1:** \((\forall x_{1},x_{2}\in\mathtt{L})\) \([x_{1}<x_{2}\ \Rightarrow\ f(x_{1})<f(x_{2})]\)
* **C2:** \((\forall x_{1},x_{2}\in\mathtt{L})\) \([x_{1}\bowtie x_{2}\ \Rightarrow\ f(x_{1})\bowtie f(x_{2})]\)

Proof.: Note that an implication formula is a state formula, and that for any such formula \(\psi\) we have \([\psi]_{M,q}\in\{\bot,\top\}\) for any mv-CGS \(M\) and any \(q\in St\). Thus, in order to prove (5) for such formulas, it suffices to show that, for any implication formula \(\varphi\) and any state \(q\):

\[[\varphi]_{M_{f},q}=\top\ \text{iff}\ [\varphi]_{M,q}\in f^{-1}(\top) \tag{7}\]

"(7) \(\Rightarrow\) (C1 \(\wedge\) C2)": We start by proving the necessity of conditions C1 and C2. Assume first that \(f\) satisfies (7) for implication formulas. We should prove that \(f\) satisfies conditions C1, C2. For what follows, denote \(\xi=p_{1}\to p_{2}\) and \(\psi=p_{2}\to p_{1}\), where \(p_{1},p_{2}\in Prop\) and \(p_{1}\neq p_{2}\) (recall that \(Prop\) is the set of atomic propositions of our logic).

**C1:** We argue by contradiction. Suppose \(x_{1},x_{2}\in\mathtt{L}\), \(x_{1}<x_{2}\) and \(f(x_{1})\geq f(x_{2})\). Then, as \(x_{1}<x_{2}\) implies \(x_{1}\leq x_{2}\) and \(f\) preserves bounds, we also have \(f(x_{1})\leq f(x_{2})\), whence \(f(x_{1})=f(x_{2})\). Now let \(M\) be an mv-CGS over \(\mathtt{L}\) such that, for some state \(q\in St\), we have \(V(p_{i},q)=x_{i}\) for \(i=1,2\), and let \(M_{f}\) be the image of \(M\) under \(f\). Since \(\psi=p_{2}\to p_{1}\) and \([p_{2}]_{M,q}=x_{2}>x_{1}=[p_{1}]_{M,q}\), we have \([\psi]_{M,q}=\bot\). However, \([\psi]_{M_{f},q}=\top\), because \([p_{2}]_{M_{f},q}=f(x_{2})=f(x_{1})=[p_{1}]_{M_{f},q}\). As \(\bot\not\in f^{-1}(\top)\), this contradicts (7).

**C2:** We again argue by contradiction. Suppose \(x_{1},x_{2}\in\mathtt{L}\), \(x_{1}\bowtie x_{2}\) and \(\neg(f(x_{1})\bowtie f(x_{2}))\). Without any loss of generality, we can assume that \(f(x_{1})\leq f(x_{2})\).
Let mv-CGSs \(M,M_{f}\) and state \(q\) of \(M\) be as in the preceding item. Then, as \([p_{1}]_{M,q}=x_{1}\bowtie x_{2}=[p_{2}]_{M,q}\), we have in particular \([p_{1}]_{M,q}\not\leq[p_{2}]_{M,q}\). Since \(\xi=p_{1}\to p_{2}\), this implies \([\xi]_{M,q}=\bot\). In turn, \([\xi]_{M_{f},q}=\top\), because \([p_{1}]_{M_{f},q}=f(x_{1})\leq f(x_{2})=[p_{2}]_{M_{f},q}\), which again contradicts (7). "(\(\text{C1}\wedge\text{C2})\Rightarrow\text{(7)}\)":It remains to prove the sufficiency of conditions C1 and C2. We assume that C1, C2 hold, and prove that (7) holds for formulas of the form \(\varphi=\varphi_{1}\to\varphi_{2}\). We start by proving this result for non-nested implication formulas, i.e., we assume that \(\varphi_{1},\varphi_{2}\) do not contain \(\to\). Then, by Theorem 5.4, (5) holds for \(\varphi_{1},\varphi_{2}\), which implies that \[[\varphi_{i}]_{M_{f},q}=f([\varphi_{i}]_{M,q}),\ \ i=1,2. \tag{8}\] "(\(\text{7L})\Rightarrow\text{(7R)}\)": We begin with the forward implication in (7). Assume that \([\varphi]_{M_{f},q}=\top\). Then \([\varphi_{1}]_{M_{f},q}\leq[\varphi_{2}]_{M_{f},q}\). By (8), this implies \(f([\varphi_{1}]_{M,q})\leq f([\varphi_{2}]_{M,q})\). We show by contradiction that \[[\varphi_{1}]_{M,q}\leq[\varphi_{2}]_{M,q}. \tag{9}\] Suppose that (9) does not hold; then we have two possible cases: **Case 1:**: \([\varphi_{1}]_{M,q}>[\varphi_{2}]_{M,q}\). Then by C1 we have \(f([\varphi_{1}]_{M,q})>f([\varphi_{2}]_{M,q})\), whence from (8) we get \([\varphi_{1}]_{M_{f},q}>[\varphi_{2}]_{M_{f},q}\) and \([\varphi]_{M_{f},q}=\bot\) -- which is a contradiction. **Case 2:**: \([\varphi_{1}]_{M,q}\bowtie[\varphi_{2}]_{M,q}\). Then \(f([\varphi_{1}]_{M,q})\bowtie f([\varphi_{2}]_{M,q})\) by C2, whence from (8) we get \(\neg([\varphi_{1}]_{M_{f},q}\leq[\varphi_{2}]_{M_{f},q})\). Consequently, \([\varphi]_{M_{f},q}=\bot\) -- which is again a contradiction. Thus (9) above holds, whence \([\varphi]_{M,q}=\top\in f^{-1}(\top)\), and the forward implication in (7) holds. "(\(\text{7R})\Rightarrow\text{(7L)}\)": The final step is proving the backward implication in (7). Assume that \([\varphi]_{M,q}\in f^{-1}(\top)\). As \([\varphi]_{M,q}\in\{\bot,\top\}\) and \(f(\bot)\neq\top\) by the preservation of bounds by \(f\) and the non-triviality of \(L,L_{f}\), we obtain \([\varphi]_{M,q}=\top\), whence \([\varphi_{1}]_{M,q}\leq[\varphi_{2}]_{M,q}\). Since \(f\) preserves bounds, this implies \(f([\varphi_{1}]_{M,q})\leq f([\varphi_{2}]_{M,q})\), whence from (8) we obtain \([\varphi_{1}]_{M_{f},q}\leq[\varphi_{2}]_{M_{f},q}\). This yields \([\varphi]_{M_{f},q}=\top\), whence the backward implication in (7) holds, too. Nested formulas: The proof for nested formulas proceeds by induction. Assume that (7) holds for implication formulas with \(\to\) nested at most \(k\) times, and assume \(\varphi\) is an implication formula with \(\to\) nested \(k+1\) times. Then \(\varphi=\varphi_{1}\to\varphi_{2}\), where \(\to\) is nested at most \(k\) times in \(\varphi_{1},\varphi_{2}\). Consequently, by the inductive assumption, (8) holds for \(\varphi_{1},\varphi_{2}\), and repeating the proof given above for implication formulas without nesting of \(\to\) we can show that (7) holds for \(\varphi\) too. This completes the proof of the sufficiency of C1, C2 for all implication formulas, and the proof of Lemma 5.11. \(\Box\) From Lemma 5.11 we can easily derive by induction the following general result: **Theorem 5.12**.: Let \((\mathsf{L},\mathsf{L}_{f},f)\) be a lattice reduction triple (LRT). 
Then, the translation condition is satisfied for all formulas of mv-ATL\({}_{\rightarrow}^{*}\) over L iff conditions C1, C2 of Lemma 5.11 hold. It can be seen that conditions C1, C2 imply that any translation \(f\) meeting them must preserve the exact structure of the lattice \(\mathsf{L}\). An important consequence of that fact is: **Corollary 5.13**.: Given a lattice \(\mathsf{L}=(\mathsf{L},\leq)\) and its sublattice \(\mathsf{L}_{f}=(\mathsf{L}_{f},\leq_{f})\), any function \(f:\mathsf{L}\rightarrow\mathsf{L}_{f}\) preserving the lattice bounds and satisfying translation condition (5) for all implication formulas must be one-to-one. Proof.: Suppose that \(f\) satisfies the above assumption, \(x_{1},x_{2}\in L\) and \(x_{1}\neq x_{2}\). Then we have one of the following cases: 1. \(x_{1}<x_{2}\) or \(x_{2}<x_{1}\). Then \(f(x_{1})\neq f(x_{2})\) by C1 of Theorem 5.12. 2. \(x_{1}\bowtie x_{2}\). Then \(f(x_{1})\bowtie f(x_{2})\) by C2 of Theorem 5.12, which again implies \(f(x_{1})\neq f(x_{2})\). \(\Box\) The meaning of Corollary 5.13 is that there is no way of reducing \(n\)-valued model checking to \(k\)-valued model checking for \(k<n\), if we want to handle all implication formulas. Clearly, Corollary 5.10 in Section 5.2 is a special case of the above result. ### Translation of model checking for _some_ implication formulas By Corollary 5.5, there is a simple translation from multi-valued to classical model checking for strategic and temporal operators. By Corollary 5.13, we know that it cannot be generally extended to implication formulas. The next question is: can we construct such a translation for _some_ implication formulas? If so, for which ones? The impossibility result in Corollary 5.13 is due to the fact that implication formulas can be used to encode the semantics in the language -- including in particular its \(n\)-valued character. However, one usually wants to model-check one formula at a time. Then, Theorem 5.12 can be in some cases modified to provide the desired reduction: **Theorem 5.14**.: Let \((\mathsf{L},\mathsf{L}_{f},f)\) be an LRT, let \(M\) be an mv-CGS over \(\mathsf{L}\) and \(f(M)\) its reduction to \(\mathsf{L}_{f}\). Further, let \(\varphi\) be a formula of mv-ATL\({}_{\rightarrow}^{*}\) and \(Sub(\varphi)\) the set of all its subformulas. Then \(\varphi\) satisfies translation condition (5) whenever, for any implication formula \(\phi\in Sub(\varphi)\) such that \(\phi=\varphi_{1}\rightarrow\varphi_{2}\), any state (resp. path) \(\xi\), and \(x_{i}=[\varphi_{i}]_{M,\xi},i=1,2,\) the following conditions hold: \[\begin{array}{ll}\mbox{\bf C1':}&x_{1}<x_{2}\ \Rightarrow\ f(x_{1})<f(x_{2}) &\mbox{\bf C2':}&x_{1}\bowtie x_{2}\ \Rightarrow\ f(x_{1})>f(x_{2})\\ \end{array}\] Proof.: To prove the thesis, we assume that C1', C2' are satisfied, and show by structural induction that translation condition (5) \[[\psi]_{M_{f},\xi}=x\quad\mbox{iff}\quad[\psi]_{M,\xi}\in f^{-1}(x)\] holds for any \(\psi\in Sub(\varphi)\). For atomic or constant \(\psi\), the thesis follows from Theorem 5.4. Suppose now that Equation (5) holds for all subformulas of \(\varphi\) having rank at most \(k\), and assume \(\psi\) is of rank \(k+1\). If \(\psi\) is obtained from subformulas of rank at most \(k\) using any operator \(Op\) other than \(\to\), then Equation (5) follows from the fundamental Lemma 5.3. Thus, it remains to consider the case of \(\to\). Assume \(\psi=\psi_{1}\to\psi_{2}\), with Equation (5) being satisfied for both \(\psi_{1},\psi_{2}\). 
Since \(\psi\) is an implication formula, according to what we have already noted in the proof of Lemma 5.11, proving (5) for \(\psi\) reduces to showing condition (7), i.e., \[[\psi]_{M_{f},q}=\top\quad\mbox{iff}\quad[\psi]_{M,q}\in f^{-1}(\top).\] Note that since \(\psi_{1},\psi_{2}\) are in \(Sub(\varphi)\), then C1', C2' hold for \(x_{i}=[\psi_{i}]_{M,q},i=1,2\). By the inductive assumption, we also have \[[\psi_{i}]_{M_{f},q}=f([\psi_{i}]_{M,q}),\ \ i=1,2 \tag{10}\] "\(\Longrightarrow\)":We begin with the forward implication in (7). Assume that \([\psi]_{M_{f},q}=\top\). Then \([\psi_{1}]_{M_{f},q}\leq[\psi_{2}]_{M_{f},q}\). By (10), this implies \(f([\psi_{1}]_{M,q})\leq f([\psi_{2}]_{M,q})\). We show by contradiction that it implies \[[\psi_{1}]_{M,q}\leq[\psi_{2}]_{M,q} \tag{11}\] Suppose that (11) does not hold; then we have two possible cases: Case 1: \([\psi_{1}]_{M,q}>[\psi_{2}]_{M,q}\). Then by condition C1' we have \(f([\psi_{1}]_{M,q})>f([\psi_{2}]_{M,q})\), whence from (10) we get \([\psi_{1}]_{M_{f},q}>[\psi_{2}]_{M_{f},q}\) and \([\psi]_{M_{f},q}=\bot\), which is a contradiction. Case 2: \([\psi_{1}]_{M,q}\bowtie[\psi_{2}]_{M,q}\). Then \(f([\psi_{1}]_{M,q})>f([\psi_{2}]_{M,q})\) by condition C2', which again leads to a contradiction by what we have already proved for Case 1. Thus (11) above holds, whence \([\psi]_{M,q}=\top\in f^{-1}(\top)\), and the forward implication in (7) holds. "\(\Leftarrow\)":The final step consists in proving the backward implication in (7). Assume that \([\psi]_{M,q}\in f^{-1}(\top)\). As \([\psi]_{M,q}\in\{\bot,\top\}\) and \(f(\bot)\neq\top\) by the preservation of bounds by \(f\) and the non-triviality of \({\bf L},{\bf L}_{f}\), we get \([\psi]_{M,q}=\top\), and consequently \([\psi_{1}]_{M,q}\leq[\psi_{2}]_{M,q}\). Since \(f\) preserves bounds, this implies \(f([\psi_{1}]_{M,q})\leq f([\psi_{2}]_{M,q})\), whence from (10) we obtain \([\psi_{1}]_{M_{f},q}\leq[\psi_{2}]_{M_{f},q}\). This yields \([\psi]_{M_{f},q}=\top\), whence the backward implication in (7) holds, too. \(\Box\) Assume that our mv-CGSs are defined over distributive lattices. We now show that the translation method of Section 5.1, based on join irreducible elements \({\cal JI}({\bf L})\), can be applied to a formula \(\varphi\) of mv-ATL\({}_{\rightarrow}^{*}\) and an mv-CGS \(M\), provided that the assumptions of Theorem 5.14 are satisfied. By (1), for each \(x\in{\bf L}\) we have \(x=\bigsqcup({\cal JI}({\bf L})\,\cap\,\downarrow\,x)\). Let \(M^{\ell}\) be the model obtained using the translation \(f_{\ell}\). Therefore, according to Theorem 5.14: \([\varphi]_{M^{\ell},\xi}=x\) iff \([\varphi]_{M,\xi}\in f_{\ell}^{-1}(x)\), whence \([\varphi]_{M^{\ell},\xi}=\top\) iff \([\varphi]_{M,\xi}\in\uparrow\ell\). Thus, \[[\varphi]_{M,\xi}=\bigsqcup\{\ell\in{\cal JI}({\bf L})\ \mid\ [\varphi]_{M^{\ell}, \xi}=\top\}. \tag{12}\] **Example 5.15**.: Consider model \(M_{2}\) in Figure 5 and formula \(\phi=\langle\!\langle 1\rangle\!\rangle\mathsf{G}\ (\mathsf{pol}_{1}\to(\mathsf{target}\wedge\mathsf{pol}_{2}))\). Subformula \(\mathsf{pol}_{1}\) can take the following truth values throughout the model: \(\bot,u,\top_{d},\top\). Similarly, \(\mathsf{target}\wedge\mathsf{pol}_{2}\) can evaluate to \(\bot,\top_{d}\). Thus by Theorem 5.14 the mapping \(f_{\top_{d}}\) meets translation condition (5), and we can use the translation method of Section 5.1 to check if the value of \(\phi\) is at least \(\top_{d}\). 
On the other hand, all the other "cutoff" mappings (i.e., \(f_{\bot_{d}\sqcap\bot_{g}},f_{\bot_{d}},f_{\bot_{g}},f_{\top_{g}}\), and \(f_{\top}\)) do not satisfy condition C1', and hence the correctness of the translation is not guaranteed for those truth values. The following is an immediate consequence of Theorem 5.14. **Corollary 5.16**.: Let \(\mathsf{L}\), its sublattice \(\mathsf{L}_{f}\), \(f:\mathsf{L}\to\mathsf{L}_{f}\), and \(M,M_{f}\) be as in Theorem 5.4. Further, let \(\varphi\) be a formula of \(\text{mv-ATL}^{*}_{\to}\) such that every implication subformula of \(\varphi\) is of the form \(\psi_{1}\to\psi_{2}\), where \(\psi_{i}\in\{\framebox{\bot},\framebox{\top}\}\) for some \(i\in\{1,2\}\). Then \(\varphi\) satisfies the translation condition (5). Proof.: It suffices to observe that if at least one of the formulas \(\psi_{1},\psi_{2}\) is either \(\framebox{\bot}\) or \(\framebox{\top}\), then the implication subformula \(\psi_{1}\to\psi_{2}\) trivially satisfies conditions C1' and C2' of Theorem 5.14. ### Recursive model checking of ATL\({}^{*}\) For many model checking instances, the assumptions of Theorem 5.14 do not hold. In those cases, we cannot translate the multi-valued model checking of \(\text{mv-ATL}^{*}_{\to}\) formulas to the classical model checking for ATL\({}^{*}\). Then, the simplest solution is to adapt the standard recursive algorithm that, in order to model-check formula \(\varphi\), proceeds bottom-up from the simplest subformulas, and replaces them with fresh atomic propositions. In our case, this means that model checking of each implication formula \(\varphi_{1}\to\varphi_{2}\) consists in computing the values of \(\varphi_{1},\varphi_{2}\) by means of the translation in Section 5.1, and then fixing the valuation of the fresh variable \(\mathsf{p}_{\varphi_{1}\to\varphi_{2}}\) according to their comparison. The detailed algorithm is presented in Figure 9.

Figure 9: Recursive global model checking for ATL

The main disadvantage of the above method compared to the direct translation method is that it requires computing the values of the new atomic propositions for all states of the model \(M\). In other words, we need to carry out global model checking, whereas for formulas without the implication operator \(\rightarrow\) both global and local model checking were possible. The method can possibly be improved if we assume that a specific symbolic model checking method for two-valued ATL\({}^{*}\) is used; we leave a study of this subject for future work. Nevertheless, the algorithm presented in Figure 9 has two important consequences. First, it provides a general linear-time reduction from model checking mv-ATL\({}_{\rightarrow}^{*}\) (resp. mv-ATL\({}_{\rightarrow}\)) to model checking standard 2-valued ATL\({}^{*}\) (resp. ATL). We state it formally as follows. **Theorem 5.17**.: The one-to-many reduction from multi-valued model checking of mv-ATL\({}_{\rightarrow}^{*}\) to 2-valued model checking of ATL\({}^{*}\) runs in linear time with respect to the size of the model, the length of the formula, and the number of truth values. **Corollary 5.18**.: Model checking mv-ATL\({}_{\rightarrow}^{*}\) (resp. mv-ATL\({}_{\rightarrow}\)) is **2EXPTIME**-complete (resp. **P**-complete) in the size of the model, the length of the formula, and the number of truth values. Secondly, we note that correctness of the translation does not depend on the type of strategies being used in the semantics of mv-ATL\({}_{\rightarrow}^{*}\). 
As it is, the translation provides a model checking reduction to the _IR_ variant of ATL\({}^{*}\) (perfect information + perfect recall). If we used memoryless strategies of type \(s_{a}:St\to Act\) instead of perfect recall, the translation would yield a reduction to the _Ir_ variant of ATL\({}^{*}\) (perfect information + imperfect recall [44]). Since the _IR_ and _Ir_ semantics coincide in 2-valued ATL (though not in ATL\({}^{*}\)!), we get the following. **Theorem 5.19**.: For mv-ATL\({}_{\rightarrow}\), memory is irrelevant, i.e., its semantics can be equivalently given by memoryless strategies. ## 6 Multi-valued transitions In this paper, our aim is to propose a framework for a graded interpretation of logical statements referring to the strategic ability of agents and coalitions. Until this point, the "graded" truth values have only originated from non-classical interpretation of atomic propositions and literals. Typically, this happens because when constructing a model we cannot determine the truth of some basic statements in absolute terms (as either true or false). Instead, we assign to such basic statements "truth degrees" (which can also be seen as "weights of evidence") drawn from a suitable lattice, which then propagate to more complex formulas. Another source of non-classical truth values sometimes considered in the literature is a graded interpretation of transitions. In that case, each transition is labelled according to its "strength." An extension of mv-ATL\({}_{\rightarrow}^{*}\) with weighted transitions is discussed in this section. ### Weighted transitions: Potential interpretations The shift from 2-valued to multi-valued modal logic typically arises when we extend the domain of interpretation for atomic propositions in _states_ of the model. The level of truth for \(\mathsf{p}_{1},\mathsf{p}_{2},\dots\) is not crisp anymore, and this propagates to more complex formulae \(\varphi\) via semantic clauses. So far, we have assumed that the transition relation is crisp, i.e., given states \(q,q^{\prime}\) and a vector of actions \(\vec{\alpha}\), the transition from \(q\) to \(q^{\prime}\) labeled by \(\vec{\alpha}\) is either fully included in the model, or is completely absent from it. An alternative would be to consider multi-valued transition relations, with transitions that are possible to a certain degree. There are at least two sensible interpretations of such weighted transitions. On the one hand, the weight can be interpreted as the strength of evidence supporting the existence of the transition. This approach has been adopted in the previous works on multi-valued temporal logics over arbitrary lattices of truth values [3, 4], with the additional assumption that the weights of transitions are drawn from the same lattice as the values of propositions. A characteristic feature of the semantics in [3, 4] is that, whenever the weights on transitions decrease sufficiently, the value of a temporal formula must also decrease. Formally, consider a multi-valued transition system \(M\), a state \(q\) in \(M\), and a formula \(\mathsf{AX}\,\varphi\) such that \([\mathsf{AX}\,\varphi]_{M,q}=x\). Moreover, let \(M^{\prime}\) be the same as \(M\) except for the weights of all the outgoing transitions from \(q\) being strictly lower than \(x\). Then we have \([\mathsf{AX}\,\varphi]_{M^{\prime},q}<[\mathsf{AX}\,\varphi]_{M,q}\). Analogous characterizations can be shown for all other temporal operators. 
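The text above characterizes the semantics of [3, 4] only through its monotonicity property, so as a purely illustrative sketch (not the actual clause from those works) one can consider the clause \([\mathsf{AX}\,\varphi]_{M,q}=\bigsqcap_{q^{\prime}}(w(q,q^{\prime})\sqcap[\varphi]_{M,q^{\prime}})\) over a chain of truth values; it visibly exhibits the stated behaviour that lowering all outgoing weights below \(x\) forces the value of \(\mathsf{AX}\,\varphi\) below \(x\):

```python
# Illustrative sketch only: one possible "strength of evidence" clause for AX
# over a totally ordered lattice 0..top (NOT necessarily the clause of [3, 4]).

def ax_value(succ, w, phi_val, q):
    """[AX phi](q) = meet over successors q' of (w(q, q') meet [phi](q'))."""
    return min(min(w[(q, q2)], phi_val[q2]) for q2 in succ[q])

# Toy model: q0 with two successors; phi has value 3 in both of them.
succ    = {"q0": ["q1", "q2"]}
phi_val = {"q1": 3, "q2": 3}

w_high = {("q0", "q1"): 3, ("q0", "q2"): 3}
w_low  = {("q0", "q1"): 1, ("q0", "q2"): 1}   # all outgoing weights dropped below 3

print(ax_value(succ, w_high, phi_val, "q0"))  # 3
print(ax_value(succ, w_low,  phi_val, "q0"))  # 1 < 3: the value decreased as claimed
```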
On the other hand, the transition weights can also be interpreted as a qualitative distribution, similarly to quantitative transitions in Markov chains and Markov decision processes, used in the semantics of probabilistic temporal logics [45, 35, 19] and their strategic variants [38, 39, 31, 33, 34]. A natural assumption in that case is that the distribution is complete. In the probabilistic case, it amounts to the weights on the outgoing edges from \(q\) always summing up to \(1\). The requirement is essential in models of multi-agent systems, where establishing what _cannot_ happen is often as important as reasoning about what can. In the qualitative case, at a minimum, \([\varphi]_{M,q^{\prime}}=x\) on all the successors \(q^{\prime}\) of \(q\) should imply \([\mathsf{AX}\,\varphi]_{M,q}=x\) (and analogously for other temporal operators and strategic operators). In particular, if the value of \(\varphi\) is bound to be \(\top\) at the next moment, no matter how the system evolves, then \(\mathsf{AX}\,\varphi\), \(\langle\!\langle A\rangle\!\rangle\mathsf{X}\,\varphi\), etc., should also evaluate to \(\top\). It is easy to see that the semantics in [3, 4] do not satisfy this requirement. In the remainder of the section, we outline how the probabilistic approach can be adapted to arbitrary lattices of transition weights. Our proposal is based on the concept of _designated paths_, i.e., paths that are considered relevant in a given context. We also show that the idea of may/must abstraction can be seen as a special, 3-valued case of this kind of reasoning. ### Weighted transitions in concurrent game structures **Definition 6.1**.: **(Weighted multi-valued CGS)** Assume two lattices: an interpreted lattice \(\mathsf{L}^{+}=(\mathsf{L},\leq,\sigma)\) of truth values, and a lattice \(\mathsf{L}_{t}=(\mathsf{L}_{t},\leq_{t})\) for weights that will be assigned to transitions. A _weighted multi-valued concurrent game structure (wmv-CGS)_ over \(\mathsf{L}^{+}\) and \(\mathsf{L}_{t}\) is a tuple \(M=\langle\mathsf{Agt},St,Act,d,t,w,Prop,\,V,\mathsf{L}^{+},\mathsf{L}_{t}\rangle\), where \(\mathsf{Agt}\), \(St\), \(Act\), \(d\), \(t\), \(Prop\), \(V\) are as in the case of an mv-CGS, and \(w:t\to\mathsf{L}_{t}\) is a weight function which maps each individual transition in \(t\) (i.e., each tuple \((q,\alpha_{1},\ldots,\alpha_{k},t(q,\alpha_{1},\ldots,\alpha_{k}))\)) to a value in \(\mathsf{L}_{t}\). The interpretation of mv-ATL\({}_{\rightarrow}^{*}\) formulas in a wmv-CGS \(M\) as above is parameterized by the set \(\mathcal{D}\) of designated values in \(\mathbf{L}_{t}\) -- the weights whose assignment to a transition makes the transition deemed to be actually present. A path in a wmv-CGS is defined analogously to a path in an mv-CGS. A path \(\lambda=q_{0}q_{1}q_{2}\dots\) is said to be designated if for every \(i\) there are actions \(\alpha_{1},\dots,\alpha_{k}\) such that \(t(q_{i},\alpha_{1},\dots,\alpha_{k})=q_{i+1}\) and \(w(q_{i},\alpha_{1},\dots,\alpha_{k},q_{i+1})\in\mathcal{D}\). 
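Before giving the formal reduction, note that the designated-path condition is straightforward to operationalize. The sketch below (in Python; the dictionary-based encoding of \(t\) and \(w\) is our own illustrative convention, not part of the formal framework) tests whether a finite prefix of a path is designated with respect to a given \(\mathcal{D}\):

```python
# Sketch: test whether a finite path prefix q0 q1 ... qn is designated w.r.t. D.
# t maps (state, joint_action) -> next state; w maps the same keys to weights in L_t.

def is_designated(path, t, w, D):
    """True iff every step of the prefix can be taken by some joint action
    whose transition weight lies in the designated set D."""
    for q, q_next in zip(path, path[1:]):
        if not any(q2 == q_next and w[(q1, act)] in D
                   for (q1, act), q2 in t.items() if q1 == q):
            return False
    return True

# Toy example with joint actions as tuples and weights from {"bot", "U", "top"}.
t = {("q0", ("a",)): "q1", ("q1", ("a",)): "q2"}
w = {("q0", ("a",)): "top", ("q1", ("a",)): "U"}

print(is_designated(["q0", "q1", "q2"], t, w, D={"top"}))       # False (second step)
print(is_designated(["q0", "q1", "q2"], t, w, D={"U", "top"}))  # True
```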
Given \(\mathcal{D}\), we reduce a wmv-CGS \(M\) to an mv-CGS \[M_{\mathcal{D}}=\langle\mathbb{A}\mathrm{gt},St,Act,d,t_{\mathcal{D}},Prop,\,V,\mathbf{L}^{+}\rangle\] where \[t_{\mathcal{D}}(q,\alpha_{1},\dots,\alpha_{k})=\left\{\begin{array}{ll}t(q, \alpha_{1},\dots,\alpha_{k})&\mbox{if }w(q,\alpha_{1},\dots,\alpha_{k},t(q,\alpha_{1},\dots,\alpha_{k}))\in \mathcal{D}\\ \mbox{undefined}&\mbox{otherwise}\end{array}\right.\] Then any state of \(M\) is a state of \(M_{\mathcal{D}}\), and any designated path in \(M\) is a path in \(M_{\mathcal{D}}\). As the interpretation of mv-ATL\({}_{\rightarrow}^{*}\) in \(M\) we take the interpretation of mv-ATL\({}_{\rightarrow}^{*}\) in \(M_{\mathcal{D}}\): For any state or designated path \(\xi\) in \(M\) and any formula \(\varphi\) in mv-ATL\({}_{\rightarrow}^{*}\), we take: \[[\varphi]_{M,\xi,\mathcal{D}}=[\varphi]_{M_{\mathcal{D}},\xi} \tag{13}\] ### Embedding may/must abstractions A natural example of many-valued transitions is provided by may-must transitions: the "may" transitions are only possible, but need not happen, while the "must" transitions will necessarily take place. Models of this kind were used in [46]. Also, the may/must abstractions presented in [28, 29, 30] produce models with the same or similar structure and interpretation. Adapting the notation, those models take the form \(M^{GJ}=\langle St,Prop,\delta_{must},\delta_{may},V,\mathbf{L}\rangle\), where \(St,Prop,V\) are defined as before, \(\mathbf{L}=\{\mathbf{t},\mathbf{f},\mathbf{u}\}\), and \(\delta_{must}\), \(\delta_{may}\subseteq St\times St\) are transition relations such that \(\delta_{must}\subseteq\delta_{may}\). The language contains negation and conjunction interpreted as in Kleene three-valued calculus over \(\{\mathbf{t},\mathbf{f},\mathbf{u}\}\), and the \(AX\) operator interpreted as: \[[AX\varphi]_{M^{GJ},q}=\left\{\begin{array}{ll}\mathbf{t}&\mbox{if }\forall q ^{\prime}(\delta_{may}(q,q^{\prime})\Rightarrow[\varphi]_{M^{GJ},q^{\prime}}= \mathbf{t})\\ \mathbf{f}&\mbox{if }\exists q^{\prime}(\delta_{must}(q,q^{\prime})\wedge[ \varphi]_{M^{GJ},q^{\prime}}=\mathbf{f})\\ \mathbf{u}&\mbox{otherwise}\end{array}\right.\] If, following [46], we disregard explicit inclusion of agents and actions in our approach, then if the transition relation \(\delta_{may}\) is a function, such a model can be represented as a wmv-CGS \(M^{JKP}=\langle St,\delta_{may},Prop,\,V,\mathbf{L}^{+},\mathbf{L}_{t}\rangle\) with three-valued transitions, where \(\mathbf{L}_{t}=\{\top,U,\bot\}\), and the weight function \(w:St\times St\to\mathbf{L}_{t}\) is defined by: \[w(s,s^{\prime})=\left\{\begin{array}{ll}\top&\mbox{if }(s,s^{\prime})\in \delta_{must}\\ U&\mbox{if }(s,s^{\prime})\in\delta_{may}\setminus\delta_{must}\\ \bot&\mbox{otherwise}\end{array}\right.\] Denote \(\mathcal{D}_{\top}=\{\top\},\mathcal{D}_{U}=\{U,\top\}\). We can show that the Godefroid-Jagadeesan semantics based on \(M^{GJ}\) can be expressed using our model \(M^{JKP}\) as follows: **Lemma 6.2**.: If \(\varphi\) does not contain the \(AX\) operator, then: 1. \([\varphi]_{M^{GJ},q}=[\varphi]_{M^{JKP},q,\mathcal{D}_{\top}}\) 2. \([AX\varphi]_{M^{GJ},q}=\left\{\begin{array}{ll}\textbf{t}&\text{if }[AX\varphi]_{M^{JKP},q, \mathcal{D}_{U}}=\textbf{t}\\ \textbf{f}&\text{if }[AX\varphi]_{M^{JKP},q,\mathcal{D}_{\top}}=\textbf{f}\\ \textbf{u}&\text{otherwise}\end{array}\right.\) Proof.: Since both \(M^{JKP}\) and \(M^{GJ}\) are based on the Kleene 3-valued calculus of propositional formulas, Condition 1 obviously holds. 
Further, as the translation of \(M^{JKP}\) to \(M_{\mathcal{D}_{U}}\) preserves all transitions in \(\delta_{may}\), we have \([AX\varphi]_{M^{JKP},q,\mathcal{D}_{U}}=\textbf{t}\) iff \([\varphi]_{M^{JKP},q^{\prime},\mathcal{D}_{U}}=\textbf{t}\) for every \((q,q^{\prime})\in\delta_{may}\). In view of Condition 1, the latter is equivalent to \([\varphi]_{M^{GJ},q^{\prime}}=\textbf{t}\) for every \((q,q^{\prime})\in\delta_{may}\). Consequently, the first clause in Condition 2 holds. For the second clause, note that as \(M_{\mathcal{D}_{\top}}\) only contains transitions in \(\delta_{must}\), then \([AX\varphi]_{M^{JKP},q,\mathcal{D}_{\top}}=\textbf{f}\) iff there is a transition \((q,q^{\prime})\in\delta_{must}\) such that \([\varphi]_{M^{JKP},q^{\prime},\mathcal{D}_{\top}}=\textbf{f}\). Then, by Condition 1, \([\varphi]_{M^{GJ},q^{\prime}}=\textbf{f}\), whence \([AX\varphi]_{M^{GJ},q}=\textbf{f}\) -- and so Condition 2 holds. ### Model checking multi-valued CGS with weighted transitions Fortunately, the introduction of weighted transitions, while enriching our models and making them better suited to some practical applications, does not introduce any essential complications into model-checking compared to mv-CGSs with two-valued transitions. Thus the results obtained in the latter case carry over to wmv-CGSs, and we have the following generalization of the Reduction Theorem 5.4: **Theorem 6.3**.: Let \(\textbf{L}=(\textbf{L},\leq)\) be an arbitrary finite lattice, \(\textbf{L}_{f}=(\textbf{L}_{f},\leq_{f})\) a sublattice of **L**, and let \(f:\textbf{L}\rightarrow\textbf{L}_{f}\) be a mapping which preserves arbitrary bounds in **L**. Furthermore, let \(M=\langle\mathbb{A}\mathrm{gt},St,Act,d,t,w,Prop,V,\textbf{L}^{+},\textbf{L}_{t}\rangle\) be a wmv-CGS over an interpreted lattice \(\textbf{L}^{+}=(\textbf{L},\leq,\sigma)\) over \(\mathcal{C}\), and let \(M_{f}=\langle\mathbb{A}\mathrm{gt},St,Act,d,t,w_{f},Prop,V_{f}\rangle\) be the wmv-CGS obtained from \(M\) by "clustering" the truth values in \(M\) according to \(f\), i.e.: 1. \(\sigma_{f}(c)=f(\sigma(c))\) for any \(c\in\mathcal{C}\), 2. \(w_{f}(\tau)=f(w(\tau))\) for any \(\tau\in t\), and 3. \(V_{f}(p,q)=f(V(p,q))\) for any \(q\in St\) and \(p\in Prop\). Then, for any state (respectively, path) formula \(\varphi\) of mv-ATL\({}_{\rightarrow}^{*}\) over **L**, any state (respectively, path) \(\xi\), and any set of designated truth values \(\mathcal{D}\), we have \[[\varphi]_{M,\xi,\mathcal{D}}\in f^{-1}(x)\qquad\text{iff}\qquad[\varphi]_{M_{ f},\xi,\mathcal{D}}=x \tag{14}\] Proof.: Straightforward from Equation (13) and Theorem 5.4. Note that the conditions of the above theorem (preservation of the bounds plus Conditions 1 and 3) correspond to those of Theorem 5.4, with an analogous Condition 2 for weights added. This theorem can be used, in a way analogous to that employed for mv-ATL\({}_{\rightarrow}^{*}\) with two-valued transitions, to reduce mv-model checking for mv-ATL\({}_{\rightarrow}^{*}\) with mv-transitions to two-valued model checking. This is because the semantics of mv-ATL\({}_{\rightarrow}^{*}\) with many-valued transitions contains an embedded reduction of models with mv-transitions to models with two-valued transitions -- and for those models we can again use the reduction based on our threshold functions. Consequently, the local and global model checking algorithms given in Figure 9 and Figure 8 also carry over to the case of many-valued transitions. As previously, the positive results quoted above apply to formulas which do not involve the implication operator. 
For the formulas involving that operator, the negative result obtained in the case of two-valued transitions of course still holds -- because a CGS with two-valued transitions is just a special case of a CGS with many-valued transitions. ## 7 Multi-valued verification of agents with imperfect information ATL and ATL\({}^{*}\) were originally proposed for reasoning about agents in perfect information scenarios. It can be argued that realistic multi-agent systems always include some degree of limited observability [44, 47, 48, 49, 50, 51, 52]. However, model checking of ATL and ATL\({}^{*}\) with imperfect information is hard: more precisely, it is \(\mathbf{\Delta_{2}^{P}}\)- to \(\mathbf{PSPACE}\)-complete for agents playing memoryless strategies [44, 53, 54] and undecidable for agents with perfect recall [55]. Furthermore, the imperfect information semantics of strategic ability does not admit standard fixpoint equivalences [56], which makes incremental synthesis of strategies cumbersome. Practical attempts at the problem have emerged only recently [57, 58, 59, 60, 61, 62], and the experimental results show that verification is feasible only for very small models. Such hard problems can be potentially tackled by means of approximation techniques [30, 63]. In particular, abstraction techniques [42, 30, 23] can be used to cluster multiple states and/or transitions in the system into _abstract_ states and transitions, thus reducing the model size. However, in order to be effective, the abstraction must be very coarse, which potentially results in loss of information about the truth of (some) atomic propositions and the existence of (some) transitions. This leads to a substantial reduction of the verification cost, possibly at the expense of introducing non-classical truth values of propositions in some abstract states, as well as transitions of various strength. In consequence, multi-valued model checking can be extremely useful when reasoning about strategies under uncertainty. Clearly, all the previously cited reasons for using multi-valued verification (description of the world based on a non-classical notion of truth, lifting the logical reasoning to a richer domain of answers, inconclusive or inconsistent information about the system, conflicting evidence coming from different sources, inconclusive verification procedure, etc.) are also relevant for agents with uncertainty. In this section, we show that the framework of mv-ATL\({}_{\rightarrow}^{*}\) can be easily extended to the case of imperfect information. ### Logic mv-ATL\({}_{\rightarrow}^{*}\) with imperfect information Let us extend mv-CGSs with epistemic indistinguishability relations \(\sim_{1},\ldots,\sim_{k}\subseteq St\times St\), one per agent in \(\mathbb{A}\mathrm{gt}\). The idea is that, whenever \(q\sim_{a}q^{\prime}\) and the system is in state \(q\), agent \(a\) might think that the system is actually in \(q^{\prime}\). Each \(\sim_{a}\) is assumed to be an equivalence relation. We also assume that the resulting model is _uniform_ with respect to the indistinguishability relations, i.e., \(q\sim_{a}q^{\prime}\) implies \(d_{a}(q)=d_{a}(q^{\prime})\). In other words, the choices available to an agent are identical in the states indistinguishable for that agent. In a similar way, strategies under imperfect information must specify identical choices in indistinguishable situations. 
That is, memoryless strategies with imperfect information (_ir_ strategies, for short) are functions \(s_{a}:St\to Act\) such that \(q\sim_{a}q^{\prime}\) implies \(s_{a}(q)=s_{a}(q^{\prime})\). Moreover, perfect recall strategies with imperfect information (shortly: _iR_ strategies) are functions \(s_{a}:St^{+}\to Act\) such that \(q_{0}\sim_{a}q^{\prime}_{0},\ldots,q_{n}\sim_{a}q^{\prime}_{n}\) implies \(s_{a}(q_{0}\ldots q_{n})=s_{a}(q^{\prime}_{0}\ldots q^{\prime}_{n})\). Again, collective strategies for \(A\subseteq\mathbb{A}\mathrm{gt}\) are tuples of individual strategies for \(a\in A\). We denote them by \(\Sigma^{ir}_{A}\) and \(\Sigma^{iR}_{A}\), respectively. The semantics of mv-ATL\({}^{*}_{\mathfrak{S}\rightarrow}\), parameterized by the type of strategies \(\mathfrak{S}=\mathit{IR},\mathit{Ir},\mathit{ir},\mathit{iR}\), can be defined by replacing the clause for the strategic operators from Section 4 as follows: \[[\langle\!\langle A\rangle\!\rangle\gamma]^{\mathfrak{S}}_{M,q} =\ \bigsqcup_{s_{A}\in\Sigma^{\mathfrak{S}}_{A}}\bigsqcap_{\lambda\in out(q,s_{A} )}\{[\gamma]^{\mathfrak{S}}_{M,\lambda}\};\] \[[[A]\!]\gamma]^{\mathfrak{S}}_{M,q} =\ \bigsqcap_{s_{A}\in\Sigma^{\mathfrak{S}}_{A}}\bigsqcup_{\lambda\in out(q,s_ {A})}\{[\gamma]^{\mathfrak{S}}_{M,\lambda}\}.\]

Figure 10: Multi-valued model \(M_{3}\) for drones with imperfect information. Epistemic indistinguishability is depicted by dotted lines.

**Example 7.1**.: **(Drones with partial information)** Consider again the drone model introduced in Example 4.3 and Figure 5. We assume now that drone \(1\) sees its own position but not that of drone \(2\), whereas drone \(2\) only sees if the other drone is in the same location but does not recognize the location itself. Moreover, each drone can identify the initial state (i.e., \((0,0)\)), as well as recognize that it has run out of battery (states \((3,3)_{1}\) and \((3,3)_{2}\)). Finally, drone \(2\), not knowing its exact position, may try to fly in a direction which is not available for a given location (e.g., fly North in location \(2\)). In that case, the attempt fails, and the drone stays in its current location. The updated mv-CGS \(M_{3}\) is presented in Figure 10. For the formulas from Example 4.4, we now have: * \([\langle\!\langle 1\rangle\!\rangle\mathsf{F\ }\mathsf{pol}_{1}]^{ir}_{M_{3},(0,0)}=[ \langle\!\langle 1\rangle\!\rangle\mathsf{F\ }\mathsf{pol}_{1}]^{iR}_{M_{3},(0,0)}=\top\), as the strategy to fly North in state \((0,0)\), and then East in \((1,1)\) or \((1,2)\), is uniform for drone \(1\); * \([\langle\!\langle 2\rangle\!\rangle\mathsf{F\ }\mathsf{pol}_{2}]^{ir}_{M_{3},(0,0)}=[ \langle\!\langle 2\rangle\!\rangle\mathsf{F\ }\mathsf{pol}_{2}]^{iR}_{M_{3},(0,0)}=\top\) (the analogous strategy for drone \(2\) is _not_ uniform, but the agent can achieve the goal by playing \(N\) in all the states); * \([\langle\!\langle 1,2\rangle\!\rangle\mathsf{F\ }(\mathsf{target\wedge allvisited\wedge( pol_{1}\vee pol_{2})})]^{ir}_{M_{3},(0,0)}=\bot\) because neither of the uniform memoryless strategies leads to a state where target \(\wedge\) allvisited holds; * \([\langle\!\langle 1,2\rangle\!\rangle\mathsf{F\ }(\mathsf{target\wedge allvisited\wedge( pol_{1}\vee pol_{2})})]^{iR}_{M_{3},(0,0)}=\top_{d}\) (example strategy: drone \(1\) flies North in the first step, and East in the second, while drone \(2\) moves East and then North). \(\Box\) **Objective vs. 
subjective semantics of ability.** We note that the above semantic rule corresponds to the notion of _objective ability_. That is, given a strategy, we only look at its outcome paths starting from the current global state of the system \(q\). The alternative, _subjective ability_, requires the strategy to succeed on all the paths starting from states indistinguishable from \(q\). Let \(\sim_{a}(q)=\{q^{\prime}\ |\ q\sim_{a}q^{\prime}\}\). This can be formalized by the following adaptation of the semantic rule: \[[\langle\!\langle A\rangle\!\rangle\gamma]^{\mathfrak{S}}_{M,q}\ =\ \bigsqcup_{s_{A}\in\Sigma^{\mathfrak{S}}_{A}}\bigsqcap_{a\in A}\bigsqcap_{q^{ \prime}\in\sim_{a}(q)}\bigsqcap_{\lambda\in out(q^{\prime},s_{A})}\{[\gamma]^ {\mathfrak{S}}_{M,\lambda}\};\] \[[[A]\gamma]^{\mathfrak{S}}_{M,q}\ =\ \bigsqcap_{s_{A}\in\Sigma^{\mathfrak{S}}_{A}} \bigsqcup_{a\in A}\bigsqcup_{q^{\prime}\in\sim_{a}(q)}\bigsqcup_{\lambda\in out (q^{\prime},s_{A})}\{[\gamma]^{\mathfrak{S}}_{M,\lambda}\}.\] A more detailed discussion on the epistemic aspects of strategic ability can be found in [64, 65]. We leave the proper treatment of diverse epistemic variants of mv-ATL\({}^{*}_{\rightarrow}\) for the future. ### Model checking techniques and formal results We emphasize again that the correctness of the techniques proposed in Section 5_does not depend on the actual definition of the strategy sets_\(\Sigma_{A}\). In consequence, the results carry over to the imperfect information case, and the techniques can be applied _in exactly the same way_ to obtain model checking reductions from mv-ATL\({}^{*}_{\mathfrak{S}\rightarrow}\) to the corresponding 2-valued cases. This demonstrates the power of the translation method that can be directly applied to a vast array of possible semantics for ATL\({}^{*}\). Again, multi-valued verification of mv-ATL\({}^{*}_{\mathfrak{S}\rightarrow}\) incurs only a linear increase in the complexity compared to the 2-valued case. ## 8 Case study: Multi-valued verification of the drone model Besides the theoretical results discussed in the preceding sections, we present an experimental evaluation of our approach to verification of strategic abilities. To this end, we propose a new scalable benchmark based on the running example employed throughout the paper. We use the CGS template of the team of drones patrolling for pollution in a city (cf. Examples 4.3 and 7.1, as well as the graphs in Figures 5 and 10), but with a more complex map to make the study more realistic. The details and outcomes of the experiments are presented further on in this section. ### Model description The benchmark is an extension of the drone model used in the previous sections. To recall, we consider a number of drones flying over a fixed area, with each drone modeled as a separate agent. The map is represented by a directed graph that defines the locations \(Loc\) and the paths used by the agents to move between those locations. We employ the map shown in Figure 11. For the experiments, we assume that the connections between locations are symmetric (i.e., can be traversed both ways), and hence an undirected graph is a sufficient representation of the map. The system consists of a number of drone agents and the environment. The set of all drones is denoted by \(D\). A drone can use its sensors to measure the pollution at its current location. Moreover, it can communicate with the other drones at the same and adjacent locations using Bluetooth, and obtain their current readings. 
The readings from all the ground sensors are broadcast by the monitoring center, and hence are available to all drones at all times. This is modeled by an epistemic indistinguishability relation, with the following information available to the drone: * Its current position (i.e., a location number); * Reading from the drone sensor in its current position; * Readings from the adjacent drones; * Readings from all the ground sensors; * A battery charge level; * A set of already visited places.

Figure 11: The map used in the experiments

We assume that the time span of the mission is at most \(30\) mins (currently, there are still relatively few types of drones that can fly longer than a couple of minutes, and they are mostly used in industrial and military contexts). With this provision, we can assume that the environment is stationary throughout the mission. That is, while traversing the map the drones will always get the same readings from a given location. Each drone can perform five possible actions: _go North_, _South_, _East_, _West_, and _Wait_. Any movement consumes energy. When the battery level drops to zero, the only action that the drone can perform is _Wait_. This means it will stay at its current location forever (since our model does not feature battery recharging). However, such an immobilized drone can still broadcast information to the nearby drones. As before, we use multi-valued atomic propositions \(\mathsf{pol}_{\mathsf{d}}\), \(d\in D\), with values drawn from the lattice \(\mathbf{2}+\mathbf{2}\times\mathbf{2}+\mathbf{2}\times\mathbf{2}\). The interpretation of \(\mathsf{pol}_{\mathsf{d}}\) is given by the combined readings of drone \(d\)'s sensor and of the ground sensor at the current location of \(d\). The models are scaled with respect to the following parameters: * Number of drones; * Initial battery level (the same for each drone). ### Formulas In the rest of this section, \(\mathsf{d}\) will refer to an arbitrary drone in the set \(D\). The first formula to be verified is \[\phi_{1}\quad=\quad\mathsf{EF}\ \mathsf{pol}_{\mathsf{d}}\ \to\ \langle\!\langle d \rangle\!\rangle\mathsf{F}\ \mathsf{pol}_{\mathsf{d}}\] It says that if drone \(d\)_might_ detect pollution to some degree, then \(d\) has a strategy to guarantee that this will indeed be the case. Note that the formula is an implication, and hence, due to the results in Section 5.2, a straightforward reduction to classical model checking is problematic. Because of that, we use the recursive reduction algorithm of Section 5.5. That is, we split \(\phi_{1}\) into its left hand side (\(\mathsf{EF}\ \mathsf{pol}_{\mathsf{d}}\)) and right hand side (\(\langle\!\langle d\rangle\!\rangle\mathsf{F}\ \mathsf{pol}_{\mathsf{d}}\)). We also observe that the left hand side of the implication, expressed in ATL and transformed to negation normal form, becomes \([\![\emptyset]\!]\mathsf{F}\ \mathsf{pol}_{\mathsf{d}}\). Thus, in order to determine the value of \(\phi_{1}\), we need to carry out multi-valued model checking of the following two formulas: * \(\phi_{1L}\ =\ [\![\emptyset]\!]\mathsf{F}\ \mathsf{pol}_{\mathsf{d}}\), and * \(\phi_{1R}\ =\ \langle\!\langle d\rangle\!\rangle\mathsf{F}\ \mathsf{pol}_{\mathsf{d}}\), each of them satisfying the preconditions of Theorem 5.4. We observe that the above specification is relatively weak: it requires that if pollution is present somewhere then the drone is able to find it at some location. 
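Once the values of \(\phi_{1L}\) and \(\phi_{1R}\) have been computed, the value of \(\phi_{1}\) follows by a single lattice comparison, as dictated by the semantics of \(\to\). A minimal sketch (in Python; the function name and the encoding of the order as a predicate are ours, for illustration only) could look as follows:

```python
# Sketch: combine the separately model-checked sides of phi1 = phi1L -> phi1R.
# leq encodes the partial order of the lattice L; top/bot are its bounds.

def implication_value(v_left, v_right, leq, top, bot):
    """[phi1 -> phi2] = TOP iff [phi1] <= [phi2] in L, and BOT otherwise."""
    return top if leq(v_left, v_right) else bot

# Toy usage on a chain 0 < 1 < 2; for the lattice 2 + 2x2 + 2x2 used in the
# case study, leq would instead encode that lattice's (non-total) order.
leq = lambda a, b: a <= b
print(implication_value(1, 2, leq, top=2, bot=0))  # 2 (TOP)
print(implication_value(2, 1, leq, top=2, bot=0))  # 0 (BOT)
```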
In order to allow for a finer-grained specification, we add to the drone model a family of atomic propositions \(\mathsf{at}_{\mathsf{d},\mathsf{loc}}\) with classical, 2-valued interpretation. More precisely, \(\mathsf{at}_{\mathsf{d},\mathsf{loc}}\) evaluates to \(\top\) in the states where drone \(d\) is at location \(loc\in Loc\), and to \(\bot\) everywhere else. In addition, we allow for cooperation between the drones. More exactly, we will be looking at joint strategies of the team of all drones \(D\) with the following property: if any of the drones might detect pollution at location \(loc\), then the drones can ensure that one of them will indeed detect it: \[\phi_{2}\quad=\quad\bigwedge_{loc\in Loc}\big{(}\mathsf{EF}\ \bigvee_{d\in D}( \mathsf{at}_{\mathsf{d},\mathsf{loc}}\wedge\mathsf{pol}_{\mathsf{d}})\quad \rightarrow\quad\langle\!\langle D\rangle\!\rangle\mathsf{F}\ \bigvee_{d\in D}(\mathsf{at}_{\mathsf{d},\mathsf{loc}}\wedge\mathsf{pol}_{ \mathsf{d}})\big{)}.\] Again, the formula is an implication, and thus requires separate treatment of the left and right hand sides of "\(\rightarrow\)." Here, we only report the verification results for the right-hand subformula, i.e.: * \(\phi_{2R}^{loc}\ =\ \langle\!\langle D\rangle\!\rangle\mathsf{F}\ \bigvee_{d\in D}(\mathsf{at}_{\mathsf{d},\mathsf{loc}}\wedge\mathsf{pol}_{ \mathsf{d}})\) for an arbitrarily selected value of \(loc\). The considered formulas emphasize the importance of the comparison operator \(\rightarrow\) for actual specification of properties. Many (if not most) relevant properties of multi-agent systems are expressed as an implication: if the assumptions are satisfied, the target property should hold as well. The multi-valued variant of such requirements demands that the target property \(\psi\) is satisfied to at least the same degree as the assumption \(\varphi\). ### Semantics and algorithms We note that formula \(\phi_{1L}\) refers only to the abilities of the empty coalition, and hence does not involve reasoning about imperfect information. In consequence, one can equally well evaluate it using the perfect information semantics of mv-ATL\({}^{*}\). Then, the translation in Section 5.1 reduces the multi-valued verification of \(\phi_{1L}\) to model checking of 2-valued ATL with perfect information. We implement the latter by means of the standard fixpoint algorithm from [2]. In contrast, the semantics of formulas \(\phi_{1R}\) and \(\phi_{2R}\) refers to strategies with imperfect information. Accordingly, the translation in Section 5.1 reduces the problem to model checking of 2-valued ATL with imperfect information. Since the exact model checking of abilities under imperfect information is hard, both theoretically [44, 66] and in practice [9, 67, 57], we get around the complexity by using the fixpoint-based approximate model checking algorithm proposed recently in [63]. That is, the 2-valued model checking of \(\phi_{1R}\) proceeds by a model-independent translation to its upper and lower variants \(\phi_{1R}^{U},\phi_{1R}^{L}\), both of which can be verified by fixpoint algorithms. If the verification output for \(\phi_{1R}^{U}\) and \(\phi_{1R}^{L}\) matches, it is guaranteed correct for \(\phi_{1R}\), too; otherwise, the outcome is inconclusive. The 2-valued model checking for \(\phi_{2R}\) is obtained analogously. As we will see, the output of the lower and the upper approximation always matched in our experiments (cf. Figures 13 and 14), thus providing a fully conclusive outcome. 
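The way the two approximations are combined can be summarized in a few lines. The sketch below (in Python; the function and parameter names are ours) returns a verdict only when the two bounds agree, mirroring the procedure described above:

```python
# Sketch: combine the outputs of the lower/upper fixpoint approximations of [63].

def approximate_verdict(lower_holds: bool, upper_holds: bool):
    """If the lower and upper approximations agree, their common answer is
    guaranteed correct for the original formula; otherwise it is inconclusive."""
    if lower_holds and upper_holds:
        return True
    if not lower_holds and not upper_holds:
        return False
    return None  # inconclusive: the approximations disagree

print(approximate_verdict(True, True))    # True  (conclusive)
print(approximate_verdict(False, True))   # None  (inconclusive)
```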
### Experimental results The results of the experiments are presented in Figures 12 (for formula \(\phi_{1L}\)), 13 (formula \(\phi_{1R}\)), and 14 (formula \(\phi_{2R}\)). For each of the formulas, we considered several configurations of the drone model. The main scaling factor was the number of drones in the system. The second source of complexity was the initial energy level (the same for every drone). The initial location on the map was always 0, for all the drones in the system.

Figure 12: Experimental results for \(\phi_{1L}\)

Figure 13: Experimental results for \(\phi_{1R}\)

Figure 14: Experimental results: \(\phi_{2R}^{loc}\) for \(\#drones=3\) and \(loc=7\)

The experiments were conducted on an Intel Core i7-6700 CPU with a dynamic clock speed of 2.60-3.50 GHz, 32 GB RAM, running under 64-bit Windows 10. The times are given in seconds; the timeout was set to 2 hours. As the performance results show, multi-valued verification of strategic ability scales up similarly to two-valued model checking [63], which confirms the theoretical results in Section 5.5. The software used to conduct the experiments can be found at the address [https://github.com/blackbat13/stv](https://github.com/blackbat13/stv). The software is implemented in Python 3. As it is an ongoing development, it does not accept any input language. Instead, model generators are used. Models are generated as transition graphs and stored explicitly in the memory. As the experiments show, the result of the formula depends mostly on the initial energy of the drones. If given enough energy, drones can visit every place on the map, hence detecting any pollution. On the other hand, even a drone with a very small battery capacity can detect something. As can be seen in Figures 12 and 13, for the cases in which the initial energy of the drones was less than 3, the answer was more informative than a simple "false", which is all we would have obtained with two-valued logic. It shows that multi-valued logics can provide the designer or analyst with much more useful information than a simple yes/no answer. ## 9 Conclusions In this paper, we study a variant of alternating-time temporal logic, denoted as mv-ATL\({}_{\rightarrow}^{*}\), where the truth values are taken from an arbitrary distributive lattice. We argue that multi-valued model checking of mv-ATL\({}_{\rightarrow}^{*}\) specifications can be useful, especially for systems whose models cannot be fully analyzed due to their complexity and/or inaccessibility of the relevant information. Other examples include systems with information coming from multiple, potentially conflicting sources. We propose the semantics of mv-ATL\({}_{\rightarrow}^{*}\) first in the simplest case of perfect information strategies and models with crisp, classical transition functions. Then, we show how to extend the framework to the case of multi-valued transitions, as well as other notions of strategies (in particular, variants of strategic reasoning for agents with limited observation capabilities). In terms of technical results, we prove that our multi-valued semantics of mv-ATL\({}_{\rightarrow}^{*}\) provides a conservative extension of the classical 2-valued variant. More importantly, we propose efficient (i.e., polynomial-time) translations from multi-valued model checking to the 2-valued case. We formally characterize the conditions under which the translation can be carried out by non-recursive one-to-many reduction, and propose a recursive procedure for the remaining instances of the problem. 
The proposed techniques are elegant enough to be directly applicable to other semantic variants of strategic ability, for example, those referring to imperfect information scenarios. This allows for non-classical model checking of abilities while benefiting from the ongoing development of classical model checkers and game solvers. Finally, we back up our proposal by a series of experiments in a simulated scenario of drones patrolling for pollution in a city. Besides promising performance results, the experiments also demonstrate the use of the _relevant implication_, based on comparison of truth values, which is among the main novel contributions of this paper. The operator can be used to provide a multi-valued counterpart of material implication, with an intuitive and appealing interpretation. This is especially important in multi-agent systems where many relevant properties are indeed based on implication, which would otherwise make them difficult to formalize in the multi-valued case. In the future, we plan to extend the framework of mv-ATL\({}_{\rightarrow}^{*}\) to richer specification languages, such as Strategy Logic [68, 69, 70]. We would also like to take a closer look at multi-valued models arising from state and action abstractions, and at the application of multi-valued model checking to verification of strategic ability under imperfect information. **Acknowledgements.** The authors thank Arthur Queffelec for his help in the implementation of the model checking algorithm. Moreover, Wojciech Jamroga acknowledges the support of the 7th Framework Programme of the European Union under the Marie Curie IEF project ReVINK (PIEF-GA-2012-626398). Wojciech Jamroga, Damian Kurpiewski, and Wojciech Penczek acknowledge the support of the National Centre for Research and Development (NCBR), Poland, under the PolLux projects VoteVerif (POLLUX-IV/1/2016) and STV (POLLUX-VII/1/2019). Damian Kurpiewski and Wojciech Penczek acknowledge also the support of CNRS/PAN under the project PARTIES.
2310.00094
What are the Radial Distributions of Density, Outflow Rates, and Cloud Structures in the M 82 Wind?
Galactic winds play essential roles in the evolution of galaxies through the feedback they provide. Despite intensive studies of winds, the radial distributions of their properties and feedback are rarely observable. Here we present such measurements for the prototypical starburst galaxy, M 82, based on observations by Subaru telescope. We determine the radial distribution of outflow densities ($n_e$) from the spatially-resolved [S II] $\lambda\lambda$ 6717, 6731 emission-lines. We find $n_e$ drops from 200 to 40 cm$^{-3}$ with radius ($r$) between 0.5 and 2.2 kpc with a best-fit power-law index of $r^{-1.2}$. Combined with resolved H$\alpha$ lines, we derive mass, momentum, and energy outflow rates, which drop quite slowly (almost unchanged within error bars) over this range of $r$. This suggests that the galactic wind in M 82 can carry mass, momentum, and energy from the central regions to a few kpc with minimal losses. We further derive outflow cloud properties, including size and column densities. The clouds we measure have pressures and densities that are too high to match those from recent theoretical models and numerical simulations of winds. By comparing with a sample of outflows in local star-forming galaxies studied with UV absorption-lines, the above-derived properties for M 82 outflows match well with the published scaling relationships. These matches suggest that the ionized gas clouds traced in emission and absorption are strongly related. Our measurements motivate future spatially resolved studies of galactic winds, which is the only way to map the structure of their feedback effects.
Xinfeng Xu, Timothy Heckman, Michitoshi Yoshida, Alaina Henry, Youichi Ohyama
2023-09-29T19:08:02Z
http://arxiv.org/abs/2310.00094v1
# What are the Radial Distributions of Density, Outflow Rates, and Cloud Structures in the M 82 Wind? ###### Abstract Galactic winds play essential roles in the evolution of galaxies through the feedback they provide. Despite intensive studies of winds, the radial distributions of their properties and feedback are rarely observable. Here we present such measurements for the prototypical starburst galaxy, M 82, based on observations by Subaru telescope. We determine the radial distribution of outflow densities (\(n_{\rm e}\)) from the spatially-resolved [S ii] \(\lambda\lambda\)6717, 6731 emission-lines. We find \(n_{\rm e}\) drops from 200 to 40 cm\({}^{-3}\) with radius (\(r\)) between 0.5 and 2.2 kpc with a best-fit power-law index of \(r^{-1.2}\). Combined with resolved H\(\alpha\) lines, we derive mass, momentum, and energy outflow rates, which drop quite slowly (almost unchanged within error bars) over this range of \(r\). This suggests that the galactic wind in M 82 can carry mass, momentum, and energy from the central regions to a few kpc with minimal losses. We further derive outflow cloud properties, including size and column densities. The clouds we measure have pressures and densities that are too high to match those from recent theoretical models and numerical simulations of winds. By comparing with a sample of outflows in local star-forming galaxies studied with UV absorption-lines, the above-derived properties for M 82 outflows match well with the published scaling relationships. These matches suggest that the ionized gas clouds traced in emission and absorption are strongly related. Our measurements motivate future spatially resolved studies of galactic winds, which is the only way to map the structure of their feedback effects. Galactic Winds (572), Galaxy evolution (1052), Galaxy kinematics and dynamics (602), Starburst galaxies (1570), Galaxy spectroscopy (2171) ## 1 Introduction Galactic winds, which are driven by energy and momentum supplied by star-formation (SF) or active galactic nuclei (AGNs), play an essential role in the evolution of galaxies (e.g., Chevalier & Clegg, 1985; Silk & Rees, 1998). They are responsible for various feedback effects, including regulating SF in galaxies, enriching the intergalactic and circumgalactic medium (IGM and CGM) with metals, and reducing the baryons in galactic discs to solve the "overcooling problem" (see reviews in Naab & Ostriker, 2017; Donahue & Voit, 2022; Heckman & Best, 2023; and references therein). The current understanding of starburst-driven galactic winds is that the energy source is provided by stellar winds and core-collapse supernovae from the population of massive stars (Chevalier & Clegg, 1985). These ejecta are thermalized in shocks to produce a very hot (up to \(\sim 10^{8}\) K) fluid which expands outward to form a very fast (up to 3000 km/s) and tenuous wind. This wind fluid interacts with ambient gas clouds, accelerating them outward at velocities of \(10^{2}\) to \(10^{3}\) km/s. These outflowing clouds span a wide range of phases, including hot (few million K), warm ionized (\(10^{4}\) K), neutral atomic, and molecular gas (e.g., Leroy et al., 2015). These outflowing clouds can be readily observed in both emission and absorption (see reviews in Heckman & Thompson, 2017; Veilleux et al., 2020). Constraining the impact of galactic winds requires estimating the total mass/energy/momentum that outflows carry. 
In principle, these can be measured for the different gas phases listed above. These quantities are commonly characterized as outflow rates, i.e., mass/energy/momentum carried by the outflowing material per unit time. In SF galaxies, which are the focus of this paper, the most extensive such studies refer to the warm ionized gas, from UV and optical absorption lines (e.g., Heckman et al., 2000; Pettini et al., 2000; Rupke et al., 2002; Martin, 2005, 2006; Grimes et al., 2009; Weiner et al., 2009; Rubin et al., 2010; Steidel et al., 2010; Martin et al., 2012; Bordoloi et al., 2014; Rubin et al., 2014; Heckman et al., 2015; Heckman & Borthakur, 2016; Chisholm et al., 2016a,b, 2017; Sugahara et al., 2017; Chisholm et al., 2018; Wang et al., 2022; Xu et al., 2022), and from optical emission lines (e.g., Heckman et al., 1990; Lehnert & Heckman, 1996; Newman et al., 2012a,b; Rupke & Veilleux, 2013; Wood et al., 2015; Davies et al., 2019; Freeman et al., 2019; Perna et al., 2019; Rupke et al., 2019; Swinbank et al., 2019; Burchett et al., 2021; Zabl et al., 2021; Avery et al., 2022; Marasco et al., 2022). Nearly all of these previous studies only derived the global outflow rates integrated over the observing aperture, except in a few cases (e.g., Leroy et al., 2015; Burchett et al., 2021). This is mainly due to the lack of spatially-resolved observations of well-extended outflows in nearby galaxies. Nonetheless, it has been long recognized that outflows in SF galaxies are collections of gas with a large range of radius, velocity, and gas phase (e.g., Heckman & Thompson, 2017; Veilleux et al., 2020). Thus, it is critical to map out the wind properties and outflow rates. Furthermore, the radial distributions of the structures of the outflowing clouds, including their volume filling factors (FF), densities, column densities (\(N_{\rm H,cl}\)), and sizes (\(R_{\rm cl}\)), are largely unknown (but see Xu et al., 2023). However, these parameters are not only essential in quantifying feedback effects, but also vital to constraining sub-grid physics in theoretical models and numerical simulations of winds. For the latter, recent studies show that \(R_{\rm cl}\) and the cloud mass \(M_{\rm cl}\) are the key parameters that determine whether outflowing clouds can survive long enough to be accelerated by the hot wind fluid (e.g., Gronke & Oh, 2020; Li et al., 2020; Sparre et al., 2020; Fielding & Bryan, 2022). However, there currently exist no constraints on the radial distribution of these parameters from observations. In this paper, we seek to shed light on the radial distribution of outflow rates and cloud properties based on the wealth of data on M 82, which is the most intensively studied nearby starburst galaxy. M 82 hosts the best-observed wind in any galaxy, exhibiting clear biconical outflowing multiphase gas out to distances of at least a few kpc. Detailed studies of the starburst activity and the multi-phase wind are made possible by its proximity (distance \(\sim\) 3.89 Mpc, Sakai & Madore, 1999), and are summarized in Heckman & Thompson (2017). Combining archival results with new Subaru imaging and resolved spectroscopic data, we aim to tackle various key problems that have not been well-studied previously: 1. What are the radial distributions of the mass, momentum, and energy outflow rates? Do the distributions imply that winds can supply sufficient mass, momentum, and energy onto scales large enough to impact the CGM? 2. What are the properties of outflowing clouds at different radii? 
Are they consistent with the cloud-survival criteria in current theoretical models and numerical simulations?

3. How are outflow rates connected between different phases? What are the overall combined feedback effects of these phases?

The structure of this paper is as follows. In Section 2, we introduce the observations and data. Then we describe how to calculate outflow density, rates, and cloud properties in Section 3, where we also present the radial profiles of these derived parameters. In Section 4, we discuss and compare our results with measurements of outflow rates estimated for other gas phases. We also place the M 82 outflow in the context of a large sample of local SF galaxies and compare our results to theoretical models and numerical simulations. We conclude the paper in Section 5. Throughout the paper, we adopt a distance to M 82 of 3.89 Mpc (Sakai & Madore, 1999), which leads to 18.9 pc/\({}^{\prime\prime}\).

## 2 Observations and Data

### Imaging Data

Optical imaging observations of M 82 were conducted with the Faint Object Camera And Spectrograph (FOCAS, Kashikawa et al., 2002) on the Subaru Telescope (Kaifu et al., 2000) in February 2000. These images are published in Ohyama et al. (2002). Two narrow-band filters were used: the N658 filter, which covers the emission lines of H\(\alpha\)\(\lambda\)6563 and [N ii] \(\lambda\lambda\)6548, 6583, and the N642 filter, which samples the adjacent continuum level. Their total exposure times are 600s and 360s, respectively, split into 120s sub-exposures to avoid saturation. The seeing was 0.7\({}^{\prime\prime}\) to 0.8\({}^{\prime\prime}\) during the observations. The data were reduced by pipelines in IDL and IRAF (Yoshida et al., 2000). In this paper, we are interested in the H\(\alpha\) emission lines. Thus, we subtract the scaled N642 image from the N658 image to get a pure H\(\alpha\) + [N ii] emission-line map, following the same methodology as in Ohyama et al. (2002). The result is shown in Figure 1.

### Spectroscopic Data

M 82 was then observed by FOCAS in the spectropolarimetric mode in January 2013. The detailed observations and data reductions are presented in Yoshida et al. (2019). In summary, the observations adopted a slit mask of eight 0.8'' \(\times\) 20.6'' slitlets at 23.7'' intervals and a VPH grism with 665 grooves mm\({}^{-1}\). These result in a spectral resolution of \(R\sim\) 1700 and a central wavelength of 6500 Å, which cover the important emission lines for our analyses, including H\(\alpha\) and the [S ii] \(\lambda\lambda\)6717, 6731 doublet. The slits are spatially distributed in three position angles (PAs) and at \(\sim\) 0 - 3 kpc from the nucleus. We draw the slit locations as the red circles in Figure 1. The total science exposure time is 24,000s. The data were reduced using standard CCD reduction pipelines in IRAF in Yoshida et al. (2019).

## 3 Analysis and Results

We start with measuring the radial distributions of the outflow's electron density and width in Sections 3.1 and 3.2, respectively. These values are then used to calculate the radial distribution of outflow rates in Section 3.3. In Yoshida et al. (2019), the emission lines are separated into two components given their spectropolarimetric observations: a polarized component due to scattering of the emission from the central starburst by dust in the outflow, and the total light, which is dominated by the intrinsic emission from the outflowing ionized gas. In this paper, we only use the latter.
Since our main tracers are H\(\alpha\) and [S ii], our calculations in this paper only represent the properties of the warm-ionized phase of outflows (T \(\sim\) 10\({}^{4}\) K). As described in previous publications for M 82 (e.g., Strickland & Heckman, 2007, 2009), this warm phase is presumably immersed in and interacting with a dilute volume-filling hot wind fluid (T \(\sim\) few \(\times\) 10\({}^{7}\) - 10\({}^{8}\) K). Hereafter, we distinguish these two components by referring to the former as (warm) outflows and the latter as (hot) winds.

### Radial Distribution of Outflow Densities

Yoshida et al. (2019) have already derived the electron density (\(n_{\rm e}\)) from the intensity ratio of [S ii] \(\lambda\)6731/\(\lambda\)6717 for each slit location, assuming a typical ionized gas temperature of 10\({}^{4}\) K (Osterbrock & Ferland, 2006). These measurements represent a luminosity-weighted mean density at a certain radius. However, these values show moderate scatter (due to intrinsic variations or low S/N) and also contain upper limits (see the gray symbols in Figure 2). Thus, to account for the scatter, we bin their values to get robust estimates of \(n_{\rm e}\) at different radial distances to the galactic center (i.e., \(r\)) as follows. We first split the measurements into radial bins given \(r_{min}\) = 0.5 kpc, \(r_{max}\) = 2.5 kpc, and log(\(\Delta r\)) = 0.1 dex. The cut at \(r_{min}\) is because the central region of M 82 is dominated by the starburst and does not show clear features of outflows (Shopbell & Bland-Hawthorn, 1998; Westmoquette et al., 2013). The cut at \(r_{max}\) is due to fewer reliable measurements at larger \(r\) reported in Yoshida et al. (2019). This is because the [S ii]-based density measurement reaches its low-density limit at \(n_{e}\sim\) 10 cm\({}^{-3}\) (Osterbrock & Ferland, 2006), so direct measurements of \(n_{\rm e}\) at larger radii (with lower densities) are not possible. Thus, we adopt survival analysis (which incorporates these upper limits1) to calculate the average \(n_{\rm e}\) in each radial bin. The results are shown as the red curve in Figure 2. We find that \(n_{\rm e}\) declines from \(\sim\) 200 cm\({}^{-3}\) at \(r\) = 0.5 kpc to \(\sim\) 40 cm\({}^{-3}\) at \(r\) = 2.2 kpc. We have fit the data to a power-law and find that \(n_{\rm e}(r)\) = 100 \(\times\) (\(\frac{r}{1165pc}\))\({}^{-1.17}\) cm\({}^{-3}\) (see the blue dashed line in Figure 2).

Footnote 1: We use the _survfit_ package in R-language (R Core Team, 2021).

Figure 1: Subaru/FOCAS image of H\(\alpha\) + [N ii] emission lines (with continuum already subtracted, see Section 2.1). The galaxy disk (white dashed line) has been rotated to be parallel with the x-axis, so the bipolar galactic outflow is perpendicular to the galactic disk (north-west side is up). We overlay the positions of the FOCAS spectroscopic observations described in Yoshida et al. (2019) as red circles (Section 2.2).

### Radial Distribution of the Lateral Outflow Widths

As shown in Figure 1, the galactic wind exhibits a biconical structure in H\(\alpha\), which is roughly perpendicular to the galaxy disk (rotated to be the x-axis, white dashed line). To better quantify the regions occupied by the H\(\alpha\) outflows, we estimate the lateral width of the outflow (\(W_{\rm out}\)) at different radial scales as follows. To remove the background, we first calculate the average counts in blank regions of the image and subtract them from the entire image. Then, for each of the radial bins adopted
in Section 3.1, we plot the cumulative H\(\alpha\) counts (F\({}_{\rm cum}\)) as a function of increasing x. An example is shown in the left panel of Figure 3. We then define the lateral width of the H\(\alpha\) outflow (i.e., \(W_{\rm out}\)(H\(\alpha\))) as the region between F\({}_{\rm cum}\) = 10% and F\({}_{\rm cum}\) = 90%. The resulting radial distribution of \(W_{\rm out}\)(H\(\alpha\)) is shown in the right panel. We find that \(W_{\rm out}\)(H\(\alpha\)) \(\sim\) 1.5 \(\times\)\(r\). If we assume the H\(\alpha\) outflow structure is cone-like, this corresponds to an opening angle of 73.7\({}^{\circ}\) or a solid angle of 0.4 \(\pi\) ster (per cone).

### Radial Distribution of Outflow Rates

To estimate the amounts of mass/energy/momentum carried by the warm ionized outflows per unit time, we can calculate various outflow rates. From the definitions, we have:

\[\begin{split}\dot{M}_{\rm out}(r)&=\frac{dM}{dt}=\frac{dM}{dr}\cdot\frac{dr}{dt}=\frac{dM}{dr}V_{\rm out}(r)\\ \dot{E}_{\rm out}(r)&=\frac{1}{2}\times\dot{M}_{\rm out}(r)\times V_{\rm out}^{2}(r)\\ \dot{P}_{\rm out}(r)&=\dot{M}_{\rm out}(r)\times V_{\rm out}(r)\end{split} \tag{1}\]

where \(\dot{M}_{\rm out}\), \(\dot{E}_{\rm out}\), and \(\dot{P}_{\rm out}\) are the mass, energy, and momentum rates of outflows for a certain radial bin, respectively; \(M\) and \(V_{\rm out}(r)\) are the mass and velocity of the outflows at this bin, respectively.

For \(V_{\rm out}\), we first adopt the observed line-of-sight (LOS) outflow velocities (\(V_{\rm obs}\)) from Shopbell & Bland-Hawthorn (1998), where the measurements are based on detailed maps of H\(\alpha\) emission lines on the southern side of the galaxy2. We note that the deprojected outflow velocity depends on the orientation of the galaxy and outflows, i.e., \(V_{\rm out}\) = DF \(\times\)\(V_{\rm obs}\), where \(V_{\rm out}\) is the deprojected outflow velocity in the rest-frame of M 82 and DF is the deprojection factor. Based on the radial velocity profiles of the double-peaked H\(\alpha\) emission mapped over the entire outflow, Shopbell & Bland-Hawthorn (1998) found that the best model to fit the data has the geometry of a pair of cones arranged as funnels with DF \(\sim\) 2 (see their Section 4.3.4). We adopt their DF hereafter in this paper.

Footnote 2: The northern side of M 82 is receding from us and is more dusty due to the obscuration by the disk itself. Therefore, measurements of \(V_{\rm out}\) and other related parameters are more reliable in the southern outflow of M 82 (e.g., Contursi et al., 2013), which is our focus in this paper.

Then we can calculate the volume of a partial cone for a certain radial bin with \(r_{\rm min}\) and \(r_{max}\) as the lower and upper radii by:

\[\begin{split} dVol&=\frac{1}{3}\pi dr\left(\left(\frac{W_{\rm min}}{2}\right)^{2}+\frac{W_{\rm min}}{2}\frac{W_{\rm max}}{2}+\left(\frac{W_{\rm max}}{2}\right)^{2}\right)\\ &=\frac{3}{16}\pi dr\times(r_{min}^{2}+r_{min}r_{max}+r_{max}^{2})\end{split} \tag{2}\]

In step 2 above, we have adopted \(W=1.5r\), as we measured from the H\(\alpha\) outflows in Section 3.2. Then we can calculate \(dM/dr\) for each radial bin:

\[\frac{dM}{dr}=\frac{dVol}{dr}\times\mathrm{FF}\,n_{\rm e}\mu_{e} \tag{3}\]

where \(\mu_{e}\) is the average atomic mass per electron, and FF is the volume filling factor of H\(\alpha\)-emitting outflows for this bin. Given that \(n_{\rm e}\) has been derived for each radial bin in Section 3.1, the only unknown here is FF.
We calculate FF from the H\(\alpha\) profiles as:

\[\mathrm{FF}=\frac{\mathrm{F}(H\alpha)}{n_{\rm e}n_{\rm p}W\times\alpha_{H\alpha}E_{H\alpha}} \tag{4}\]

where F(H\(\alpha\)) is the average surface-brightness of H\(\alpha\) for a certain radial bin, \(n_{\rm p}\) is the proton number density (we assume \(n_{\rm p}\) = \(n_{\rm e}\)/1.1 for fully ionized gas), \(\alpha_{H\alpha}\) = 7.88 \(\times\) 10\({}^{-14}\) cm\({}^{3}\) s\({}^{-1}\) is the recombination coefficient of H\(\alpha\) assuming T = 10,000 K (Draine, 2011), and \(E_{H\alpha}\) = \(3.0\times\) 10\({}^{-12}\) ergs is the H\(\alpha\) photon energy. For F(H\(\alpha\)), we measure it from the surface brightness of H\(\alpha\) given the Subaru/FOCAS spectra (Yoshida et al., 2019) and correct it for the dust extinction measured from Balmer decrements (Heckman et al., 1990). As noted above, due to the higher/uncertain dust extinction on the north-west side, we only compute a dust-corrected H\(\alpha\) surface brightness for the south-east side of the outflows. We will finally multiply the resulting outflow rates by a factor of two to represent the total amounts for M 82 (see below).

Figure 2: Radial distribution of the electron number density (\(n_{\rm e}\)) for M 82's galactic outflows. The gray symbols are the measurements from the [S ii] doublet reported in Yoshida et al. (2019) at different slit positions (Figure 1). Their upper limits are shown as arrows. The red curve is the binned \(n_{\rm e}\) at different radii considering the gray data points. We have adopted survival analyses to include the upper limits. We also show the best-fit power-law as the blue dashed line (see Section 3.1).

Figure 3: **Left:** The cumulative counts of H\(\alpha\) from the Subaru/FOCAS image (see Figure 1 and Section 3.2). The two blue dashed lines represent the locations where the cumulative counts equal 10% and 90% of the total counts, separately. We define the outflow width as the region between the blue dashed lines, which is 1.8 kpc for this radial bin. **Right:** Outflow width distribution among different radial distances to the galactic center of M 82 (\(r\)).

Figure 4: **Top-Left:** Radial distribution of the volume filling factor (FF) derived from the H\(\alpha\) and [S ii] emission lines. The best-fit power-law is shown as the blue dashed line. **Top-Right:** The black line represents the radial distribution of mass outflow rates (\(\dot{M}_{\rm out}\)) derived in Section 3.3. The blue line represents the de-projected outflow velocity (\(V_{\rm out}\)) from H\(\alpha\) (Shopbell & Bland-Hawthorn 1998). **Bottom:** Kinetic energy and momentum outflow rates derived in Section 3.3.

The resulting radial distribution of FF is shown in the first panel of Figure 4. We find that FF steadily decreases from 0.3% at \(r\) = 0.5 kpc to 0.005% at 2.2 kpc, which shows that the warm ionized clouds are extremely clumpy and occupy less volume as they travel further out of the galaxy. The best-fit power-law is given by \(\mathrm{FF}(r)=10^{-3}\times(\frac{r}{628pc})^{-1.8}\) and is shown as the blue dashed line in Figure 4.

Overall, combining Equations (1) - (4), we can get the radial distributions of \(\dot{M}_{\mathrm{out}}\), \(\dot{E}_{\mathrm{out}}\), and \(\dot{P}_{\mathrm{out}}\) for the warm-ionized outflows in M 82 (Figure 4). We find the outflow rates drop quite slowly and stay almost unchanged within the error bars from 0.8 to 2.2 kpc.
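To make the unit bookkeeping in Equations (1) - (4) concrete, the short Python sketch below chains the best-fit profiles into outflow rates for a few radial bins. The constant deprojected speed of 600 km s\({}^{-1}\), the mean mass per electron \(\mu_{e}\approx 1.2\,m_{\rm H}\), and the factor of two for the two cones are illustrative assumptions on our part rather than values quoted verbatim above.

```python
import numpy as np

PC = 3.086e18           # cm per parsec
M_H = 1.673e-24         # g
MSUN_YR = 6.30e25       # g/s per (Msun/yr)

# Best-fit radial profiles (r in pc; Sections 3.1-3.3)
n_e = lambda r: 100.0 * (r / 1165.0) ** -1.17      # cm^-3
FF = lambda r: 1.0e-3 * (r / 628.0) ** -1.8        # volume filling factor
V_OUT = 600.0e5         # cm/s; assumed constant deprojected speed
MU_E = 1.2 * M_H        # assumed mean mass per electron (ionized H + He)

def outflow_rates(r_lo, r_hi):
    """Mdot (g/s), Pdot (dyn), Edot (erg/s) for one radial bin, both cones."""
    r_mid = 0.5 * (r_lo + r_hi)
    # dVol/dr for a partial cone with W = 1.5 r, from Eq. (2)
    dvol_dr = (3.0 / 16.0) * np.pi * (r_lo**2 + r_lo * r_hi + r_hi**2) * PC**2
    dm_dr = dvol_dr * FF(r_mid) * n_e(r_mid) * MU_E    # Eq. (3)
    mdot = 2.0 * dm_dr * V_OUT                          # Eq. (1); x2 for two cones
    return mdot, mdot * V_OUT, 0.5 * mdot * V_OUT**2

for r in (500.0, 1000.0, 2000.0):
    md, pd, ed = outflow_rates(0.9 * r, 1.1 * r)
    print(f"r = {r:4.0f} pc: Mdot = {md / MSUN_YR:4.1f} Msun/yr, "
          f"log Pdot = {np.log10(pd):.1f}, log Edot = {np.log10(ed):.1f}")
```

At \(r\approx 1\) kpc this returns a few M\({}_{\odot}\) yr\({}^{-1}\), log \(\dot{P}_{\rm out}\approx 34\), and log \(\dot{E}_{\rm out}\approx 41.6\), consistent with the ranges listed in Table 1 below.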
This suggests that the galactic wind in M 82 can indeed carry the mass, energy, and momentum from the central regions out to a few kpc with minimal losses.

## 4 Discussion

### Comparisons of Spatially-resolved Multi-phase Outflow Rates in M 82

Detailed spatially-resolved studies of the M 82 wind exist for various outflowing phases. Here we will compare our measured outflow rates as a function of radius from H\(\alpha\) and [S ii] to the published values for these other phases. Specifically, we will compare to maps of the outflow rates of cold atomic gas (Martini et al., 2018) and cold molecular gas (Leroy et al., 2015). Estimating these rates requires a measurement of the intrinsic outflow velocity. Given the nearly edge-on orientation of M 82, there is a significant correction needed to convert observed line-of-sight outflow velocities to intrinsic values. To compare the outflow rates in the different phases in a consistent way, we use the deprojection factor (DF) adopted for the warm ionized gas to measure outflow rates in the previous studies, i.e., DF = 2 (Shopbell & Bland-Hawthorn, 1998; see Section 3.3 above). We have updated their measured outflow rates accordingly. We list the results in Table 1 and briefly discuss these values as follows.

Maps of the cold neutral outflowing gas in M 82, based on spatially resolved H i 21 cm emission, were discussed by Martini et al. (2018). Adopting DF = 2, the outflow velocities decline with radius from about 200 km s\({}^{-1}\) at 1 kpc to only 50 km s\({}^{-1}\) at 10 kpc. There is a corresponding steep decline in the mass outflow rates. Over the radial range we probe for the warm ionized gas (out to 2.2 kpc), the mass outflow rates in H i are similar to those for the warm ionized gas, but the outflow rates of momentum and (especially) kinetic energy are significantly smaller (Table 1).

Maps of outflowing cold molecular gas in M 82 based on observations of CO emission lines were described by Leroy et al. (2015). For DF = 2, the inferred \(V_{\rm out}\) is about 150 km s\({}^{-1}\), which is significantly smaller than the values in the warm ionized phase, but similar to the values for the atomic phase (Table 1). The implied mass outflow rates decline from about 10 M\({}_{\odot}\) yr\({}^{-1}\) at a radius of 1 kpc to values over an order-of-magnitude smaller by a radius of 3 kpc. Comparing these to the outflow rates for the warm ionized gas over the radial range we probe, the mass outflow rates probed by CO are two times larger, the momentum outflow rates are about three times smaller, and the kinetic energy outflow rates are about ten times smaller.

Taken together, the relatively small outflow velocities and the steep decline in outflow rates with radius are consistent with a picture in which the atomic and molecular gas traces a fountain flow that launches gas out to a few kpc (Leroy et al., 2015). We also note that the combined outflow rates of the three phases amount to about 50% of the momentum injected by the M 82 starburst, and only about 16% of the injected kinetic energy. These results are consistent with the finding that the very hot gas (3 - 8 \(\times 10^{7}\) K) in M 82 that is feeding the fast wind fluid carries the rest of the momentum flux and nearly all the kinetic energy flux (Strickland & Heckman, 2009).
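As a quick check on the unit conversions behind Table 1, the momentum and kinetic-energy rates follow from \(\dot{P}_{\rm out}=\dot{M}_{\rm out}V_{\rm out}\) and \(\dot{E}_{\rm out}=\dot{M}_{\rm out}V_{\rm out}^{2}/2\); the sketch below uses round-number values near \(r\sim\) 1 kpc, which are our own reading of the table and are shown for illustration only.

```python
import numpy as np

MSUN_YR = 6.30e25   # g/s per (Msun/yr)
KMS = 1.0e5         # cm/s per km/s

def p_and_e(mdot_msun_yr, v_kms):
    """Momentum (dyn) and kinetic-energy (erg/s) rates from Mdot and V_out."""
    mdot, v = mdot_msun_yr * MSUN_YR, v_kms * KMS
    return mdot * v, 0.5 * mdot * v**2

# Round-number values near r ~ 1 kpc (deprojected with DF = 2; cf. Table 1)
for phase, mdot, v in (("warm ionized (Halpha)", 6.3, 630.0),
                       ("cold atomic (HI 21 cm)", 3.2, 200.0),
                       ("cold molecular (CO)", 10.0, 150.0)):
    pdot, edot = p_and_e(mdot, v)
    print(f"{phase:23s} log Pdot = {np.log10(pdot):.1f}, "
          f"log Edot = {np.log10(edot):.1f}")
```

These inputs recover log \(\dot{P}_{\rm out}\approx\) 34.4, 33.6, and 34.0 and log \(\dot{E}_{\rm out}\approx\) 41.9, 40.6, and 40.9 for the three phases, matching the upper ends of the ranges in Table 1.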
### Comparisons of the Outflow in M 82 to those in the CLASSY Sample

The systematic properties of warm ionized outflows have been widely studied via interstellar absorption lines (e.g., Martin, 2005; Rupke et al., 2005; Chisholm et al., 2015; Heckman et al., 2015; Heckman & Borthakur, 2016; Xu et al., 2022). However, how the galactic outflow in M 82, as traced in emission, compares to these estimates for a large sample of starburst galaxies is still an open question. Here we compare the ionized outflow properties in M 82 with the outflows observed in the COS Legacy Archive Spectroscopy SurveY (CLASSY) atlas (Berg et al., 2022; James et al., 2022). CLASSY includes 45 low-redshift starburst galaxies (z = 0.002 - 0.182), which occupy a wide range of important galaxy properties, including stellar mass, SFR, and metallicity. Their outflow features and correlations with the galaxy properties have been analyzed in a homogeneous way and are reported in Xu et al. (2022) and Xu et al. (2023). These results are based on spatially unresolved FUV spectra from the Hubble Space Telescope (HST)/Cosmic Origins Spectrograph (COS).

In Figure 5, we show the outflow velocity versus two important galaxy properties, i.e., stellar mass (left) and SFR (right). The measurements of warm ionized outflows (Section 3) in M 82 are shown as red stars, where we adopt \(V_{\rm out}\) = 600 km s\({}^{-1}\), which is valid for \(r>0.7\) kpc (Figure 4). In general, we find that the ionized outflow in M 82 studied in emission lines matches the scaling relationships derived from the CLASSY sample based on absorption-line data.

Similarly, in Figure 6, we compare the integrated (total) mass, momentum, and energy outflow rates with the mass, momentum, and energy input (i.e., SFR, \(\dot{P}_{*}\), \(\dot{E}_{*}\)) provided by the starburst regions, respectively (Xu et al., 2022). For M 82, we show the mean values for radii between 0.5 and 2.2 kpc. We find these outflow rates in M 82 match well the trends reported for the CLASSY sample. In the top panels, we also show the mass-loading factor (\(\dot{M}_{\rm out}\)/SFR) versus \(V_{\rm out}\) and \(V_{\rm cir}\), where M 82 is located at the bottom-right corner. Again, the location of M 82 in these plots is consistent with the CLASSY sample. Furthermore, the median value of FF for CLASSY galaxies is 4 \(\times 10^{-3}\). This is similar to the one for M 82 (\(\sim 10^{-3}\) to 10\({}^{-4}\), Figure 4).

Given the best-fit \(n_{\rm e}\)(r) and FF(r) in Figures 2 and 4, we find FF \(\times\,n\propto r^{-3}\). Then we can also derive the LOS-integrated outflow column density from the H\(\alpha\) + [S ii] observations as:

\[N_{\rm H,LOS}=\int_{500pc}^{\infty}{\rm FF}\times n\times dr=3.2\times 10^{20}cm^{-2} \tag{5}\]

This value is quite close to the median \(N_{\rm H,LOS}\) = 4.9 \(\times 10^{20}\) cm\({}^{-2}\) measured in the CLASSY sample.

Overall, we find that the warm ionized outflows probed by H\(\alpha\) emission lines in M 82 follow the same scaling relationships and have similar LOS column densities as those reported in the CLASSY sample (Xu et al., 2022, 2023) for warm ionized outflows studied in absorption. This suggests that the outflow properties in M 82 are similar to those in other low-redshift starburst galaxies, and that the ionized gas seen in emission and absorption is likely to trace similar material. We summarize all these comparisons in Table 2\({}^{3}\).

Footnote 3: We refer readers to Table 3 in Xu et al. (2022) for more comparisons between CLASSY and other low-redshift starburst galaxy samples.
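Equation (5) is easy to reproduce from the two best-fit power laws. A minimal sketch, assuming \(n\approx n_{\rm e}\) and truncating the integral at a large outer radius (the integrand falls off roughly as \(r^{-3}\), so the tail is negligible):

```python
from scipy.integrate import quad

PC = 3.086e18  # cm per parsec

n_e = lambda r: 100.0 * (r / 1165.0) ** -1.17   # cm^-3 (Fig. 2 fit)
FF = lambda r: 1.0e-3 * (r / 628.0) ** -1.8     # (Fig. 4 fit)

# Eq. (5): integrate FF * n from 500 pc outward (n ~ n_e assumed)
val, _ = quad(lambda r: FF(r) * n_e(r), 500.0, 5.0e5)   # r in pc
print(f"N_H,LOS ~ {val * PC:.1e} cm^-2")                # ~3.2e20
```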
### Constraints on Outflow Cloud Parameters

\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Phases & Tracer & Log(\(V_{\rm out}\))\({}^{(a)}\) & Log(\(\dot{M}_{\rm out}\)) & Log(\(\dot{P}_{\rm out}\)) & Log(\(\dot{E}_{\rm out}\)) & Radii\({}^{(b)}\) & Reference \\ & & (km/s) & (M\({}_{\odot}\)/yr) & (dynes) & (ergs/s) & (kpc) & \\ \hline \hline Warm ionized & H\(\alpha\) & 2.8 & \(0.8-0.0\) & 34.3 – 33.9 & 41.7 – 41.4 & 0.7 – 2.2 & This paper \\ Cold atomic & H i 21cm & 2.3 & \(0.5-0.1\) & 33.6 – 33.2 & 40.6 – 40.2 & 1.0 – 2.2 & Martini et al. (2018) \\ Cold molecular & CO & 2.2 & \(1.0-0.3\) & 34.0 – 33.3 & 40.9 – 40.2 & 1.0 – 2.2 & Leroy et al. (2015) \\ \hline \hline \end{tabular} Note. – **(*).** For the cold atomic and molecular gas, we assume that the deprojection factor is the same as applied to the warm ionized gas (DF = 2; see Shopbell & Bland-Hawthorn, 1998; and Section 3.3). **(a).** These outflow velocities are used with the mass outflow rates to compute \(\dot{P}_{\rm out}\) and \(\dot{E}_{\rm out}\). For H\(\alpha\), we adopt the \(V_{\rm out}\) from Shopbell & Bland-Hawthorn (1998). **(b).** The values for all parameters cover the corresponding range in radii shown in this column. **(c).** The relevant values for the M 82 starburst are Log(SFR) = 0.9 (M\({}_{\odot}\)/yr), Log(\(\dot{P}_{*}\)) = 34.6 (dynes), and Log(\(\dot{E}_{*}\)) = 42.5 (ergs/s). \end{table} Table 1: Comparisons of Spatially Resolved Outflow Rates from Different Phases in M 82\({}^{(*)}\)

Figure 5: The log of the outflow velocity (\(V_{\rm out}\)) versus circular velocity (**Left**) and star formation rate (**Right**). Galaxies from the CLASSY sample are shown in black, while the measurements for M 82 from this paper are shown as the red star. Orange dashed lines represent the best-fit linear correlations presented in Xu et al. (2022). In general, the galactic winds in M 82 match the scaling relationships reported for low-redshift SF galaxies in CLASSY.

Outflows are in the form of separate clouds, whose sizes (\(R_{\rm cl}\)), masses (\(M_{\rm cl}\)), and column densities (\(N_{\rm H,cl}\)) are key parameters that determine whether they can survive long enough to be accelerated by the hot wind (e.g., Gronke & Oh, 2020; Li et al., 2020; Sparre et al., 2020; Kanjilal et al., 2021; Abruzzo et al., 2022; Fielding & Bryan, 2022). Thus, these parameters are critical to decipher if outflows can still be significant at large scales within a galaxy. Nonetheless, observational constraints on them are rare in the literature (except in Xu et al., 2023). As shown in Xu et al. (2023), one can estimate \(R_{\rm cl}\) and \(N_{\rm H,cl}\) as follows [see their Section 4.4]:

\[\begin{split} R_{\rm cl}(r)&=\frac{3}{4}\frac{\rm FF}{\rm CF_{sh}}L(r)\\ N_{\rm H,cl}(r)&=R_{\rm cl}(r)\times n_{\rm H,\;cl}(r)\end{split} \tag{6}\]

where L(r) is the line-of-sight (LOS) path-length through the outflow, and CF\({}_{sh}\) = CF/\(\beta_{sh}\) is the outflow LOS covering factor after accounting for shadowing effects. This is because the projected areas of different outflow clouds within the LOS can overlap, so that the total covered area drops by a factor of \(\beta_{sh}\). In M 82, we do not have direct estimates of CF\({}_{sh}\). Given the similarities of the outflows seen in M 82 and CLASSY galaxies (see Section 4.2), we adopt the median value of CF\({}_{sh}\) from the CLASSY sample as a rough estimate (\(\sim\) 1.7).
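Equation (6) can be evaluated directly from the best-fit profiles. In the sketch below, the LOS path length \(L(r)\approx W_{\rm out}(r)=1.5r\) and the cloud density \(n_{\rm H,cl}\approx n_{\rm e}/1.1\) are our own illustrative assumptions (not choices stated in the text), and they reproduce the quoted ranges only to within a factor of \(\sim\) 2:

```python
import numpy as np

PC = 3.086e18  # cm per parsec

n_e = lambda r: 100.0 * (r / 1165.0) ** -1.17   # cm^-3 (Fig. 2 fit)
FF = lambda r: 1.0e-3 * (r / 628.0) ** -1.8     # (Fig. 4 fit)
CF_SH = 1.7                                     # CLASSY median (this section)

for r in (500.0, 1000.0, 2000.0):               # r in pc
    L = 1.5 * r                                 # assumed LOS path ~ lateral width
    R_cl = 0.75 * FF(r) / CF_SH * L             # Eq. (6), pc
    n_cl = n_e(r) / 1.1                         # assumed cloud n_H ~ n_e / 1.1
    N_cl = R_cl * PC * n_cl                     # Eq. (6), cm^-2
    print(f"r = {r:4.0f} pc: R_cl = {R_cl:.2f} pc, "
          f"log N_H,cl = {np.log10(N_cl):.1f}")
```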
Overall, from radii of 0.5 to 2 kpc, we get \(R_{\rm cl}\) = 0.9 - 0.07 pc and \(N_{\rm H,cl}\) = 10\({}^{20.7}\) - 10\({}^{19.1}\) cm\({}^{-2}\). We also show their distributions in Figure 7. We can compare the above-derived warm-ionized outflow cloud properties to the results in Krieger et al. (2021), which attempted to constrain the molecular cloud properties in M 82 based on CO(1-0) observations. They find the molecular clouds are much larger (50\(\pm\)10 pc) and slower moving than the ionized clouds. Comparing with galaxies in the CLASSY sample, our measured \(R_{\rm cl}\) and \(N_{\rm H,cl}\) values are also \(0.2-2\) dex smaller (see the last two rows in Table 2).

Figure 6: **Top:** Correlations related to the mass loading factor (\(\dot{M}_{\rm out}\)/SFR). Labels and captions are the same as Figure 5. **Bottom:** Correlations of the momentum (energy) outflow rates versus the momentum (energy) supplied by the starburst. See discussion in Section 4.2.

### Comparisons with Theory

#### 4.4.1 Description of Models and Simulations

The results we have presented can be compared to recent models and simulations of outflows that are designed to capture the physical processes occurring in multi-phase galactic winds. Here we focus on two recent investigations of multi-phase galactic winds, namely the semi-analytic models by Fielding & Bryan (2022) and the high-resolution numerical simulations by Schneider et al. (2020)4. In both cases, there is an important distinction between the tenuous and high-velocity "wind fluid" that is created by the thermalized ejecta (winds and supernovae) of massive stars, and the denser, slower-moving ambient gas with which it interacts ("outflowing clouds"). The warm ionized outflow reported through H\(\alpha\) in this paper represents only the latter phase. Fielding & Bryan (2022) and Schneider et al. (2020) represent significant improvements on previous models and simulations. The Fielding & Bryan (2022) model is the first to incorporate physically-based mechanisms for the exchange of mass, momentum, and energy between the clouds and the wind. The Schneider et al. (2020) simulations combine significantly-improved spatial resolution that better captures the underlying physics, and more realistic treatments of how mass and energy are injected by the massive stars.

Footnote 4: There are other recent numerical simulations of outflows (Kim et al., 2020; Steinwandel et al., 2022; Rey et al., 2023); however, these simulations are very poor matches to M 82 in terms of the galaxy mass, SFR, and SFR/A.

In the Fielding & Bryan (2022) semi-analytic model, the wind fluid created by stellar ejecta in the starburst interacts with a population of pre-existing clouds in which there can be a two-way exchange of mass, momentum, and energy. To match the exact conditions of M 82, we have rerun their models with SFR = 8 M\({}_{\odot}\) yr\({}^{-1}\), starburst radius = 300 pc, and bi-polar outflows with an opening angle of 73.7\({}^{\circ}\) (as derived in Section 3.2). We present the results of four settings in Figures 8 and 9 (colored lines) and compare them with our observed outflow properties from H\(\alpha\) (black dotted lines). We discuss the comparisons in detail over the next few subsections.

The simulation in Schneider et al. (2020) starts with a wind fluid created in super-star-clusters located inside a starburst with a radius of 1000 pc and SFR = 20 M\({}_{\odot}\) yr\({}^{-1}\).
The starburst is embedded in a gaseous disk with a radial exponential scale-length of 1.6 kpc and a gas mass of 2.5 \(\times 10^{9}\) M\({}_{\odot}\). The spatial resolution is 5 pc. There is a temperature floor at \(10^{4}\) K, so that the warm phase will be more homogeneous in temperature in the simulation than in reality. In the simulations, the collimating effects of the disk lead to a bi-polar outflow that resembles the one in M 82.

\begin{table} \begin{tabular}{l c r r} \hline \hline Parameters & Unit & M 82 & CLASSY \\ (1) & (2) & (3) & (4) \\ \hline Log(\(\dot{M}_{\rm out}\)/SFR) & (1) & –0.30 & –0.60 \\ Log(\(V_{\rm out}\)/\(V_{\rm cir}\)) & (1) & 0.60 & 0.55 \\ Log(\(\dot{P}_{\rm out}\)) & (dynes) & 34.1 & 34.2 \\ Log(\(\dot{E}_{\rm out}\)) & (erg s\({}^{-1}\)) & 41.6 & 41.3 \\ Log(N\({}_{\rm H,LOS}\))\({}^{(a)}\) & (cm\({}^{-2}\)) & 20.5 & 20.7 \\ \hline Log(\(n_{\rm e}\)) & (cm\({}^{-3}\)) & 1.7 to 2.3 & 1.5 \\ Log(FF) & (1) & –2.6 to –4.1 & –2.5 \\ Log(\(R_{\rm cl}\))\({}^{(b,c)}\) & (pc) & –1.2 to –0.0 & 0.7 \\ Log(N\({}_{\rm H,cl}\))\({}^{(b,c)}\) & (cm\({}^{-2}\)) & 19.1 to 20.7 & 20.8 \\ \hline \hline \end{tabular} Note. – **(*).** For the M 82 values related to outflow rates (first four rows), we show the mean values between radii of 0.5 and 2.2 kpc. For M 82's outflow cloud properties (latter four rows), we present the values corresponding to the range in radii of \(0.5-2.2\) kpc. For the CLASSY sample, we list their published median values (Xu et al., 2022, 2023). See discussion in Section 4.2. **(a).** The line-of-sight integrated hydrogen column density (see Equation 5). **(b).** The cloud properties for M 82 are derived in Section 4.3. **(c).** The cloud radii and column densities for M 82 assume that the cloud covering factor has the median value derived for the CLASSY sample (see Equation 6). \end{table} Table 2: Comparisons of Outflows in M 82 and CLASSY\({}^{(*)}\)

#### 4.4.2 Outflow Properties

We begin by comparing the radial profiles of the outflow velocity (\(V_{\rm out}\)) and outflow rates of the warm ionized gas in M 82 to the predictions of Fielding & Bryan (2022) and Schneider et al. (2020). The former is shown in Figure 8 and the latter is summarized in Table 3. For the models by Fielding & Bryan (2022), blue and red lines represent models that produce faster, less dense winds and slower, denser winds, respectively (hereafter, FW and SW models). The dashed and solid lines represent different initial outflow cloud masses. Our measured values from H\(\alpha\) in M 82 are shown as the black dotted lines. We find the models with \(M_{\rm cl}\) = 10\({}^{1}\) M\({}_{\odot}\) predict larger velocities than are observed, and the agreement is better for the most massive clouds (\(M_{\rm cl}\) = 10\({}^{6}\) M\({}_{\odot}\)). The model with a slower and denser wind fluid (red curves) predicts lower velocities (more consistent with M 82). For \(\dot{M}_{\rm out}\), the FW model with lower-mass clouds (blue solid line in the second panel) produces \(\dot{M}_{\rm out}\) values that do not match the data for M 82: the rates are too small and decline too quickly with radius. The SW model (red lines) shows a better match to our measurements. For \(\dot{E}_{\rm out}\), the FW model with lower \(M_{\rm cl}\) and the SW model with higher \(M_{\rm cl}\) match better with the data. For \(\dot{P}_{\rm out}\), an SW model with initial \(M_{\rm cl}\) between our two cases should match the data better. On the other hand, we find that the outflow velocities are under-predicted in the Schneider et al. (2020) simulation (Table 3).
The simulation also shows a steady increase in outflow velocity with increasing radius, which is barely seen in M 82. In columns (4) and (5) of Table 3, the values of \(\dot{M}_{\rm out}\) derived for the warm ionized gas (normalized by the SFR) in the Schneider et al. (2020) simulation are significantly smaller than the values measured for M 82.

#### 4.4.3 Radial Density and Pressure Profiles

Next, we compare the measured radial density gradient in the warm ionized outflow in M 82 (Figure 2) to predictions from the two papers described above. Fielding & Bryan (2022) have shown the radial profile of the gas pressure \(P\) in the outflows. To convert our electron densities in M 82 to pressures (\(P\)), we take \(P/k_{\rm B}=2n_{e}T\), and assume \(T\sim 10^{4}\) K for the warm ionized gas. This is appropriate for photoionized gas (Schneider et al., 2020; Xu et al., 2022)6. In the left panel of Figure 9, it is clear that the wind pressures predicted by Fielding & Bryan (2022) are too low in all four models. The discrepancies grow with distance, reaching about a factor of \(\sim\)30 to over 100 at a distance of 2 kpc.

Footnote 6: If the emission-line gas is shock-heated, it will have a higher temperature and hence a higher inferred pressure. This will only strengthen our conclusions below.

One possible interpretation of this would be that the densities (and hence the pressures) derived from the [S ii] doublet ratio are biased to higher-than-average values. This could occur if there is a range in density along a line-of-sight, and the [S ii] emission is weighted towards the higher density regions (since the emissivity per unit volume is proportional to \(n^{2}\)).7 To test this, we can compare the pressures derived from [S ii] for the warm phase to those derived independently for the hot X-ray-emitting phase. This is reasonable, since there is a close morphological correspondence between the optical and soft X-ray emission in the M 82 outflow (Heckman & Thompson, 2017). Therefore, in the same panel, we also overlay the radial wind pressure profile derived from X-ray measurements of M 82 in Lopez et al. (2020) (green line). Since they assume X-ray volume filling factors (FF\({}_{\rm X}\)) of unity and since \(P_{\rm X}\propto\rm FF_{\rm X}^{-1/2}\), their measurements are strict lower limits, corresponding to minimum pressures \(\sim 25\%\) as large as our estimates. We emphasize that the complex filamentary structure seen in the soft X-ray emission is inconsistent with a unit filling factor8. Even if FF\({}_{\rm X}=1\), all the models significantly underpredict \(P_{\rm X}\) for \(r>0.7\) kpc.

Footnote 7: If the [S ii] densities are indeed biased high (by some factor B \(>>1\), as required to match the thermal pressures in the models), then all the observed outflow rates in Figure 8 would be boosted by B \(>>1\), and become unphysically large.

Footnote 8: The pressures derived from the [S ii] ratios and the X-ray data would agree for FF\({}_{\rm X}\sim 0.06\).

Comparison with the numerical simulation in Schneider et al. (2020) shows the same discrepancy (the last three columns in Table 3). In their simulations, a bi-conical outflow naturally develops, which resembles M 82. They adopt an SFR = 20 M\({}_{\odot}\) yr\({}^{-1}\), so we reduce their predicted densities by a factor 20/8 = 2.5. In their model, the starburst extends to a radius of 1 kpc, so we only compare their predicted range in outflow density to our data at radii of 1 and 2 kpc.
The predicted pressures are 30 to 70 times lower than our measurements. We note that in both the model and the simulation, the rapid radial drop in the predicted density of the warm gas is caused by the rapid drop in the thermal pressure (\(P_{\rm th}\)) of the wind fluid (via both an \(r^{-2}\) drop in density and the associated adiabatic cooling), and by the assumption that the warm ionized clouds are in balance with \(P_{\rm th}\) of the wind fluid. This mismatch between the models and the data implies that the clouds we observed are highly over-pressured relative to \(P_{\rm th}\) of the wind fluid.

Figure 7: **Left:** Radial distribution of the outflow cloud sizes. **Right:** Radial distribution of the cloud hydrogen column density (\(N_{\rm H,cl}\)) for the outflows. See Section 4.3 for details.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Radius & \(V_{\rm out,\;S20}\) & \(V_{\rm out,\;M82}\) & \((\dot{M}_{\rm out}/\mathrm{SFR})_{\rm S20}\) & \((\dot{M}_{\rm out}/\mathrm{SFR})_{\rm M82}\) & \(P_{\rm th,\;S20}\) & \(P_{\rm ram,\;S20}\) & \(P_{\rm M82}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline \hline 500 &... & 200 &... & 0.37 &... &... & 4.2 \(\times\) 10\({}^{6}\) \\ 1000 & 100 to 300 & 600 & 0.02 & 0.25 & 1.0 \(\times\) 10\({}^{5}\) & 4.6 \(\times\) 10\({}^{5}\) & 1.8 \(\times\) 10\({}^{6}\) \\ 2000 & 200 to 500 & 600 & 0.03 & 0.13 & 1.3 \(\times\) 10\({}^{4}\) & 2.1 \(\times\) 10\({}^{5}\) & 9.0 \(\times\) 10\({}^{5}\) \\ \hline \hline \end{tabular} Note. – **(*).** Radius is in units of parsec, velocities are in units of km s\({}^{-1}\), and pressures (\(P/k_{\rm B}\)) are in units of K cm\({}^{-3}\). The mass outflow rates are normalized by the SFR. **(2), (4), (6) and (7):** Model predictions from Schneider et al. (2020). For pressures, we have divided their values by 2.5 to correct for the lower SFR in M 82 compared to the model. **(5) and (8):** Results derived in this paper based on rest-optical observations of M 82 (see Figure 4). The pressures in M 82 are derived from the measured \(n_{e}\) (see Figure 2 and Section 4.4.3). \end{table} Table 3: Comparisons of Outflow Properties from M 82 and Schneider et al. (2020)\({}^{(*)}\)

Figure 8: Comparisons of the semi-analytic models by Fielding & Bryan (2022) with the derived outflow parameters in this paper. We have rerun their models to match the input parameters of M 82 (i.e., SFR = 8 M\({}_{\odot}\) yr\({}^{-1}\), starburst radius = 300 pc, and bi-polar outflows with an opening angle of 73.7\({}^{\circ}\)). The blue and red lines represent the models with an initial hot-phase mass loading factor of 0.1 and 0.5, respectively. The former produces a faster and less dense hot wind fluid (hereafter, FW model), while the latter generates a slower and denser wind fluid (hereafter, SW model). The solid and dashed lines represent different outflow cloud masses input to the models, i.e., 10\({}^{1}\) and 10\({}^{6}\) M\({}_{\odot}\), respectively. In the four panels, we show the model-predicted outflow cloud velocity and the mass, energy, and momentum outflow rates, respectively. In each panel, we overlay our observed outflow properties from H\(\alpha\) as black dotted lines with error bars. We discuss these comparisons in detail in Section 4.4.

In contrast, Heckman et al.
(1990) and Lehnert & Heckman (1996) showed that the radial density profiles in starburst outflows (including M 82) could be explained if the cloud pressure is set by the ram pressure (\(P_{\rm ram}\)) of the wind fluid. The ratio of the ram and thermal pressures in the wind fluid will be 5/3 \(M^{2}\), where \(M\) is the Mach number in the wind fluid. Since \(M\gg 1\) in these supersonic winds, the pressure differences can be substantial. To compute the ram pressure on the clouds, we need to use the relative velocity between the wind and the cloud rather than the wind velocity. Doing so, we then find that in both the Fielding & Bryan (2022) models and the Schneider et al. (2020) simulations, the ram pressures on the clouds are indeed significantly larger than the thermal pressures, and in better agreement with the M 82 data (see the right panel in Figure 9 and Table 3). We note that existing numerical simulations of wind-cloud interactions are not consistent with a balance between cloud thermal pressure and wind ram pressure. Regardless of the physical basis of the disagreement between theory and the data, this implies that both the numerical simulations and the semi-analytic model are missing some important physics. It is then unclear how this missing physics would affect the other theoretical predictions.

#### 4.4.4 Other Cloud Properties

The results presented in Fielding & Bryan (2022) allow us to compare other key properties of their clouds to what we have estimated for M 82, namely the cloud radii and column densities (i.e., \(R_{\rm cl}\) and \(N_{\rm H,cl}\), derived in Section 4.3). Fielding & Bryan (2022) do not plot the modelled \(R_{\rm cl}\), but these can be inferred from their \(M_{\rm cl}\) and \(N_{\rm H,cl}\) values. At a fiducial distance of 1 kpc from the starburst, the implied \(R_{\rm cl}\) ranges from \(\sim\) 5 to 200 pc for \(M_{\rm cl}=10^{1}\) to \(10^{6}\) M\({}_{\odot}\), respectively. Our measured \(R_{\rm cl}\) in M 82 (\(<0.9\) pc) are smaller than those of even the lowest-mass clouds in the models. Then we can compare \(N_{\rm H,cl}\) between the models and the data. At a fiducial distance of 1 kpc, we find a range in the models of \(N_{H}\) from \(1\times 10^{19}\) to 8 \(\times 10^{20}\) cm\({}^{-2}\) for the clouds with masses of \(10^{1}\) to \(10^{6}\) M\({}_{\odot}\), respectively. Here, the observed M 82 warm ionized clouds are quite similar to the values in the models (see Figure 7). We conclude that the clouds in the models have similar column densities but are too large compared with our observed warm ionized outflows for M 82. In the future, we will apply similar models to the CLASSY sample to study this discrepancy more generally.

## 5 Conclusion

In this paper, we have reported the first estimates of the radial distributions of the gas density, outflow rates, and cloud properties for the warm ionized gas in the M 82 wind, based on rest-optical data from the Subaru telescope. The main results are summarized as follows:

* We have derived the radial distribution of outflow densities based on the [S ii] \(\lambda\lambda\)6717, 6731 emission lines. We find that the density drops from \(\sim\) 200 cm\({}^{-3}\) at \(r\) = 0.5 kpc to \(\sim\) 40 cm\({}^{-3}\) at \(r\) = 2.2 kpc, while the best-fit power-law is \(n_{\rm e}(r)=100\times(\frac{r}{1165pc})^{-1.17}\) (Figure 2 and Section 3.1).

* We calculated the radial distribution of the lateral width of the outflow based on the Subaru/FOCAS image of M 82.
We find that the lateral width is \(\sim\) 1.5 \(\times\)\(r\), where \(r\) is the outflow distance to the galactic center (Section 3.2). This leads to a total solid angle of 0.8 \(\pi\) ster for the bi-conical H\(\alpha\) outflow in M 82.

* Based on the derived outflow densities and widths, we then estimated the radial distribution of the volume filling factor (FF), which drops from \(\sim\) 10\({}^{-3}\) to 10\({}^{-4}\) over the range of \(r\) = 0.5 to 2.2 kpc. This leads to the best-fit power-law \({\rm FF}(r)=10^{-3}\times(\frac{r}{628pc})^{-1.8}\).

* We measured the mass/energy/momentum outflow rates and found that they drop quite slowly with radius, and stay almost unchanged between 0.8 and 2.2 kpc (Section 3.3). This suggests that the galactic winds in M 82 can indeed supply mass, momentum, and kinetic energy from the central regions out to at least a few kpc with minimal losses.

* We found that the deprojected velocity of the warm ionized outflow (\(\sim\) 600 km s\({}^{-1}\)) is \(\sim\) 3 - 4 times larger than the velocities seen in the H i 21 cm and CO emission lines. These velocities are consistent with a picture in which the atomic and molecular gas actually trace a fountain flow extending out to a few kpc (Section 4.1).

* By comparing to a large sample of local star-forming galaxies in the CLASSY sample studied using UV absorption-lines, we found the warm ionized outflow probed by the H\(\alpha\) emission lines in M 82 follows similar scaling relationships. This suggests that the outflow properties in M 82 are similar to other local star-forming and starburst galaxies. The consistency between CLASSY and M 82 also suggests that the ionized gas seen in emission and absorption is likely to trace similar material (Section 4.2).

* We estimated the outflow cloud sizes, which are 0.07 - 0.9 pc, and cloud column densities, which are \(10^{19.1}\) - \(10^{20.7}\) cm\({}^{-2}\); both drop steadily from 0.5 to 2.0 kpc (Section 4.3). These values are \(0.2-2\) dex smaller than the ones measured for the clouds in the warm ionized outflows based on the UV absorption-line data for the CLASSY sample.

* We compared the warm ionized outflows in M 82 with the theoretical models and simulations from Schneider et al. (2020) and Fielding & Bryan (2022) in Section 4.4. After accounting for geometrical and SFR differences, we found that the thermal pressures in the clouds predicted by these models are far smaller than our measured values in the M 82 data. There is better agreement with the wind ram pressures in the models and simulations.

Overall, we have presented novel measurements of the radial distributions of outflow properties in M 82, including the outflow velocity, density, rates, and cloud properties. Our work motivates similar spatially resolved studies of the ionized gas in a larger sample of galactic winds. Various essential questions await answers. For example, what are the general radial distributions of outflow properties and associated feedback? How can we adopt these radial distributions to help constrain future models and simulations of galactic winds? How are the different phases of winds connected spatially and kinematically, based on high spatial resolution, multi-wavelength data? Answering these questions will ultimately reveal the complex properties and structures of galactic winds and resolve their feedback on their host galaxies.

This research is based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has cultural, historical, and natural significance in Hawaii. X.X. and T.H. thank D. Fielding, G.
Bryan, and M. Gronke for useful discussions. Subaru Telescope
2305.19773
Pareto Frontier for the Performance-Complexity Trade-off in Beyond Diagonal Reconfigurable Intelligent Surfaces
Reconfigurable intelligent surface (RIS) is an emerging technology allowing to control the propagation environment in wireless communications. Recently, beyond diagonal RIS (BD-RIS) has been proposed to reach higher performance than conventional RIS, at the expense of higher circuit complexity. Multiple BD-RIS architectures have been developed with the goal of reaching a favorable trade-off between performance and circuit complexity. However, the fundamental limits of this trade-off are still unexplored. In this paper, we fill this gap by deriving the expression of the Pareto frontier for the performance-complexity trade-off in BD-RIS. Additionally, we characterize the optimal BD-RIS architectures reaching this Pareto frontier.
Matteo Nerini, Bruno Clerckx
2023-05-31T12:06:47Z
http://arxiv.org/abs/2305.19773v2
Pareto Frontier for the Performance-Complexity Trade-off in Beyond Diagonal Reconfigurable Intelligent Surfaces ###### Abstract Reconfigurable intelligent surface (RIS) is an emerging technology allowing to control the propagation environment in wireless communications. Recently, beyond diagonal RIS (BD-RIS) has been proposed to reach higher performance than conventional RIS, at the expense of higher circuit complexity. Multiple BD-RIS architectures have been developed with the goal of reaching a favorable trade-off between performance and circuit complexity. However, the fundamental limits of this trade-off are still unexplored. In this paper, we fill this gap by deriving the expression of the Pareto frontier for the performance-complexity trade-off in BD-RIS. Additionally, we characterize the optimal BD-RIS architectures reaching this Pareto frontier. Beyond diagonal reconfigurable intelligent surface (BD-RIS), Pareto frontier, performance-complexity trade-off. ## I Introduction Reconfigurable intelligent surface (RIS) has recently gained a lot of popularity as a technology able to make the propagation environment smart and reconfigurable in wireless networks [1, 2, 3]. A RIS is composed of a large number of electrically tunable reflective elements that can be controlled to provide a passive beamforming gain. Due to its ultra-low power consumption, low profile, and low cost, RIS is expected to efficiently enhance the performance and coverage of future wireless communications. In a conventional RIS architecture, also known as single-connected, each element is independently controlled by a tunable impedance component [4]. This results in conventional RIS having a diagonal scattering matrix, also commonly known as a phase shift matrix. To improve the capabilities of RIS, beyond diagonal RIS (BD-RIS) has been proposed as a generalization of conventional RIS, in which the scattering matrix is not limited to being diagonal [5]. The key novelty introduced in BD-RIS is the presence of tunable impedance components interconnecting the RIS elements, adding further flexibility to the RIS at the expense of additional circuit complexity. The single-connected RIS architecture has been first generalized in [4]. By interconnecting some or all the RIS elements to each other, group- and fully-connected RIS architectures have been proposed, respectively [4]. Group- and fully-connected RISs have been globally optimized in closed form assuming continuous reflection coefficients in [6], while they have been optimized using discrete reflection coefficients in [7]. Besides, BD-RIS have been modeled using graph theory in [8], where BD-RIS architectures have been described through graphs capturing the presence of tunable impedance components between the RIS elements. As a result of this graph theoretical modeling, two low-complexity BD-RIS architectures have been proposed, namely forest- and tree-connected RISs [8]. BD-RIS has been studied in several contexts showing significant performance gains over conventional RIS. In [9], a BD-RIS model has been developed unifying different BD-RIS working modes (reflective/transmissive/hybrid) and different BD-RIS architectures (single/group/fully-connected). In [10], multi-sector BD-RIS has been introduced to efficiently enable full-space coverage, where the elements are divided into multiple sectors, each covering a narrow region of space. 
Non-diagonal RIS [11] and dynamically group-connected RIS [12] have been proposed to outperform conventional RIS and group-connected RIS, respectively, thanks to their dynamic interconnections reconfigured on a per channel realization basis. Additionally, BD-RIS has been shown to enlarge the coverage and improve the sum rate in rate splitting multiple access (RSMA) systems [13, 14], and to improve communication capacity and sensing precision in dual-function radar-communication (DFRC) systems [15].

When designing new BD-RIS architectures, the critical issue is the trade-off between performance and circuit complexity, given by the number of tunable impedance components in the BD-RIS architecture [8]. On the one hand, the single-connected RIS is the architecture with the lowest circuit complexity since there are no interconnections among the RIS elements. Due to its limited flexibility, the single-connected RIS can only achieve a reduced performance. On the other hand, the fully-connected RIS has the highest circuit complexity since each RIS element is connected to all others through a tunable impedance component, enabling the highest performance. Several BD-RIS architectures have been proposed to trade performance and complexity. However, the fundamental limits of this trade-off are still unexplored. To fill this gap, we investigate how to optimally trade the achievable performance and the circuit complexity in BD-RIS architectures. The contribution of this letter is twofold. _First_, we derive the Pareto frontier for the performance-complexity trade-off offered by BD-RIS in single-input single-output (SISO) systems. _Second_, we characterize the optimal BD-RIS architectures allowing us to achieve this Pareto frontier.

## II System Model

Consider a SISO communication system aided by an \(N\)-element RIS. The \(N\) elements of the RIS are connected to an \(N\)-port reconfigurable impedance network, with scattering matrix \(\mathbf{\Theta}\in\mathbb{C}^{N\times N}\). Defining \(x\in\mathbb{C}\) as the transmitted signal and \(y\in\mathbb{C}\) as the received signal, we have \(y=hx+n\), where \(h\in\mathbb{C}\) is the wireless channel and \(n\in\mathbb{C}\) is the additive white Gaussian noise (AWGN) at the receiver. Assuming that the direct link between the transmitter and the receiver is negligible compared to the RIS-aided link, the channel \(h\) can be written as

\[h=\mathbf{h}_{R}\mathbf{\Theta}\mathbf{h}_{T}, \tag{1}\]

where \(\mathbf{h}_{R}\in\mathbb{C}^{1\times N}\) and \(\mathbf{h}_{T}\in\mathbb{C}^{N\times 1}\) refer to the channels from the RIS to the receiver and from the transmitter to the RIS, respectively [4]1. In this study, we assume independent and identically distributed (i.i.d.) Rayleigh fading channels \(\mathbf{h}_{R}\sim\mathcal{CN}\left(\mathbf{0},\mathbf{I}\right)\) and \(\mathbf{h}_{T}\sim\mathcal{CN}\left(\mathbf{0},\mathbf{I}\right)\).

Footnote 1: Since \(\mathbf{h}_{R}\mathbf{\Theta}\mathbf{h}_{T}\) can always be co-phased with the direct link, our conclusions are not impacted by the direct link. In the case of a non-negligible direct link, the performance would merely be scaled up, depending on its strength. Thus, we neglect the direct link to gain fundamental insights not depending on its strength.
When reconfiguring a RIS, the scattering matrix \(\mathbf{\Theta}\) is typically optimized to maximize the performance given by the received signal power

\[P_{R}=P_{T}\left|\mathbf{h}_{R}\mathbf{\Theta}\mathbf{h}_{T}\right|^{2}, \tag{2}\]

where \(P_{T}=\mathrm{E}[|x|^{2}]\) is the transmitted signal power. Considering passive RISs with lossless and reciprocal impedance networks, the matrix \(\mathbf{\Theta}\) is in general subject to the constraints \(\mathbf{\Theta}^{H}\mathbf{\Theta}=\mathbf{I}\) and \(\mathbf{\Theta}=\mathbf{\Theta}^{T}\) [16]. Furthermore, additional constraints on \(\mathbf{\Theta}\), limiting the received signal power, are present depending on the BD-RIS architecture [4, 8].

## III Problem Formulation

Conventional RIS, also known as single-connected RIS, is the least complex architecture, achieving the lowest performance, given by

\[\bar{P}_{R}^{\mathrm{Single}}=P_{T}\left(\sum_{n=1}^{N}\left|\left[\mathbf{h}_{R}\right]_{n}\left[\mathbf{h}_{T}\right]_{n}\right|\right)^{2}, \tag{3}\]

since it includes only \(N\) tunable impedance components [4]. In contrast, the tree-connected RIS has been proved to be the least complex architecture achieving the performance upper bound

\[\bar{P}_{R}^{\mathrm{Tree}}=P_{T}\left\|\mathbf{h}_{R}\right\|^{2}\left\|\mathbf{h}_{T}\right\|^{2}, \tag{4}\]

with \(2N-1\) tunable impedance components [8]. In this letter, our goal is to determine the maximum performance achievable by BD-RIS architectures with circuit complexity \(C\in[N,2N-1]\), representing the number of tunable components2. Furthermore, we are interested in which BD-RIS architectures allow us to reach such a maximum performance, denoted as "optimal" BD-RIS architectures in the following.

Footnote 2: In our analysis, we preclude BD-RISs with dynamic interconnections, since they require switches and hence additional circuit complexity.

We begin by characterizing the maximum received signal power achievable by a given BD-RIS architecture. To this end, we consider the modeling of BD-RIS based on graph theory developed in [8]. According to [8], each BD-RIS architecture can be described through a graph \(\mathcal{G}\) capturing the presence of tunable impedance components between its RIS elements. We denote as \(G\) the number of connected components of such a graph \(\mathcal{G}\), where a connected component of a graph is defined as a connected subgraph that is not part of any larger connected subgraph [17]. Besides, \(N_{g}\geq 1\) is the number of RIS elements included in the \(g\)th component, with \(\sum_{g=1}^{G}N_{g}=N\). In agreement with previous work on BD-RIS [4]-[12], we refer to the connected components of \(\mathcal{G}\) as the "groups" of the corresponding BD-RIS architecture. According to [8], the maximum received signal power obtained by the BD-RIS associated with \(\mathcal{G}\) is given by

\[\bar{P}_{R}=P_{T}\left(\sum_{g=1}^{G}\left\|\mathbf{h}_{R,g}\right\|\left\|\mathbf{h}_{T,g}\right\|\right)^{2}, \tag{5}\]

where \(\mathbf{h}_{R,g}\in\mathbb{C}^{1\times N_{g}}\) and \(\mathbf{h}_{T,g}\in\mathbb{C}^{N_{g}\times 1}\) contain the \(N_{g}\) elements of \(\mathbf{h}_{R}\) and \(\mathbf{h}_{T}\) corresponding to the \(N_{g}\) RIS elements included in the \(g\)th group, respectively. In the case of i.i.d. fading channels, we can assume that each group includes adjacent RIS elements with no loss of generality, such that \(\mathbf{h}_{R}=[\mathbf{h}_{R,1},\ldots,\mathbf{h}_{R,G}]\) and \(\mathbf{h}_{T}=[\mathbf{h}_{T,1}^{T},\ldots,\mathbf{h}_{T,G}^{T}]^{T}\).
Thus, the maximum received signal power \(\bar{P}_{R}\) achievable by a given BD-RIS architecture solely depends on \(G\) and the group sizes \(N_{1},\ldots,N_{G}\). To express the maximum received signal power \(\bar{P}_{R}\) achievable with a circuit complexity \(C\) as a function of \(C\), we introduce the following three results. First, we characterize the optimal BD-RIS architectures through the following lemma. **Lemma 1**.: _All the optimal BD-RIS architectures have a corresponding graph being acyclic, also known as a forest._ Proof.: Please refer to Appendix A. In other words, a BD-RIS architecture can be optimal only if its graph does not contain any cycle, i.e., a finite sequence of distinct edges joining a sequence of vertices, where only the first and last vertices are equal [17]. Second, we use the following result from graph theory [17]. **Lemma 2**.: _If a graph \(\mathcal{G}\) is a forest, then it has \(G=N-L\) connected components, where \(N\) is the number of vertices and \(L\) is the number of edges._ Proof.: Please refer to Appendix B. Third, by using Lemma 1 and Lemma 2, we can derive the following proposition. **Proposition 1**.: _An optimal BD-RIS architecture with \(N\) elements and circuit complexity \(C\), with \(C\in[N,2N-1]\), has a corresponding graph with \(G=2N-C\) connected components._ Proof.: Please refer to Appendix C. According to Proposition 1, given a circuit complexity \(C\), the number of groups \(G\) in the corresponding optimal BD-RIS is fixed. Thus, our problem is to find the group sizes \(N_{1},\ldots,N_{G}\) of the BD-RIS architecture that maximize the performance \(\mathrm{E}\left[\bar{P}_{R}\right]\), with fixed \(G\). The corresponding optimization problem is given by \[\max_{N_{1},\ldots,N_{G}}~{}\mathrm{E}\left[\bar{P}_{R}\right] \tag{6}\] \[\mathrm{s.t.}~{}~{}N_{g}\geq 1,\,\forall g,~{}\sum_{g=1}^{G}N_{g}=N, \tag{7}\] where \(G=2N-C\) is fixed depending on the circuit complexity \(C\). ## IV Pareto Frontier To solve problem (6)-(7), we assume \(P_{T}=1\) with no loss of generality and, making use of the i.i.d. channels assumption, we write the average received signal power (5) as \[\mathrm{E}\left[\bar{P}_{R}\right]=\sum_{g=1}^{G}\mathrm{E}\left[ \left\|\mathbf{h}_{R,g}\right\|^{2}\right]^{2}\\ +\sum_{g_{1}\neq g_{2}}\mathrm{E}\left[\left\|\mathbf{h}_{R,g_{1 }}\right\|\right]^{2}\mathrm{E}\left[\left\|\mathbf{h}_{R,g_{2}}\right\|\right] ^{2}. \tag{8}\] Using the moment of the \(\chi_{2N_{g}}\) distribution, we have that \(\mathrm{E}[\left\|\mathbf{h}_{R,g}\right\|]=\Gamma(N_{g}+1/2)/\Gamma(N_{g})\) and \(\mathrm{E}[\left\|\mathbf{h}_{R,g}\right\|^{2}]=N_{g}\), for \(g=1,\ldots,G\), where \(\Gamma(\cdot)\) refers to the gamma function. Thus, we can write \[\mathrm{E}\left[\bar{P}_{R}\right]=\sum_{g=1}^{G}N_{g}^{2}\\ +\sum_{g_{1}\neq g_{2}}\left(\frac{\Gamma\left(N_{g_{1}}+1/2 \right)}{\Gamma\left(N_{g_{1}}\right)}\right)^{2}\left(\frac{\Gamma\left(N_{g _{2}}+1/2\right)}{\Gamma\left(N_{g_{2}}\right)}\right)^{2}. \tag{9}\] The expression of \(\mathrm{E}\left[\bar{P}_{R}\right]\) in (9) can be now simplified by using the relationship \[\left(\frac{\Gamma\left(M+1/2\right)}{\Gamma\left(M\right)}\right)^{2}=M- \frac{1}{4}+\frac{1}{32M}+\mathcal{O}\left(\left(\frac{1}{M}\right)^{2}\right), \tag{10}\] given by the Laurent series expansion at \(M=\infty\)[18]. Remarkably, the function \((\Gamma(M+1/2)/\Gamma(M))^{2}\) is well approximated by \(M-1/4+1/(32M)\) for any positive integer \(M\), despite the series being computed at \(M=\infty\). 
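Both the exact expression (9) and the approximation (10) are straightforward to verify numerically. The following Python sketch evaluates the gamma-function ratio via the log-gamma function for numerical stability, and cross-checks (9) by Monte Carlo for an example grouping of our own choosing:

```python
import numpy as np
from scipy.special import gammaln

# f(M) = (Gamma(M + 1/2) / Gamma(M))^2, via log-gamma for stability
f = lambda M: np.exp(2.0 * (gammaln(M + 0.5) - gammaln(M)))
f_approx = lambda M: M - 0.25 + 1.0 / (32.0 * M)        # truncation of Eq. (10)

for M in range(1, 6):
    delta = (f(M) - f_approx(M)) / f(M)                  # relative error, Eq. (11)
    print(f"M = {M}: delta = {delta:.2e}")

# Monte-Carlo check of Eq. (9) for an example grouping (P_T = 1)
rng = np.random.default_rng(0)
sizes, trials = (1, 1, 2, 4), 200_000
s = np.zeros(trials)
for n_g in sizes:
    h_r = (rng.standard_normal((trials, n_g))
           + 1j * rng.standard_normal((trials, n_g))) / np.sqrt(2.0)
    h_t = (rng.standard_normal((trials, n_g))
           + 1j * rng.standard_normal((trials, n_g))) / np.sqrt(2.0)
    s += np.linalg.norm(h_r, axis=1) * np.linalg.norm(h_t, axis=1)
analytic = (sum(n**2 for n in sizes)
            + sum(f(a) * f(b) for i, a in enumerate(sizes)
                  for j, b in enumerate(sizes) if i != j))
print(f"Monte Carlo: {np.mean(s**2):.2f}, Eq. (9): {analytic:.2f}")
```

The relative error of the truncated series is already below \(10^{-2}\) at \(M=1\) and decays rapidly, which is the basis for the simplifications that follow.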
To show this, in Fig. 1, we report the relative error \[\delta=\frac{\left(\frac{\Gamma(M+1/2)}{\Gamma(M)}\right)^{2}-\left(M-\frac{1}{4}+\frac{1}{32M}\right)}{\left(\frac{\Gamma(M+1/2)}{\Gamma(M)}\right)^{2}}, \tag{11}\] for the first ten positive integers. Since \(\delta\) is negligibly small for any positive integer \(M\), we can approximate (9) as \[\mathrm{E}\left[\bar{P}_{R}\right]=\sum_{g=1}^{G}N_{g}^{2}+\sum_{g_{1}\neq g_{2}}\left(N_{g_{1}}-\frac{1}{4}+\frac{1}{32N_{g_{1}}}\right)\left(N_{g_{2}}-\frac{1}{4}+\frac{1}{32N_{g_{2}}}\right). \tag{12}\] By developing the product and reorganizing the terms, we get \[\mathrm{E}\left[\bar{P}_{R}\right]=\sum_{g=1}^{G}N_{g}^{2}+\sum_{g_{1}\neq g_{2}}\left(N_{g_{1}}N_{g_{2}}+\frac{1}{16}-\frac{N_{g_{1}}}{4}-\frac{N_{g_{2}}}{4}+\frac{N_{g_{1}}}{32N_{g_{2}}}+\frac{N_{g_{2}}}{32N_{g_{1}}}-\frac{1}{128N_{g_{1}}}-\frac{1}{128N_{g_{2}}}+\frac{1}{1024N_{g_{1}}N_{g_{2}}}\right). \tag{13}\] Recalling that \(N_{g}\geq 1\), \(\forall g\), the term \(1/(1024N_{g_{1}}N_{g_{2}})\) in (13) is negligible since it is at least 1024 times smaller than the term \(N_{g_{1}}N_{g_{2}}\). Thus, omitting \(1/(1024N_{g_{1}}N_{g_{2}})\) and completing the computations, we obtain \[\mathrm{E}\left[\bar{P}_{R}\right]=N^{2}+\frac{G\left(G-1\right)}{16}-\frac{N\left(G-1\right)}{2}+\sum_{g_{1}\neq g_{2}}\left(\frac{N_{g_{1}}}{32N_{g_{2}}}+\frac{N_{g_{2}}}{32N_{g_{1}}}-\frac{1}{128N_{g_{1}}}-\frac{1}{128N_{g_{2}}}\right). \tag{14}\] Observing that \[\sum_{g_{1}\neq g_{2}}\frac{N_{g_{1}}}{32N_{g_{2}}}=\sum_{g_{1}\neq g_{2}}\frac{N_{g_{2}}}{32N_{g_{1}}}=\sum_{g=1}^{G}\frac{N-N_{g}}{32N_{g}}, \tag{15}\] \[\sum_{g_{1}\neq g_{2}}\frac{1}{128N_{g_{1}}}=\sum_{g_{1}\neq g_{2}}\frac{1}{128N_{g_{2}}}=\sum_{g=1}^{G}\frac{G-1}{128N_{g}}, \tag{16}\] we can eventually rewrite \(\mathrm{E}\left[\bar{P}_{R}\right]\) as \[\mathrm{E}\left[\bar{P}_{R}\right]=N^{2}+\frac{G-1}{16}\left(G-8N\right)-\frac{G}{16}+\frac{4N-G+1}{64}\sum_{g=1}^{G}\frac{1}{N_{g}}. \tag{17}\] We notice from (17) that maximizing \(\mathrm{E}\left[\bar{P}_{R}\right]\) is equivalent to maximizing \(\sum_{g=1}^{G}1/N_{g}\). Thus, problem (6)-(7) can be equivalently expressed as \[\max_{N_{1},\ldots,N_{G}}~{}\sum_{g=1}^{G}\frac{1}{N_{g}} \tag{18}\] \[\mathrm{s.t.}~{}~{}N_{g}\geq 1,~{}\forall g,~{}\sum_{g=1}^{G}N_{g}=N, \tag{19}\] which is solved in the following proposition. Fig. 1: Relative error \(\delta\) for the first 10 positive integers \(M\). **Proposition 2**.: _The solution to problem (18)-(19) is given by_ \[N_{1}=N_{2}=\ldots=N_{G-1}=1, \tag{20}\] \[N_{G}=N-G+1, \tag{21}\] _up to a permutation of the group sizes._ Proof.: Please refer to Appendix D. Given the optimal group sizes \(N_{1},\ldots,N_{G}\) provided by Proposition 2, we can derive in closed form the expression of the desired Pareto frontier. Specifically, plugging (20) and (21) into (9), we obtain \[\mathrm{E}\left[\bar{P}_{R}\right]=G-1+\left(N-G+1\right)^{2}+\left(G-1\right)\left(G-2\right)\Gamma\left(3/2\right)^{4}+2\left(G-1\right)\left(\frac{\Gamma\left(N-G+3/2\right)\Gamma\left(3/2\right)}{\Gamma\left(N-G+1\right)}\right)^{2}, \tag{22}\] giving the maximum performance achievable with a BD-RIS architecture having \(G\) groups.
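The quality of the approximation (10) is also easy to confirm numerically by evaluating the Gamma-function ratio in log-space; this short Python check is our own addition and is not part of the original letter:

```python
import numpy as np
from scipy.special import gammaln

def chi_mean_sq(M):
    """(Gamma(M + 1/2) / Gamma(M))^2, computed stably via log-Gamma."""
    return np.exp(2 * (gammaln(M + 0.5) - gammaln(M)))

for M in range(1, 11):
    approx = M - 0.25 + 1 / (32 * M)                     # truncated series (10)
    delta = (chi_mean_sq(M) - approx) / chi_mean_sq(M)   # relative error (11)
    print(M, f"{delta:.2e}")
```

Already at \(M=1\) the relative error is of the order of \(10^{-3}\), consistent with Fig. 1.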
Finally, recalling that \(G=2N-C\), the expression of the maximum performance achievable with a circuit complexity \(C\in[N,2N-1]\) is given by \[\mathrm{E}\left[\bar{P}_{R}\right]=\left(C-N\right)^{2}+C+\left(2N-C-1\right)\left(2N-C-2\right)\Gamma\left(3/2\right)^{4}+2\left(2N-C-1\right)\left(\frac{\Gamma\left(C-N+3/2\right)\Gamma\left(3/2\right)}{\Gamma\left(C-N+1\right)}\right)^{2}, \tag{23}\] representing the Pareto frontier of the performance-complexity trade-off offered by BD-RISs. ## V Numerical Results In Fig. 2, we report the Pareto frontier given by (23), delimiting the region of feasible BD-RIS architectures with \(N=64\) elements.
Fig. 2: Pareto frontier for the performance-complexity trade-off achieved by BD-RISs, with \(N=64\).
This frontier is compared with the performance-complexity trade-off achieved by the BD-RIS architectures recently proposed [4, 8]. More precisely, we report group- and forest-connected RISs, both achieving a performance \[\mathrm{E}\left[\bar{P}_{R}^{\mathrm{Group}}\right]=\frac{N^{2}}{G}+G\left(G-1\right)\left(\frac{\Gamma\left(N/G+1/2\right)}{\Gamma\left(N/G\right)}\right)^{4}, \tag{24}\] and with complexity \(C^{\mathrm{Group}}=N(N/G+1)/2\) and \(C^{\mathrm{Forest}}=2N-G\), respectively. Note that (24) is obtained by setting \(N_{g}=N/G\), \(\forall g\), in (9). The forest-connected RISs in Fig. 2 have group sizes 2, 4, 8, and 16, while the group-connected RISs have group sizes 4, 8, and 16 since forest- and group-connected RISs with group sizes 2 are equivalent. Besides, we report fully- and tree-connected RISs, both achieving \[\mathrm{E}\left[\bar{P}_{R}^{\mathrm{Fully}}\right]=N^{2}, \tag{25}\] and with complexity \(C^{\mathrm{Fully}}=N(N+1)/2\) and \(C^{\mathrm{Tree}}=2N-1\), respectively, where (25) is derived by setting \(G=1\) in (24). Finally, we report the single-connected RIS architecture, achieving a performance given by \[\mathrm{E}\left[\bar{P}_{R}^{\mathrm{Single}}\right]=N+N\left(N-1\right)\Gamma\left(3/2\right)^{4}, \tag{26}\] and with complexity \(C^{\mathrm{Single}}=N\), where (26) is derived by setting \(G=N\) in (24). We make the following remarks. _First_, on the one hand, the single-connected RIS is the least complex architecture, achieving the lowest performance due to its limited architecture. On the other hand, the tree-connected RIS allows us to reach the performance upper bound, with the lowest possible complexity. _Second_, forest-connected RISs approach the Pareto frontier, but they are slightly suboptimal. This is because the groups in forest-connected RISs are equally sized, i.e., they all have group size \(N/G\). However, the optimal group sizes are not all equal, as given by (20)-(21). _Third_, the fully-connected (resp. group-connected) RIS achieves the same performance as the tree-connected (resp. forest-connected) RIS, but with higher circuit complexity. Thus, in SISO systems, fully- and group-connected RISs are highly suboptimal. Note that the exact shape of the Pareto frontier depends on the channel distribution. Specifically, with Rician or correlated channels, the gain of the tree-connected over the single-connected RIS decreases, and less complex BD-RIS architectures are expected to approach the performance upper bound [4, 7]. ## VI Conclusion We derive the Pareto frontier for the performance-complexity trade-off in BD-RISs. This frontier provides the BD-RIS architectures that can be optimally used to bridge between the single-connected RIS and the tree-connected RIS.
The presented fundamental results are expected to drive the development of novel BD-RIS architectures in future works. ### _Proof of Lemma 1_ Consider a BD-RIS with circuit complexity \(C\) whose corresponding graph \(\mathcal{G}\) is not a forest, i.e., it has at least one cycle [17]. According to (5), the received signal power achievable by a BD-RIS architecture solely depends on its number of groups \(G\), and their sizes \(N_{1},\ldots,N_{G}\). Thus, by removing one edge from a cycle in \(\mathcal{G}\), the resulting BD-RIS architecture has complexity \(C-1\) but achieves the same performance as the original one since its graph still has \(G\) connected components with sizes \(N_{1},\ldots,N_{G}\). Thus, the original BD-RIS architecture is not optimal. ### _Proof of Lemma 2_ Assume a graph \(\mathcal{G}\) to be a forest with \(G\) connected components, with the \(g\)th component including \(N_{g}\) vertices, with \(\sum_{g=1}^{G}N_{g}=N\). Since \(\mathcal{G}\) is a forest, each connected component is a tree and the \(g\)th component includes \(N_{g}-1\) edges [17]. Thus, the number of edges in \(\mathcal{G}\) is given by \[L=\sum_{g=1}^{G}\left(N_{g}-1\right)=N-G, \tag{27}\] proving that \(G=N-L\). ### _Proof of Proposition 1_ A BD-RIS architecture with \(N\) elements and \(C\) tunable impedance components has a graph with \(L=C-N\) edges since \(N\) tunable impedance components connect each RIS element to ground [8]. Besides, we know from Lemma 1 and Lemma 2 that the graph of the optimal BD-RIS architecture with \(N\) elements and complexity \(C\) is a forest, having \(G=N-L\) connected components. This proves that \(G=2N-C\). ### _Proof of Proposition 2_ The proof is conducted by induction on the number of groups \(G\). As the base case, we consider \(G=2\), where problem (18)-(19) boils down to \[\min_{N_{1},N_{2}} N_{1}N_{2} \tag{28}\] \[\mathrm{s.t.} N_{1}\geq 1,\;N_{2}\geq 1,\;N_{1}+N_{2}=N. \tag{29}\] The solution to this problem is clearly given by \(N_{1}=1\) and \(N_{2}=N-1\), or vice versa. The proposition is consequently verified for the case \(G=2\). As the induction step, we prove that if the proposition is valid for \(G-1\) groups, it also holds for \(G\) groups. To this end, we rewrite problem (18)-(19) as \[\max_{N_{1},\ldots,N_{G}} \sum_{g=1}^{G-1}\frac{1}{N_{g}}+\frac{1}{N_{G}} \tag{30}\] \[\mathrm{s.t.} N_{g}\geq 1,\;\forall g,\;\;\sum_{g=1}^{G-1}N_{g}=N-N_{G}. \tag{31}\] By the induction hypothesis, we have \[N_{1}=N_{2}=\ldots=N_{G-2}=1, \tag{32}\] \[N_{G-1}=N-N_{G}-G+2. \tag{33}\] Using (32) and (33), problem (30)-(31) can be simplified as \[\min_{N_{G-1},N_{G}} N_{G-1}N_{G} \tag{34}\] \[\mathrm{s.t.} N_{G-1}\geq 1,\;N_{G}\geq 1,\] (35) \[N_{G-1}+N_{G}=N-G+2, \tag{36}\] where the only unknowns are \(N_{G-1}\) and \(N_{G}\). By solving this problem as done for the base case, we obtain \(N_{G-1}=1\) and \(N_{G}=N-G+1\), proving the induction step.
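As a complement to the induction argument of Appendix D, the solution (20)-(21) of problem (18)-(19) can be cross-checked by exhaustive enumeration on small instances. The snippet below is our own sanity check, with \(N=10\) and \(G=3\) chosen arbitrarily:

```python
import itertools

def compositions(N, G):
    """All ordered group-size vectors (N_1, ..., N_G) with N_g >= 1 summing to N."""
    for cuts in itertools.combinations(range(1, N), G - 1):
        yield [b - a for a, b in zip((0,) + cuts, cuts + (N,))]

N, G = 10, 3
best = max(compositions(N, G), key=lambda p: sum(1 / n for n in p))
print(sorted(best))   # -> [1, 1, 8], matching (20)-(21)
```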
2309.06009
Content Reduction, Surprisal and Information Density Estimation for Long Documents
Many computational linguistic methods have been proposed to study the information content of languages. We consider two interesting research questions: 1) how is information distributed over long documents, and 2) how does content reduction, such as token selection and text summarization, affect the information density in long documents. We present four criteria for information density estimation for long documents, including surprisal, entropy, uniform information density, and lexical density. Among those criteria, the first three adopt the measures from information theory. We propose an attention-based word selection method for clinical notes and study machine summarization for multiple-domain documents. Our findings reveal the systematic difference in information density of long text in various domains. Empirical results on automated medical coding from long clinical notes show the effectiveness of the attention-based word selection method.
Shaoxiong Ji, Wei Sun, Pekka Marttinen
2023-09-12T07:08:22Z
http://arxiv.org/abs/2309.06009v1
# Content Reduction, Surprisal and Information Density Estimation for Long Documents ###### Abstract Many computational linguistic methods have been proposed to study the information content of languages. We consider two interesting research questions: 1) how is information distributed over long documents, and 2) how does content reduction, such as token selection and text summarization, affect the information density in long documents. We present four criteria for information density estimation for long documents, including surprisal, entropy, uniform information density, and lexical density. Among those criteria, the first three adopt the measures from information theory. We propose an attention-based word selection method for clinical notes and study machine summarization for multiple-domain documents. Our findings reveal the systematic difference in information density of long text in various domains. Empirical results on automated medical coding from long clinical notes show the effectiveness of the attention-based word selection method. ## 1 Introduction Long document comprehension is an arduous task in human language understanding. Information redundancy is becoming prevalent with the digitalization of individuals' records and the generation of massive user content. Natural language encodes information with words and syntax. From the viewpoint of information theory Shannon (1948), language transmits information over a bandwidth-limited noisy channel. Redundant information in long documents increases the cognitive load of readers, hinders the processing of texts, and probably affects the classification performance, especially for complex examples, in downstream domains. A rational language user tends to use information-dense phrases Levy and Jaeger (2006). The redundancy also increases the length of sequences, leading to extra computational costs for neural text encoders. Redundancy is linked to a reduced form of original content without sacrificing comprehension or cognition. For example, given a news categorization task and the sentence "The state of medical health records, and what deep learning can do to help", we can infer that the category is about health and technology from only a few key phrases such as "medical health records" and "deep learning", which have low word probabilities (high surprisal) in Figure 1(a). Information redundancy in long documents has been observed as a critical problem. Taking health text as an example, text redundancy in Electronic Health Records (EHR) has been widely recognized Wrenn et al. (2010). We illustrate the word probabilities of three examples of clinical notes in Figure 1(b) using a pretrained BERT base model Devlin et al. (2019), where informative words and less informative words (which tend to be redundant) are distributed at the two ends of the plots with large densities.
Figure 1: Illustrations of a) a text snippet and its surprisal modeled by the pretrained language model, and b) word probability distributions of three examples of clinical notes
Electronic clinical notes suffer from information redundancy mainly due to copy-and-paste in clinical notes. Moreover, different expressions exist for the same thing in the clinical context. A study on 23,630 clinical notes shows that 46% and 36% of text are copied and imported, respectively (Wang et al., 2017). Massive redundant information in clinical notes can lead to clinicians' burnout and increase medical coders' working hours (Montgomery et al., 2019).
Worse still, it can lead to other harms, such as inconsistencies in texts and error propagation during decision-making. Human reading comprehension can be robust to errors and redundancy (Hahn et al., 2019), while a robust neural text encoding model that can achieve human-level comprehension is a challenging research topic. Recent contextualized language models such as BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) have achieved exciting performance on many natural language processing tasks. Like human comprehension from prior experience or education, pretrained models have prior exposure to specific training corpora. However, most of them are limited to processing short sequences due to the quadratic complexity of the self-attention mechanism. For example, BERT is pretrained with a length of 128. Several efficient transformers have been proposed to solve the complexity issue to some extent (Tay et al., 2022). Nevertheless, an interesting question is how language models pretrained with short sequences transfer to long document representation learning. This paper studies the problem of long document encoding. We view text understanding as an abstraction process. Specifically, we mimic the abstraction via two concrete text processing approaches, i.e., attention-based word selection and (abstractive) machine summarization. We investigate two aspects: 1) content reduction to shorten long documents via attention-based word selection and automated text summarization with pretrained models; 2) information density estimation for original and content-reduced texts via a pretrained language model, surprisal model, entropy, uniform information density, and lexical density. Our contributions are as follows. * We investigate the systematic difference in information density in different domains (i.e., clinical texts, movie reviews, and news articles) before and after content reduction. * We propose a simple pipeline-based method powered by the label attention mechanism to select informative words from lengthy clinical notes and perform automated medical coding. * Our findings show that content reduction redistributes the information density of less standard text, such as clinical notes and movie reviews, and the information density reflects the downstream classification performance. Our empirical results also show that attention-based word selection can improve the performance of medical coding from clinical notes. ## 2 Information Density Estimation Similar to the definition of mass density in physics, information density in computational linguistics measures the human-readable information encoded per linguistic unit. One common metric to measure information density is the lexical density (Section 2.4), which describes the proportion of content words in a given corpus (Kalinauskaite, 2018). Psycholinguistic experiments have shown a link between information density and other issues such as readability and memory (Howcroft and Demberg, 2017). Generally, more grammatical words give less information, and lexical words such as nouns and verbs are more informative. We investigate the long document embeddings through the lens of information-theoretic estimation. Several measurements inspired by information theory are adopted in this study, including the surprisal model (Section 2.1), entropy (Section 2.2), and uniform information density (Section 2.3).
### Surprisal Model The surprisal model of human language processing describes the surprisal of a word given its prefix. Intuitively, cognitive efforts involved with text understanding should be proportional to word surprisal. The lexical-based surprisal measure (Eq. 1) in psycholinguistic evaluation (Hale, 2001; Levy, 2008) is defined as the negative logarithmic conditional probability of \(w_{k+1}\) given its preceding words (its so-called context). The surprisal is calculated as: \[S=-\log P\left(w_{k+1}\mid w_{1}\dots w_{k}\right), \tag{1}\] where \(w_{k}\) is the \(k\)-th word. The surprisal score quantifies the amount of surprise. A higher surprisal value means a word difficult to process or comprehend. An error word should be more surprising than the correct word. For example, the misspelled word _artial_ and letter-transposed word _atrila_ produce more surprisal and are more difficult to comprehend than the correct word _atrial_. Demberg et al. (2013) summarized two ways to estimate surprisal in psycholinguistic evaluation, i.e., lexical surprisal and structural surprisal. Lexical surprisal further considers two levels of word and part-of-speech, while structural surprisal depends on the syntax of sentence prefixes. Given a sentence \(u=\left[w_{1},\ldots,w_{t},\ldots,w_{n}\right]\) and a pretrained contextualized language model such as BERT parameterized by \(\boldsymbol{\theta}\), we can calculate the conditional probability of the \(t\)-th word \(w_{t}\) by applying the softmax transform on the \(t\)-th hidden representation \(\mathbf{h}_{t}\) as \[p_{\boldsymbol{\theta}}\left(w_{t}\mid w_{<t}\right)=\mathrm{softmax}\left(\mathbf{Wh}_{t}+\mathbf{b}\right), \tag{2}\] where \(\mathbf{W}\in\mathbb{R}^{|\mathcal{S}|\times d_{h}}\), \(\mathbf{b}\in\mathbb{R}^{|\mathcal{S}|}\), and \(|\mathcal{S}|\) is the vocabulary size of the target corpus \(\mathcal{S}\). Accordingly, there are two approaches to computing sentence-level surprisal. We use the n-gram model. For example, the 3-gram model is defined as \[P(s)=P\left(w_{1}\right)\times P\left(w_{2}\mid w_{1}\right)\times P\left(w_{3}\mid w_{2}w_{1}\right)\times\prod_{i=4}^{n}P\left(w_{i}\mid w_{i-1}w_{i-2}w_{i-3}\right).\] Noise in text, like typos or errors, can degrade the context of a word, leading to increased surprisal and greater difficulty of comprehension Hahn et al. (2019). We investigate the surprisal level of texts from different domains, particularly long documents, to understand the behavior of neural text encoders. ### Entropy Entropy estimation has been studied in many ways. Genzel and Charniak (2002) conducted an \(n\)-gram entropy estimate in three different ways, i.e., an \(n\)-gram probabilistic model, a probabilistic model induced by a statistical parser, and a non-parametric estimator. The authors proposed the constancy rate principle governing language generation. However, their local entropy estimate ignored the context. Bentz and Alikaniotis (2016) used entropy to measure the average information content of natural languages and conducted a quantitative analysis to investigate the systematic difference in word entropies across different languages. Inspired by these two works, we estimate entropy by utilizing pretrained contextualized language models to consider the context information and study the systematic difference in word entropies for long documents and their summaries across different domains.
The entropy of text \(s\) is defined as \[H(s)=-\sum_{i=1}^{n}P\left(w_{i}\right)\log\left(P\left(w_{i}\right)\right), \tag{3}\] where \(P(w_{i})\) is the probability of word \(w_{i}\). \(P(w_{i})\) can be approximated as \(P(w_{i})=\frac{f_{i}}{\sum_{j=1}^{n}f_{j}}\) from the frequency viewpoint, where \(f_{i}=\mathrm{freq}(w_{i})\) is the frequency of word \(w_{i}\). Our study approximates it as the conditional probability generated by a pre-trained language model given its context. ### Uniform Information Density The uniform information density (UID) hypothesis asserts that information encoding aims to transmit messages uniformly during language production Jaeger (2006, 2010). The intuition behind UID is to maximize the information transmission and minimize comprehension difficulty. The UID hypothesis aligns with the principle of language production, i.e., to avoid information overloading or being uninformative. The context plays an important role in the information density of sentences. If the context is considered, the information density of sentences is uniform; otherwise, it experiences an increase with the sentence number in local measures of entropy Genzel and Charniak (2002). Meister et al. (2021) quantifies the linguistic uniformity by defining the UID as \[\mathrm{UID}^{-1}(u)=\frac{1}{n}\sum_{i=1}^{n}\Delta\left(S\left(u_{i}\right),\mu_{c}\right) \tag{4}\] where \(\mu_{c}\) is an average information rate and \(\Delta(\cdot,\cdot)\) is a per-unit distance metric. From this viewpoint, UID can be regarded as a measure of how uniformly a sentence conveys its meaning. We investigate if the embeddings of long documents from pretrained language models adhere to the uniform information density hypothesis. ### Lexical Density We first use lexical readability to examine how difficult a document is to understand. We apply the Flesch reading ease score that was introduced for reading ease evaluation Kincaid et al. (1975). It is formulated as: \[206.835-1.015\left(\frac{\text{total words}}{\text{total sentences}}\right)-84.6\left(\frac{\text{total syllables}}{\text{total words}}\right),\] where the coefficients come from user studies. A higher score means easier to read. A score of 100 indicates the text is effortless to read, while a score ranging from 0 to 10 means the text is complicated to comprehend and needs professional knowledge. We transfer the readability test to some specific domains and provide a reference for lexical density estimation. We study lexical richness, which basically measures to what extent different words are used in the text. Many lexical richness measures calculate the proportion of unique words to evaluate the lexical diversity [13]. The widely used type-token ratio is calculated by the number of types divided by the number of tokens. We use one of its variants called the Herdan lexical richness measure proposed by Herdan (1960). Herdan lexical richness is defined as: \[C=\frac{\log V(N)}{\log N},\] where \(N\) is the number of tokens and \(V\) is the number of types. ## 3 Content Reduction We cast long document understanding as a generation process that comprehends the long documents and digests key messages as latent states. As a result, the understanding process generates some short versions of the original text but preserves the subject matter of original long documents. Specifically, we instantiate the generation process by two concrete instances, i.e., attention-based word selection and abstractive text summarization.
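For concreteness, the measures of Section 2 can be sketched with a small autoregressive language model. This is our own illustrative sketch: the study uses a pretrained BERT for word probabilities, whereas a causal model makes the prefix conditioning of Eq. (1) direct, and we take \(\Delta\) in Eq. (4) to be the squared distance from the mean surprisal; the Herdan helper assumes a pre-tokenized word list.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def density_measures(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    s = -logp[torch.arange(ids.size(1) - 1), ids[0, 1:]]   # surprisal, Eq. (1)
    p = torch.exp(-s)                                      # model-based P(w_i)
    entropy = (p * s).sum()                                # Eq. (3)
    uid_inv = ((s - s.mean()) ** 2).mean()                 # Eq. (4), squared distance
    return s.mean().item(), entropy.item(), uid_inv.item()

def herdan(words):
    """Herdan lexical richness C = log V(N) / log N (needs >= 2 tokens)."""
    return math.log(len(set(words))) / math.log(len(words))

print(density_measures("The state of medical health records."))
```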
The attention-based word selection described below is similar to extractive text summarization. However, it is not trained with a reference dataset of extraction-based summaries. ### Attention-based Word Selection We propose a simple and efficient pipeline-based word selection method powered by the label attention mechanism that prioritizes essential information in the hidden representation relevant to medical codes. The label attention uses dot product to calculate the attention score matrix \(\mathbf{A}\in\mathbb{R}^{n\times m}\) as: \[\mathbf{A}=\mathrm{Softmax}(\mathbf{H}\mathbf{U}), \tag{5}\] where \(\mathbf{H}\in\mathbb{R}^{n\times h}\) is the hidden features, \(\mathbf{U}\in\mathbb{R}^{h\times m}\) is the parameter matrix of the query, and \(m\) is the number of medical codes. We use mean pooling to obtain the attention vector \(\mathbf{a}\in\mathbb{R}^{n}\) for word selection, i.e., \(\mathbf{a}=\mathrm{MeanPooling}(\mathbf{A})\). Given a threshold or \(q\)-th quantile of the pooled attention score, we select words whose attention scores meet the selection criteria, and other words in a text are filtered out. This pipeline can be extended to various text feature extractors that capture sequential dependency and utilize label-aware representations from the label attention mechanism. ### Text Summarization Automated text summarization transforms lengthy documents into shortened paragraphs while preserving the overall meaning. Abstractive summarization summarizes the text differently rather than extracting some key sentences from the document. We utilize two advanced abstractive summarization models, i.e., pretrained BART [11] that is trained by learning to reconstruct arbitrarily corrupted text and T5 [12] based on a text-to-text framework. These two representative models have shown superior performance on several text summarization benchmarks. However, there exists one limitation of this study. As a reference dataset with human summarization is not available, we cannot tell which machine summarization model is the best.
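The selection step of Eq. (5) is compact enough to sketch directly. The following PyTorch fragment is our own illustration rather than the paper's code; the softmax dimension, tensor shapes, and the quantile value are assumptions on our side:

```python
import torch

def select_words(H, U, q=0.875):
    """Label-attention word selection, a sketch of Eq. (5).
    H: (n, h) hidden features; U: (h, m) label query matrix."""
    A = torch.softmax(H @ U, dim=0)      # (n, m); softmax over words (our choice)
    a = A.mean(dim=1)                    # mean-pool over the m codes
    keep = a >= torch.quantile(a, q)     # keep words above the q-th quantile
    return keep                          # boolean mask over the n words

H = torch.randn(1883, 768)               # e.g. one encoded MIMIC-III note
U = torch.randn(768, 50)                 # top-50 ICD-9 codes
mask = select_words(H, U)
```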
## 4 Results and Analyses ### Tasks and Datasets We conduct experiments on a long document classification task with three public datasets from different domains. A statistical summary of the datasets is shown in Table 1.
\begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & Avg. Length & Train & Validation & Test \\ \hline BBC News & 419 & 1,424 & 356 & 445 \\ IMDB & 698 & 1,553 & 350 & 1,791 \\ MIMIC-III & 1,883 & 8,066 & 1,573 & 1,729 \\ \hline \hline \end{tabular} \end{table} Table 1: A statistical summary of datasets
Medical Coding. Medical coding is a multi-label multi-class classification task that takes clinical notes from electronic health records as inputs and predicts medical codes of standard disease classification systems [11]. We use clinical notes from the MIMIC-III dataset [13] and adopt the data split of top-50 codes from Mullenbach et al. (2018) that assigns frequently used ICD-9 codes to discharge summaries. We use the BERT text encoder as the neural backbone to get text representations and learn label-aware features with a label attention mechanism to boost the performance of medical coding. News Topic Classification. The BBC News dataset Greene and Cunningham (2006) contains news articles in BBC News from 2004-2005. It is used for news topic classification in five topical areas, i.e., business, entertainment, politics, sport, and technology. Notice that we use our own data split as there is no standard data partition. Movie Review Sentiment Analysis. The IMDB movie review dataset Maas et al. (2011) has movie reviews posted on the IMDB website. As the average length of this dataset is relatively short, we select long reviews from the training and testing sets of the original data. Then, we split an additional validation set for the IMDB long review data. ### Results of Attention-based Selection for Medical Coding We present the results of the attention-based word selection method for medical coding. Specifically, we use this method to obtain selected texts whose sequence length is shorter than the original text. For the text encoder and code classifier, we adopt a recent medical coding model that utilizes recalibrated feature aggregation and multitask learning with focal loss Sun et al. (2023). After the word selection, we input the shortened text into a BERT-based medical coding model. We compare this pipeline-based model with the following models. The first category is the convolutional or recurrent neural network-based models. They are CAML Mullenbach et al. (2018) that uses a text convolutional neural network and label attention mechanism, GatedCNN-NCI Ji et al. (2021) that adopts gated convolutions and a note-code interaction module, and JointLAAT Vu et al. (2021) that utilizes bidirectional long short-term memory networks and a structured attention mechanism. The second category consists of BERT-based classifiers. We compare the truncated and hierarchical BERT Ji et al. (2021) with three domain adaptive BERT models, i.e., PubMedBERT Gu et al. (2020), BioBERT Lee et al. (2020) and ClinicalBERT Alsentzer et al. (2019). The third category is enhanced BERT-based models. MDBERT Zhang and Jankowski (2022) considers three-level hierarchical encoding. Two variants of MDBERT are also compared. MDBERT-SBERT removes sentence BERT, and MDBERT+avg uses model ensemble. Our method achieves better performance than simple BERT-based classifiers and comparable performance to simple MDBERT without sentence BERT, although slightly worse than the ensemble-based method (MDBERT+avg). Different word selection strategies also affect the performance of our method. Selection with the \(q\)-th quantile (\(q=0.875\)) is more flexible in selecting informative words and achieves better performance than the variant that only selects words with a fixed threshold for all documents. We choose the threshold that yields content-reduced text with an average length of 250. We compare the model performance by applying a scaling factor to the embeddings of selected words to verify the effect of attention-based word selection further. Specifically, the word embeddings multiplied by the scaling factor are penalized as a restricted input signal to the model. Intuitively, we use the scaled word embeddings to control the strength of selected words. A higher scaling factor means that the embeddings of selected words weigh more in the input text. Results in Figure 2 show that information-rich selected words contribute more to representing the content, and the predictive performance improves as the scaling factor increases. Then, we apply the scaling factor to both selected words and those not selected. Table 3 shows the performance drops after applying a scaling factor of 0.1.
Figure 2: Predictive performance on MIMIC-III dataset with different scaling factors applied on the embeddings of selected words
\begin{table} \begin{tabular}{l c c|c c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{AUC-ROC} & \multicolumn{2}{c}{F1} & \multirow{2}{*}{P@5} \\ & Macro & Micro & Macro & Micro & \\ \hline CAML & 87.5 & 90.9 & 53.2 & 61.4 & 60.9 \\ GatedCNN-NCI & 91.5 & 93.8 & 62.9 & 68.6 & 65.3 \\ JointLAAT & 92.5 & 94.6 & 66.1 & 71.6 & 67.1 \\ \hline PubMedBERT & 82.1 & 84.4 & 52.6 & 57.3 & 55.7 \\ BioBERT & 81.8 & 84.3 & 50.5 & 55.4 & 54.5 \\ ClinicalBERT & 82.3 & 85.3 & 50.6 & 56.9 & 55.7 \\ \hline MDBERT-SBERT & 91.1 & 93.1 & 64.4 & 68.1 & 64.3 \\ MDBERT & 91.8 & 93.6 & 65.9 & 69.2 & 65.4 \\ MDBERT+avg & 92.8 & 94.6 & 67.2 & 71.7 & 67.4 \\ \hline Ours (fixed) & 90.6 & 92.9 & 58.2 & 65.3 & 64.0 \\ Ours (\(q\) quantile) & 91.6 & 93.5 & 64.6 & 68.5 & 64.6 \\ \hline \hline \end{tabular} \end{table} Table 2: Attention word selection for medical coding on MIMIC-III dataset. “+avg” means averaging-based model ensemble. “fixed” and “\(q\) quantile” denote that we select words by a fixed threshold and \(q\) quantile.
\begin{table} \begin{tabular}{l c c|c c|c} \hline \hline \multirow{2}{*}{Scaling} & \multicolumn{2}{c}{AUC-ROC} & \multicolumn{2}{c|}{F1} & \multirow{2}{*}{P@5} \\ & Macro & Micro & Macro & Micro & \\ \hline Non-selected Words & -2.9 & -2.1 & -10.6 & -7.5 & -3.3 \\ Selected Words & -6.0 & -5.0 & -19.9 & -16.6 & -11.7 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance drop on MIMIC-III dataset when applying a scaling factor of 0.1 on selected and non-selected words
The results reveal that penalizing the signal strength of selected words with the scaling factor leads to a more significant performance drop than downweighting the non-selected words. These two studies indicate that attention-based word selection can extract words important to the model prediction. ### Results of Classification on Abstractive Summary Text summarization can be a way to alleviate the reader's workload by automatically extracting critical information from the text. We test the performance of text classification on the abstractive summary. Experimental results show it is hard to achieve better performance on summaries than the original texts. In several cases, the performance drop can even be 10%. We do not report those negative results in detail to avoid verbosity. ### Analysis of Surprisal, Entropy, and Uniform Information Density Surprisal. We draw the kernel density estimation plot for document-level mean surprisal in Figure 3. The figure also shows histograms normalized to the same scale as the density curves. The smooth density estimation curve is generated by summing the Gaussians of individual points of word probabilities. The figures show that long original documents tend to have a higher mean surprisal than attention-selected texts and summaries. The T5 summarization model significantly reduces the surprisal of MIMIC-III clinical notes and IMDB movie reviews. These results also align with the performance of downstream document classification tasks. With close surprisal distributions (Figure 3(c)), the downstream classification performance on the original text and summaries of BBC News is very close. In contrast, the performance on IMDB summary drops up to 10% when compared with the performance on original texts, which can partly be explained by the redistributed surprisal as shown in Figure 3(b).
Figure 3: Kernel density estimation plots for document-level mean surprisal
Entropy. Entropy describes the amount of information required to represent an event randomly drawn from the distribution. We estimate the document-level entropy to understand the information required to represent the text encoded by a language model. Figure 4 shows the kernel density estimation plots for document-level entropy of three datasets. We can clearly see that the original texts contain more information than summaries of all three datasets and content-reduced text via attention selection in the MIMIC-III dataset.
Figure 4: Kernel density estimation plots for document-level entropy
Uniform Information Density. Uniform information density describes how uniformly the information is distributed through the communication channel.
Figure 5 shows the kernel density estimation of UID of three datasets. In MIMIC-III, the UID distributions of the original text and attention-selected text overlap (Figure 5(a)), and the corpus mean UID of attention-selected text is slightly bigger than that of the original text. Thus, we can conclude that attention selection can extract informative parts from the original text and maintain the level of uniformity. Figure 5(b) shows that the summarization models generate IMDB review text with more uniformly distributed information density, while Figure 5(c) shows the overlapped UID of BBC News. Figure 5(d), showing corpus-level UID, further verifies that summarization models generate texts with more uniform information density.
Figure 5: Uniform information density. (a-c) document-level kernel density estimation plots and (d) mean UID in corpus level
### Analysis of Lexical Density We conduct a quantitative analysis of the lexical structure of the original texts and content-reduced texts and summaries. Many metrics have been used to evaluate the complexity of language. We choose two of those widely used metrics: lexical readability and lexical richness, which measure how readable and diverse the text is. We conduct this analysis to examine the change in lexical complexity before and after the content reduction via attention selection and machine summarization. We illustrate the lexical readability of each instance in the MIMIC-III dataset (Figure 6(a)) and the corpus mean of three datasets (Figure 6(c)) measured by the Flesch reading ease score. We use the instance indices of sorted scores of the original dataset to plot each instance. Figure 6(a) shows that MIMIC-III clinical notes are extremely hard to comprehend, with negative readability scores. With content reduction via attention-based selection and summarization, the readability scores of MIMIC-III clinical notes improve significantly but the notes are still very hard to read. As for text in the domains of newswire and movie review, abstractive summarization generates texts that are slightly easier or equally easy to read. Movie reviews contain several documents that are hard to read. Formal language in the news is more readable than clinical notes and user-generated movie reviews, which aligns with the finding of the Flesch test. The Flesch test tends to give a higher score for text with easy words. Figure 6(b) and Figure 6(d) illustrate the Herdan lexical richness of each instance in the MIMIC-III dataset and all three datasets at the corpus level. We can see that summarization with the BART model increases lexical richness by a considerable margin in IMDB and BBC News. Furthermore, attention-based word selection improves the richness of MIMIC-III clinical notes (Figure 6(b)). T5 tends to generate less lexically rich summaries. Content reduction increases the lexical density to some extent, especially for less standard text such as clinical notes. However, the behavior of different summarization models varies. Considering the performance boost brought by the attention-based word selection, we summarize that a simple pipeline method that condenses the lexical density of noisy texts (e.g., clinical notes) benefits the downstream classification task to some extent. More investigation is needed for the text summarization method via transfer learning. ### Discussion, Limitations and Future Work This study sheds light on the investigation of an attention-based word selection method for clinical notes and information density estimation on various summarization texts. Extractive summarization by attention selection has shown good downstream performance on the medical coding task. However, some critical issues remain unexplored; for example, what if attention-based word selection filters out negations and breaks the syntactic structure? Besides, statistical significance cannot be thoroughly evaluated given the limited number of training instances and domain data. We leave these unexplored problems for future work. We attempted some remedies for the existing limitations. There are no ground-truth values of surprisal. An alternative we used is to approximate it via pre-trained language models. Also, we cannot directly evaluate the quality of word selection and summarization due to the lack of a reference dataset. Instead, we evaluate it through downstream classification tasks. ## 5 Related Work Processing long documents with redundant information is burdensome. Many efforts have been made to estimate the redundancy in clinical notes and study the potential risks of redundancy in a retrospective manner. Wrenn et al. (2010) quantified the redundancy in the clinical document by measuring the amount of new information and showed information duplication between document types. Zhang et al. (2011) studied several methods for measuring the redundancy in clinical texts. In the clinical domain, vocabulary and errors are relatively rare compared with generic texts. Searle et al. (2021) showed that clinical text is less efficient in encoding information than open-domain text from the perspective of information theory and observed that some clinical notes in the MIMIC database could be 97-98% redundant. Levy and Jaeger (2006) investigated the possibility of uniformity maximization of information density through syntactic reduction. Meister et al. (2021) revisited the uniform information density hypothesis and interpreted the hypothesis as the regression to the mean information of a language. Information density estimation is an important research task. Several works have been done to automatically measure text information density, such as from the perspective of lexical and syntactic features Kalinauskaite (2018). Horn et al. (2013) used an open information extraction system to extract facts and applied factual density, calculated as the number of facts divided by the document size, to measure the informativeness of web documents. ## 6 Conclusion Long document processing is challenging for many reasons, such as the difficulty of capturing long-term dependency and noise in long texts. This paper studies the encoding of long documents via information density estimation and empirical analyses on content reduction.
We systematically show the difference in information density between original long documents and content-reduced texts. We improve the performance of automated medical coding by using selected words as inputs when compared with simple baselines that use the same neural backbone. We validate that careful word selection picks out words that redistribute the distributions of word probability and entropy. Our study takes a positive step towards understanding language model-based long document encoding.
Figure 6: Lexical readability measured by Flesch reading ease score and Herdan lexical richness of MIMIC-III at instance level and all three datasets at corpus level
### Limitations As an empirical study, this paper does not outperform state-of-the-art methods such as the ensemble-based hierarchical model (Zhang and Jankowski, 2022). Our analyses focus on standard self-attention-based transformer networks and masked language models. Recent efficient transformers and pretrained models with other language modeling objectives are not considered in this study. We leave them for future work.
2309.11575
Distilling Adversarial Prompts from Safety Benchmarks: Report for the Adversarial Nibbler Challenge
Text-conditioned image generation models have recently achieved astonishing image quality and alignment results. Consequently, they are employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the web, they also produce unsafe content. As a contribution to the Adversarial Nibbler challenge, we distill a large set of over 1,000 potential adversarial inputs from existing safety benchmarks. Our analysis of the gathered prompts and corresponding images demonstrates the fragility of input filters and provides further insights into systematic safety issues in current generative image models.
Manuel Brack, Patrick Schramowski, Kristian Kersting
2023-09-20T18:25:44Z
http://arxiv.org/abs/2309.11575v1
# Distilling Adversarial Prompts from Safety Benchmarks: Report for the Adversarial Nibbler Challenge ###### Abstract Text-conditioned image generation models have recently achieved astonishing image quality and alignment results. Consequently, they are employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the web, they also produce unsafe content. As a contribution to the _Adversarial Nibbler_ challenge, we distill a large set of over \(1{,}000\) potential adversarial inputs from existing safety benchmarks. Our analysis of the gathered prompts and corresponding images demonstrates the fragility of input filters and provides further insights into systematic safety issues in current generative image models. Warning: This paper contains sexually explicit imagery, discussions of pornography, and other content that some readers may find disturbing, distressing, and/or offensive. ## 1 Introduction Next to text-generative models, image-generative models are becoming increasingly prevalent and seeing growing adoption in commercial services such as stock imagery and graphic design. Due to large-scale unsupervised learning, they retain general knowledge implicitly present in the data and are able to generate high-fidelity images that are faithful interpretations of users' prompts. However, this training setup, which utilizes large-scale unfiltered data (Schuhmann et al., 2022; Birhane et al., 2021), also leads to degenerated and biased behavior (Schramowski et al., 2023), calling for mitigation strategies and the moderation of generative models in deployed systems. Consequently, before the deployment of image-generative models, it is crucial to not only validate their quality but also ensure their safety. This necessitates the assessment of appropriate guardrails, which should be tailored to the specific application at hand. Previous work in this domain has primarily relied on anecdotal evidence, lacking quantifiable measures that consider multiple models and architectures. In order to address this issue, Parrish et al. (2023) proposed the _Adversarial Nibbler_ challenge. The authors aim to curate an evaluation dataset of adversarial inputs against text-to-image models through a crowdsourcing effort. Here, we analyze existing benchmarking efforts on image generation safety to identify adversarial prompts suitable for _Adversarial Nibbler_. Indeed, Schramowski et al. (2023) proposed the _inappropriate image prompts_ (I2P) dataset1 but limited their evaluation to a single Stable Diffusion version (Rombach et al., 2022). Subsequent research of Brack et al. (2023) presented a more comprehensive analysis of inappropriate degeneration across 11 different models, all of which were capable of generating inappropriate content at scale. Consequently, the I2P dataset is a vital benchmark in assessing the effectiveness of concept erasure techniques (Gandikota et al., 2023; Heng and Soh, 2023; Kim et al., 2023; Chin et al., 2023). Footnote 1: [https://huggingface.co/datasets/AIML-TUDA/i2p](https://huggingface.co/datasets/AIML-TUDA/i2p) This report investigates the automatically scraped prompts of the I2P benchmark in more detail.
Figure 1: Exemplary adversarial prompt distilled from the I2P benchmark. The coded expression of a _'james bond movie gadget'_ yields an image of a firearm being held by a child. (Best viewed in color)
Specifically, we identify over \(1{,}000\) prompts eliciting the generation of inappropriate content, although they were not blocked by currently deployed input filters. Consequently, this set of derived prompts can be used as adversarial inputs for evaluating corresponding guardrails. Our analysis of this prompt set provides valuable insights into the subjectivity of safety and the fragility of automatic input filters. Importantly, we identify concise terms and prompt structures that often seem benign but create unsafe images. ## 2 Experimental Analysis The I2P benchmark consists of over \(4{,}700\) real-world user prompts scraped from the initial Stable Diffusion discord. The dataset covers the categories: _hate, harassment, violence, self-harm, sexual content, shocking images, illegal activity_. Each prompt is annotated with a probability of generating inappropriate material based on images generated with Stable Diffusion. The generated images were automatically assessed for their inappropriateness using the Q16 (Schramowski et al., 2022) and NudeNet 2 classifiers. While these prompts are disproportionately likely to generate inappropriate content, the underlying hosting solution for Stable Diffusion was not subject to any input filters. Consequently, a large portion of these prompts will explicitly contain inappropriate concepts and thus not qualify for adversarial purposes. Footnote 2: [https://github.com/notAI-tech/NudeNet](https://github.com/notAI-tech/NudeNet) Thus, as a first pre-processing step, we aim to extract the prompts that appear benign from the dataset. To this end, we checked all prompts against currently deployed guardrails for popular image generation models. Specifically, here, we used a list of 800 banned words3 of the popular Midjourney4 image generation model. Footnote 3: [https://decentralizedcreator.com/list-of-banned-words-in-midjourney-discord/](https://decentralizedcreator.com/list-of-banned-words-in-midjourney-discord/) Footnote 4: [https://www.midjourney.com/home/](https://www.midjourney.com/home/) Overall, 34% of I2P prompts would have been blocked by Midjourney's prompt filter, with further details shown in Fig. 2. In general, prompts with a higher probability of producing inappropriate content--as measured for Stable Diffusion--also contain banned words more frequently (Fig. 2(a)). This observation supports the intuition that a decent percentage of prompts with high inappropriate likelihoods contain explicit mentions of related concepts. Additionally, there exists a significant discrepancy between the number of banned prompts per category (Fig. 2(b)). The percentage of blocked prompts is almost 4x higher for _sexual_ than for _hate_. This difference can be attributed to a clear focus of the ban-list on sexually charged terms, as discussed below. We argue that those prompts, which are reasonably likely to generate inappropriate material--here \(\geq\) 50%--and are not caught by the deployed input filter, are good candidates for adversarial testing. In the case of the I2P benchmark, this leaves us with roughly \(1{,}100\) prompts which we share with the community5. We present an example of an adversarial input from this set in Fig. 1. Footnote 5: Anonymous link: [https://figshare.com/s/3a67fb80511575c0fd93](https://figshare.com/s/3a67fb80511575c0fd93) Figure 2: Analysis of prompts contained in the I2P dataset, blocked by the Midjourney input filter.
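To make the filtering step concrete, and to anticipate the fragility discussed next, a ban-list filter can be sketched as a simple substring check. This toy sketch is ours; the three terms stand in for the roughly 800-word Midjourney list and are drawn from the examples discussed in Section 3:

```python
BANNED = {"nude", "no shirt", "blood"}          # tiny stand-in for the full ban-list

def blocked(prompt: str) -> bool:
    p = prompt.lower()
    return any(term in p for term in BANNED)

print(blocked("a nude portrait"))               # True  -- exact banned term
print(blocked("a shirtless, bleeding figure"))  # False -- related terms pass
print(blocked("a nudde portrait"))              # False -- one-letter misspelling passes
```

Even this minimal example shows how morphological variants and trivial misspellings slip through exact-match filtering.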
## 3 Observations Subsequently, we provide more detailed insights into the set of candidate prompts derived above. Subjectivity of (Un-)Safety. A closer look at the collected prompts and generated images highlights the subjectivity of what is considered inappropriate or unsafe. The definition of safety can differ based on context, setting, cultural and social predisposition, and individual factors. For example, a significant portion of prompts produce decidedly disturbing images (cf. Fig. 3(a)). However, the comparatively narrow definition of safety in the _Adversarial Nibbler_ challenge would probably not consider it unsafe, while the authors of the I2P benchmark included disturbing material in their definition of inappropriateness. Fragility of Prompt Filters. The remaining prompts clearly demonstrate the severe limitations of ban-list-based input filters. We identified several simple misspellings of prohibited words bypassing filters while still being able to produce unsafe material. Additionally, we observed multiple cases where related terms were not included in the filter. For example, the ban list contains _'nude'_ but not _'nudity'_, _'no shirt'_ but not _'shirtless'_, and _'blood'_ but not _'bleeding'_. Other concepts prominent in our prompt set were outright missing from the list, including _'violent'_, _'robbery'_ or _'murder'_. Lastly, we found multiple concepts semantically correlated to prohibited terms not included in the ban-list, such as _'fighting'_ or _'riot'_. These observations highlight the ineffectiveness of input ban-lists as sole safety guardrails since designing holistic filters proves virtually impossible. At best, they can be beneficial as the first step of a more comprehensive system of safety measures. Contextual Inappropriateness. Additionally, we discovered a large set of prompts producing images that are generally safe when considered in isolation but might be inappropriate in a larger context. Specifically, over 100 prompts--equalling roughly 10% of the considered set--pertain to persons of public interest, most of them related to Donald Trump (80 prompts). Most generated images depict these persons in situations potentially harmful to their public image. We present some examples in Fig. 3(b) showing Donald Trump as a homeless person or in jail.
Figure 3: Examples demonstrating the subjectivity of unsafe images which is largely dependent on context as well as individual factors. Corresponding prompts are shown above the images. (Best viewed in color)
Patterns of Unsafety. Lastly, we discovered multiple systematic issues leading to unsafe imagery. Unfortunately, our results confirm observations of previous work that sexually explicit imagery of women is remarkably easy to produce with seemingly safe prompts (Birhane et al., 2021; Schramowski et al., 2023). We showcase some examples in Fig. 4. Benign adjectives like _'attractive'_ and _'gorgeous'_ in combination with non-erotic terms such as _'torso'_ or _'roundings'_ appear to be so closely correlated with explicit material that they reliably generate nude women. Furthermore, we observed encoded racial fetishes (Zheng, 2016; Schramowski et al., 2023), sexualizing Asian women specifically.
Figure 4: Demonstration of easily generated, sexually explicit imagery. Note that even if the prompts appear benign, they are highly likely to generate sexually explicit content. Corresponding prompts are shown above the images. Explicit nudity was censored by the authors using black masks. (Best viewed in color)
Lastly, a common pattern in the construction of prompts was the usage of certain artists who commonly create art that might be considered unsafe in other contexts. For example, prompts as simple as _'photograph by vanessa beecroft'_ will yield naked women and depictions of anorexia in the vast majority of generated images. Even when embedded in longer prompts, these triggers can be easily utilized to enforce unsafe concepts within the generation. ## 4 Conclusion In this work, we investigated the usability of automatically crawled prompts from safety benchmarks for adversarial evaluations. We demonstrated that large numbers of potentially adversarial prompts can be extracted from datasets like I2P (Schramowski et al., 2023). Our detailed analysis of the distilled prompts highlights the fragility of input filtering and motivates further research on designing and evaluating safe generative systems. ## Acknowledgments We gratefully acknowledge support by the German Center for Artificial Intelligence (DFKI) project "SAINT", the Federal Ministry of Education and Research (BMBF) project "AISC" (GA No. 01IS22091), and the Hessian Ministry for Digital Strategy and Development (HMinD) project "AI Innovationlab" (GA No. S-DIW04/0013/003). This work also benefited from the ICT-48 Network of AI Research Excellence Center "TAILOR" (EU Horizon 2020, GA No 952215), the Hessian Ministry of Higher Education, Research and the Arts (HMWK) cluster projects "The Adaptive Mind" and "The Third Wave of AI", and benefited from the National High-Performance Computing project for Computational Engineering Sciences (NHR4CES).
2309.15713
Tunneling effect between radial electric wells in a homogeneous magnetic field
We establish a tunneling formula for a Schr\"odinger operator with symmetric double-well potential and homogeneous magnetic field, in dimension two. Each well is assumed to be radially symmetric and compactly supported. We obtain an asymptotic formula for the difference between the two first eigenvalues of this operator, that is exponentially small in the semiclassical limit.
Léo Morin
2023-09-27T15:09:45Z
http://arxiv.org/abs/2309.15713v1
# Tunneling effect between radial electric wells in a homogeneous magnetic field ###### Abstract. We establish a tunneling formula for a Schrodinger operator with symmetric double-well potential and homogeneous magnetic field, in dimension two. Each well is assumed to be radially symmetric and compactly supported. We obtain an asymptotic formula for the difference between the two first eigenvalues of this operator, that is exponentially small in the semiclassical limit. ## 1. Introduction This article is devoted to the spectral analysis of electromagnetic Schrodinger operators with symmetries. Without magnetic fields, it is known that symmetries of the potential induce tunneling. This translates into a spectral gap between the two first eigenvalues that becomes exponentially small in the semiclassical limit. This effect was studied in [11, 12] where the spectral gap was estimated (see also the books [8, 4]). When adding a homogeneous magnetic field to this model, first general results were obtained in [13] (for weak magnetic fields) and [14] (upper bounds on the spectral gap). The problem was recently reconsidered in [5], where some upper and lower bounds on the spectral gap are obtained. These bounds were improved in [9], giving a sharp exponential decay rate. In this paper we consider a potential with two symmetric, radial, and compactly supported wells, as in [5, 9]. We prove a sharp estimate on the spectral gap in presence of a constant magnetic field. As explained in these two references, the spectral gap is given by an integral term measuring the interaction between the wells, the _hopping coefficient_. However, we suggest here another approach to estimate this coefficient, which is closer to the original spirit of [11]. This approach provides us with a shorter proof and better estimates. A similar strategy was recently implemented to prove a purely magnetic tunneling formula, between radial magnetic wells [7]. The case of multiple potential wells was also considered in [13] (without magnetic field) and recently adapted to the magnetic case in [10]. In these articles it is explained how to reduce the problem to many double-well interactions. Therefore, the result we present below should also have applications to that setting. We consider the following Schrodinger operator acting in the plane \(\mathbf{R}^{2}\), \[\mathcal{H}_{h}=(-ih\nabla-\mathbf{A})^{2}+V, \tag{1.1}\] where \(V\) is a double-well potential, and \(\mathbf{A}\) is a vector potential generating a uniform magnetic field of strength \(B>0\), i.e. \(\nabla\times\mathbf{A}=B\). Without loss of generality, we choose the gauge \[A(x,y)=(0,Bx). \tag{1.2}\] The double-well potential is a sum of two disjoint single wells, \[V(x,y)=v(x-L,y)+v(x+L,y), \tag{1.3}\] separated by a distance \(2L>0\). On the single well we make the following assumptions. **Assumption 1**.: _We assume that_ * \(v\) _is smooth, compactly supported, non-positive, and radial._ * \(v\) _admits a unique minimum_ \(v_{0}<0\) _reached at_ \(0\)_, and it is non-degenerate._ Let \(a>0\) denote the radius of the support of \(v\), i.e. the smallest positive number such that \(\operatorname{supp}(v)\subset\overline{B}(0,a)\). We exploit the radiality of the single well problem. We denote by \(\varphi\) the ground state of the single well Hamiltonian, \[\mathcal{H}_{h}^{sw}=(-ih\nabla-\mathbf{A}^{\operatorname{rad}})^{2}+v,\] in radial gauge \(\mathbf{A}^{\operatorname{rad}}(x,y)=\frac{B}{2}(-y,x)\). 
Then \(\varphi\) is radial, as explained in [9, Section 2.2], and the function \(u(|X|)=\varphi(X)\) satisfies the equation \[-h^{2}\frac{1}{r}\partial_{r}(r\partial_{r}u)+\Big{(}\frac{B^{2}r^{2}}{4}+v(r)\Big{)}u=\mu_{h}u, \tag{1.4}\] where \(\mu_{h}\) is the eigenvalue associated with \(\varphi\). Hence, the study of the single-well problem is reduced to a radial Schrodinger equation with effective potential \(v_{B}(r)=\frac{B^{2}r^{2}}{4}+v(r)\). As recalled in [9], this is a very standard setting. The decay of \(\varphi\) can be optimally measured in terms of the Agmon distance, \[d(r_{1},r_{2})=\int_{r_{1}}^{r_{2}}\sqrt{\frac{B^{2}r^{2}}{4}+v(r)-v_{0}}\,\mathrm{d}r. \tag{1.5}\] Moreover, we have explicit WKB approximations for \(\varphi\) (see Lemma 4 below). However, the double well problem _is not_ directly reduced to a Schrodinger equation with potential \(v_{B}\). If that were the case, the spectral gap would have an exponential rate of decay given by \(S_{0}=2d(0,L)\), as in the non-magnetic situation [11]. Here, consistently with [9], we prove that the spectral gap is much smaller. **Theorem 1**.: _Let \(v\) be a single well potential satisfying Assumption 1, and let \(\mathcal{H}_{h}\) be as in (1.1), with \(V\) given by (1.3). Also assume that \(L>(1+\frac{\sqrt{3}}{2})a\). Then the spectral gap between the two smallest eigenvalues of \(\mathcal{H}_{h}\) satisfies_ \[\lambda_{2}(h)-\lambda_{1}(h)=2C(L,B,v)h^{\frac{1}{2}}e^{-\frac{S}{h}}(1+o(1))\] _as \(h\to 0\), for some constant \(C(L,B,v)>0\), and with_ \[S=2d(0,L)+\int_{0}^{L}\sqrt{\frac{B^{2}(2L-r)^{2}}{4}-v_{0}}-\sqrt{\frac{B^{2}r^{2}}{4}-v_{0}}\,\mathrm{d}r. \tag{1.6}\] _The constant \(C(L,B,v)\) can be computed explicitly, even though we have no simple interpretation, see (4.15)._ Theorem 1 is the first result establishing an asymptotic formula for the spectral gap of \(\mathcal{H}_{h}\). First results in this direction appeared in a general framework in [13], when the magnetic field is weak. Then first bounds on \(S\) were obtained in [14, 5]. Recently the sharp exponential decay rate (1.6) was found in [9], where it is proven that \[h\ln(\lambda_{2}(h)-\lambda_{1}(h))\sim-S.\] Another related problem is the tunneling between purely magnetic wells, when there is no potential and the confinement is generated by a non-homogeneous magnetic field. This problem was studied in [7] in the case of radial wells, using a similar strategy. Finally, other magnetic tunneling estimates have recently been established in the case of Neumann boundary conditions [3], vanishing magnetic fields [1], or discontinuous magnetic fields [6]. See also [2] in a one-dimensional setting. _Remark 1_.: The additional integral term in \(S\) is a purely magnetic effect, with no analogue in the non-magnetic case, thus making this model especially interesting. It is directly related to oscillations of the eigenfunctions. Indeed, the eigenfunctions \(\varphi_{\ell}\) and \(\varphi_{r}\) generated by the left and right well respectively are related by a magnetic translation (see (3.1)), \[\varphi_{\ell}(x,y)=e^{\frac{i}{h}BLy}\varphi_{r}(x+2L,y). \tag{1.7}\] Thus, they do not oscillate at the same frequency and the relative oscillation is fast as \(h\to 0\). This rapidly oscillating phase appears in the hopping coefficient, making it smaller than what one could expect (see Section 4). _Remark 2_.: The condition \(L>(1+\frac{\sqrt{3}}{2})a\) is a limit of our strategy, which relies entirely on the radiality of the single well. 
Indeed, the problem centered on the left well is only radial up to distance \(2L-a\), and not \(2L\). This slight difference makes our decay estimates on the eigenfunctions non-optimal (see Lemma 7). To overcome this problem, we would need to understand very precisely the non-radial situation. _Remark 3_.: We recover the bounds on \(S\) stated in [5, 9] as follows. The result [9, Theorem 1.2] can be translated as \[d(0,2L-a)+d(0,a)\leq S\leq d(0,2L), \tag{1.8}\] which follows from \[S-d(0,2L)=\int_{0}^{L}\sqrt{\frac{B^{2}r^{2}}{4}+v-v_{0}}-\sqrt{\frac{B^{2}r^{2}}{4}-v_{0}}\,\mathrm{d}r\leq 0, \tag{1.9}\] and \[S-\left(d(0,2L-a)+d(0,a)\right)=\int_{0}^{a}\sqrt{\frac{B^{2}(2L-r)^{2}}{4}-v_{0}}-\sqrt{\frac{B^{2}r^{2}}{4}-v_{0}}\,\mathrm{d}r\geq 0. \tag{1.10}\] Moreover, the main result from [5] is \[BL^{2}-BLa\leq S\leq BL^{2}+2\sqrt{|v_{0}|}L+\gamma_{0}, \tag{1.11}\] which follows from (1.8) with \(\gamma_{0}=\int_{0}^{a}\sqrt{v-v_{0}}\,\mathrm{d}r\). _Strategy._ In Section 2 we describe the single well problem. Most of this section is standard since the ground state solves a radial Schrodinger equation without magnetic field. In Section 3 we prove that the spectral gap is given by the hopping coefficient. Our proof is inspired by the non-magnetic situation in [4, 11, 8], and is somewhat simpler than the one of [5, 9]. Finally, in Section 4 we estimate the hopping coefficient, thus proving Theorem 1. ## 2. The single well problem Let \(\varphi\) be the normalized ground state for the single well problem in radial gauge, \[(-ih\nabla-\mathbf{A}^{\mathrm{rad}})^{2}\varphi+v\varphi=\mu_{h}\varphi, \tag{2.1}\] with \[\mathbf{A}^{\mathrm{rad}}(x,y)=\frac{B}{2}(-y,x). \tag{2.2}\] We first collect the following basic facts about \(\varphi\). **Proposition 2**.: _The ground state energy \(\mu_{h}\) of \((-ih\nabla-\mathbf{A}^{\mathrm{rad}})^{2}+v\) satisfies_ \[\mu_{h}=v_{0}+h\sqrt{B^{2}+2v^{\prime\prime}(0)}+\mathcal{O}(h^{2}).\] _The associated eigenfunction \(\varphi\) is radial and real-valued. Moreover, the first excited eigenvalue \(\mu_{h,1}\) satisfies_ \[\mu_{h,1}=v_{0}+3h\sqrt{B^{2}+2v^{\prime\prime}(0)}+\mathcal{O}(h^{2}).\] A detailed proof of Proposition 2 is given in [9, Section 2.3]. It follows from a harmonic approximation of \(v\) near its minimum. Indeed, when \(v\) is quadratic, the problem becomes more explicit and we can prove that the minimizer is radial. The harmonic approximation gives radiality when \(v\) has a non-degenerate minimum. The function \(\varphi\) being radial, equation (2.1) can be rewritten in polar coordinates, using the notation \(\varphi(X)=u(|X|)\), \[-h^{2}\frac{1}{r}\partial_{r}\big{(}r\partial_{r}u\big{)}+\frac{B^{2}r^{2}}{4}u+vu=\mu_{h}u. \tag{2.3}\] This is a radial Schrodinger equation, with effective potential \(v_{B}(r)=\frac{B^{2}r^{2}}{4}+v(r)\). From this observation one deduces Proposition 2, by harmonic approximation of \(v_{B}\) near its minimum (see [8, Chapter 2] for instance). Moreover, the ground state \(\varphi\) has the following WKB approximation ([8, Theorem 2.3.1]). 
**Proposition 3**.: _The ground state \(\varphi\) of the single well problem has the following WKB approximation,_ \[\Big{|}e^{\frac{d(0,|X|)}{h}}\varphi(X)-h^{-\frac{1}{2}}K(|X|)\Big{|}=\mathcal{O}(h^{\frac{1}{2}}),\] _uniformly on any compact, where \(d\) is the Agmon distance introduced in (1.5), and_ \[K(r)=\sqrt{\frac{B^{2}+2v^{\prime\prime}(0)}{4\pi}}\exp\Big{(}-\int_{0}^{r}\frac{v_{B}^{\prime}(s)}{4(v_{B}(s)-v_{0})}+\frac{1}{2s}-\frac{\sqrt{B^{2}+2v^{\prime\prime}(0)}}{2\sqrt{v_{B}(s)-v_{0}}}\mathrm{d}s\Big{)}. \tag{2.4}\] As observed in [5, Equation (2.9)], the function \(\varphi\) also has an explicit integral formula outside the support of \(v\), since equation (2.3) is related to some special functions. **Lemma 4**.: _The normalized ground state \(\varphi\) of the single well problem, solution of (2.1), is radial. Moreover, it has the following integral formula for \(|X|>a\),_ \[\varphi(X)=C_{h}\exp\Big{(}-\frac{B|X|^{2}}{4h}\Big{)}\int_{0}^{\infty}\exp\Big{(}-\frac{B|X|^{2}t}{2h}\Big{)}t^{\alpha-1}(1+t)^{-\alpha}\mathrm{d}t, \tag{2.5}\] _where \(\alpha=\frac{1}{2}-\frac{\mu_{h}}{2Bh}\), and \(C_{h}\) is a normalization constant. Moreover,_ \[C_{h}\sim h^{-1}K(L)\sqrt{\frac{f^{\prime\prime}(t_{L})}{2\pi}}t_{L}^{1-\nu}(1+t_{L})^{\nu}e^{\frac{\tilde{d}(L)-d(0,L)}{h}}\quad\text{as}\quad h\to 0, \tag{2.6}\] _where \(\tilde{d}(L)=\int_{0}^{L}\sqrt{\frac{B^{2}r^{2}}{4}-v_{0}}\,\mathrm{d}r\), the Agmon distance \(d(0,L)\) was defined in (1.5), \(t_{L}\) and \(f^{\prime\prime}(t_{L})\) are given in (2.12), and \(\nu=\frac{1}{2}-\frac{\sqrt{B^{2}+2v^{\prime\prime}(0)}}{2B}\)._ Proof.: Note that \(\varphi\) satisfies equation (2.3) which can be solved by special functions outside the support of \(v\) (it is related to Kummer functions, see [5]). To estimate \(C_{h}\), we combine the WKB approximation from Proposition 3, \[\varphi(L)=h^{-\frac{1}{2}}K(L)e^{-\frac{d(0,L)}{h}}(1+o(1)), \tag{2.7}\] with an estimate of (2.5) using the Laplace method. Indeed, using that \[\alpha=\frac{|v_{0}|}{2Bh}+\nu+o(1), \tag{2.8}\] we find \[\varphi(L)=C_{h}\int_{0}^{\infty}\exp\Big{(}-\frac{f(t)}{h}\Big{)}t^{\nu-1}(1+t)^{-\nu}\mathrm{d}t\,\big{(}1+o(1)\big{)}, \tag{2.9}\] with \[f(t)=\frac{BL^{2}}{4}(1+2t)+\frac{|v_{0}|}{2B}\ln\Big{(}\frac{1+t}{t}\Big{)}. \tag{2.10}\] The first and second derivatives of \(f\) are \[f^{\prime}(t)=\frac{BL^{2}}{2}-\frac{|v_{0}|}{2B}\frac{1}{t(1+t)},\qquad f^{\prime\prime}(t)=\frac{|v_{0}|}{2B}\frac{2t+1}{t^{2}(1+t)^{2}}. \tag{2.11}\] In particular, \(f\) has a unique critical point \(t_{L}>0\), which is also a global minimum, \[t_{L}=\frac{1}{2}\Big{(}-1+\sqrt{1+\frac{4|v_{0}|}{B^{2}L^{2}}}\,\Big{)},\quad\text{with}\quad f^{\prime\prime}(t_{L})=\frac{B^{2}L^{3}}{2|v_{0}|}\sqrt{B^{2}L^{2}+4|v_{0}|}. \tag{2.12}\] By the Laplace method, we deduce \[\varphi(L)=C_{h}\sqrt{\frac{2\pi h}{f^{\prime\prime}(t_{L})}}e^{-\frac{f(t_{L})}{h}}t_{L}^{\nu-1}(1+t_{L})^{-\nu}\big{(}1+o(1)\big{)}. \tag{2.13}\] Also note that \(f(t_{L})=\tilde{d}(L)\). We combine this with (2.7) to get the estimate on \(C_{h}\). ## 3. The double well problem In order to study the double well problem, we follow the strategy of [11] and compute the matrix of \(\mathcal{H}_{h}\) in a basis \((\varphi_{\ell},\varphi_{r})\), where \(\varphi_{\ell}\) (resp. \(\varphi_{r}\)) is the ground state generated by the left well (resp. the right well). 
More precisely, we define \(\varphi_{\ell}\) and \(\varphi_{r}\) as the normalized solutions to \[(-ih\nabla-\mathbf{A})^{2}\varphi_{\ell}+v(x+L,y)\varphi_{\ell}=\mu_{h}\varphi_{\ell}, \tag{3.1}\] \[(-ih\nabla-\mathbf{A})^{2}\varphi_{r}+v(x-L,y)\varphi_{r}=\mu_{h}\varphi_{r}, \tag{3.2}\] respectively. Note that \(\varphi_{\ell}\) and \(\varphi_{r}\) are related to the radial solution \(\varphi\) of the single-well problem (2.1) as follows. Let us denote by \(\mathbf{A}^{\ell}\) and \(\mathbf{A}^{r}\) the radial gauges centered at \((-L,0)\) and \((L,0)\) respectively, namely \[\mathbf{A}^{\ell}(x,y)=\frac{B}{2}(-y,x+L),\qquad\mathbf{A}^{r}(x,y)=\frac{B}{2}(-y,x-L).\] We define the two functions \(\sigma_{\ell}\) and \(\sigma_{r}\) by \[\sigma_{\ell}(x,y)=\frac{B}{2}y(L-x),\qquad\sigma_{r}(x,y)=-\frac{B}{2}y(L+x). \tag{3.3}\] They satisfy \[\nabla\sigma_{\ell}=\mathbf{A}^{\ell}-\mathbf{A},\qquad\nabla\sigma_{r}=\mathbf{A}^{r}-\mathbf{A},\qquad\text{and}\qquad\sigma_{\ell}-\sigma_{r}=BLy. \tag{3.4}\] Then \(\varphi_{\ell}\) and \(\varphi_{r}\) are related to \(\varphi\) by a magnetic translation, \[\varphi_{\ell}(x,y)=e^{-\frac{i\sigma_{\ell}}{h}}\varphi(x+L,y),\qquad\varphi_{r}(x,y)=e^{-\frac{i\sigma_{r}}{h}}\varphi(x-L,y). \tag{3.5}\] The difference between the two smallest eigenvalues of \(\mathcal{H}_{h}\) can be estimated using \(\varphi_{\ell}\) and \(\varphi_{r}\) through the hopping coefficient, as stated in the following theorem, which can also be found in [5, Section 4]. **Theorem 5**.: _The two smallest eigenvalues of \(\mathcal{H}_{h}\) satisfy_ \[\lambda_{1}=\mu_{h}-|w_{h}|+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)},\] \[\lambda_{2}=\mu_{h}+|w_{h}|+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)},\] _where_ \[w_{h}=\int_{\mathbf{R}^{2}}v(x+L,y)\varphi_{\ell}(x,y)\overline{\varphi_{r}(x,y)}\mathrm{d}x\mathrm{d}y. \tag{3.6}\] The proof of Theorem 5 is a standard application of the Helffer-Sjostrand strategy (see [11, Theorem 3.9] or [7, 8]). For the reader's convenience, we recall the main ideas in Sections 3.1 and 3.2 below. ### An approximation lemma First of all, the Agmon estimates give exponential localization of the eigenfunctions of \(\mathcal{H}_{h}\) inside the wells. This localization is enough to prove that the spectrum of the double-well operator is the superposition of the spectra of the one-well operators, modulo \(\mathcal{O}(h^{\infty})\) (as in [11, 8]). The proof of this standard result is omitted. **Lemma 6**.: _There exist \(c,h_{0}>0\) such that, for \(h\in(0,h_{0})\),_ \[|\lambda_{1}(h)-\mu_{h}|=\mathcal{O}(h^{\infty}),\quad|\lambda_{2}(h)-\mu_{h}|=\mathcal{O}(h^{\infty}),\] _and \(\lambda_{3}(h)-\lambda_{1}(h)\geq ch\)._ Let \(\Psi_{1}\), \(\Psi_{2}\) be the two first eigenfunctions of \(\mathcal{H}_{h}\), and \(\Pi\) the spectral projector on \(\operatorname{Ran}(\Psi_{1},\Psi_{2})\). The lemma below shows that the single well states \(\varphi_{\ell}\) and \(\varphi_{r}\) are close to this eigenspace. **Lemma 7**.: _There exists a \(C>0\) such that, for \(h\) small enough and \(j=\ell,r\) we have_ \[\|\varphi_{j}-\Pi\varphi_{j}\|\leq Ch^{-\frac{3}{2}}e^{-\frac{d(0,2L-a)}{h}},\quad\|(-ih\nabla-\mathbf{A})(\varphi_{j}-\Pi\varphi_{j})\|\leq Ch^{-\frac{3}{2}}e^{-\frac{d(0,2L-a)}{h}}.\] Proof.: We focus on \(\varphi_{\ell}\), the estimates on \(\varphi_{r}\) being identical. Let \(\lambda_{3}\) be the third eigenvalue of \(\mathcal{H}_{h}\). 
By definition of \(\Pi\), we have the lower bound \[\lambda_{3}\|(I-\Pi)\varphi_{\ell}\|^{2}\leq\langle\mathcal{H}_{h}(I-\Pi)\varphi_{\ell},(I-\Pi)\varphi_{\ell}\rangle. \tag{3.7}\] On the other hand, since \(\mathcal{H}_{h}\) and \(\Pi\) commute, we can use the eigenvalue equation (3.1) to get \[\langle\mathcal{H}_{h}(I-\Pi)\varphi_{\ell},(I-\Pi)\varphi_{\ell}\rangle=\mu_{h}\|(I-\Pi)\varphi_{\ell}\|^{2}+\langle(I-\Pi)v_{r}\varphi_{\ell},(I-\Pi)\varphi_{\ell}\rangle, \tag{3.8}\] with \(v_{r}(x,y)=v(x-L,y)\). Combining (3.7) and (3.8) we find \[\|(I-\Pi)\varphi_{\ell}\|\leq(\lambda_{3}-\mu_{h})^{-1}\|v_{r}\varphi_{\ell}\|. \tag{3.9}\] For the gradient estimate we start from \[\|(-ih\nabla-\mathbf{A})(I-\Pi)\varphi_{\ell}\|^{2}\leq\langle\mathcal{H}_{h}(I-\Pi)\varphi_{\ell},(I-\Pi)\varphi_{\ell}\rangle+2|v_{0}|\|(I-\Pi)\varphi_{\ell}\|^{2}. \tag{3.10}\] We then use (3.8) and (3.9) to deduce \[\|(-ih\nabla-\mathbf{A})(I-\Pi)\varphi_{\ell}\|^{2}\leq\big{(}(\lambda_{3}-\mu_{h})^{-1}+(\mu_{h}+2|v_{0}|)(\lambda_{3}-\mu_{h})^{-2}\big{)}\|v_{r}\varphi_{\ell}\|^{2}. \tag{3.11}\] With Proposition 3 we finally bound \(\|v_{r}\varphi_{\ell}\|\) in (3.9) and (3.11), \[\|v_{r}\varphi_{\ell}\|\leq Ch^{-\frac{1}{2}}e^{-\frac{d(0,2L-a)}{h}}. \tag{3.12}\] The result follows since \(\lambda_{3}-\mu_{h}\geq ch\), by Lemma 6. ### Proof of Theorem 5 From Lemma 7 we deduce, for \(i,j\in\{\ell,r\}\), with \(\psi_{j}=\Pi\varphi_{j}\), \[\langle\psi_{i}|\psi_{j}\rangle=\langle\varphi_{i}|\varphi_{j}\rangle+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)}, \tag{3.13}\] \[\langle\psi_{i}|\mathcal{H}_{h}\psi_{j}\rangle=\langle\varphi_{i}|\mathcal{H}_{h}\varphi_{j}\rangle+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)}, \tag{3.14}\] and also \[\langle\varphi_{\ell}|\mathcal{H}_{h}\varphi_{\ell}\rangle=\mu_{h}+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)}, \tag{3.15}\] \[\langle\varphi_{r}|\mathcal{H}_{h}\varphi_{r}\rangle=\mu_{h}+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)}, \tag{3.16}\] \[\langle\varphi_{\ell}|\mathcal{H}_{h}\varphi_{r}\rangle=\mu_{h}\langle\varphi_{\ell}|\varphi_{r}\rangle+w_{h}. \tag{3.17}\] We now use the orthonormal basis \[\hat{\psi}_{\ell}=\frac{1}{\|\psi_{\ell}\|}\psi_{\ell},\qquad\hat{\psi}_{r}=\frac{\psi_{r}-\langle\psi_{r},\hat{\psi}_{\ell}\rangle\hat{\psi}_{\ell}}{\|\psi_{r}-\langle\psi_{r},\hat{\psi}_{\ell}\rangle\hat{\psi}_{\ell}\|}. \tag{3.18}\] It follows from (3.13), (3.14), (3.15), (3.16) and (3.17) that the matrix of \(\mathcal{H}_{h}\) in the orthonormal basis \((\hat{\psi}_{\ell},\hat{\psi}_{r})\) is \[\begin{pmatrix}\langle\hat{\psi}_{\ell}|\mathcal{H}_{h}\hat{\psi}_{\ell}\rangle&\langle\hat{\psi}_{\ell}|\mathcal{H}_{h}\hat{\psi}_{r}\rangle\\ \langle\hat{\psi}_{r}|\mathcal{H}_{h}\hat{\psi}_{\ell}\rangle&\langle\hat{\psi}_{r}|\mathcal{H}_{h}\hat{\psi}_{r}\rangle\end{pmatrix}=\begin{pmatrix}\mu_{h}&w_{h}\\ \overline{w}_{h}&\mu_{h}\end{pmatrix}+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)}. \tag{3.19}\] We deduce that the two first eigenvalues of \(\mathcal{H}_{h}\) are \[\lambda_{\pm}=\mu_{h}\pm|w_{h}|+\mathcal{O}\big{(}h^{-3}e^{-\frac{2d(0,2L-a)}{h}}\big{)}, \tag{3.20}\] and Theorem 5 follows. Note that, instead of \(\hat{\psi}_{j}\), one could also use a more symmetric orthonormalization as in [7]. The choice (3.18) is the same as in [5]. ## 4. Estimates on the hopping coefficient \(w_{h}\) We prove here the following estimate on the hopping coefficient, which governs the spectral gap by Theorem 5. 
**Theorem 8**.: _There exists a constant \(C(B,L,v)>0\) such that_ \[w_{h}=-C(B,L,v)h^{\frac{1}{2}}e^{-\frac{S}{h}}(1+o(1))\] _as \(h\to 0\), with_ \[S=2d(0,L)+\int_{0}^{L}\sqrt{\frac{B^{2}(2L-r)^{2}}{4}-v_{0}}-\sqrt{\frac{B^{2}r^{2}}{4}-v_{0}}\,\mathrm{d}r. \tag{4.1}\] _The constant \(C(B,L,v)\) has a long but explicit expression, see (4.15)._ Contrary to the approach of [5], we use the representation of \(w_{h}\) as an explicit integral on a line separating the two wells, as in [11, Equation (2.25)]. Compared to the non-magnetic case, the novelty here is that the phase appearing in this integral takes complex values due to the magnetic flux. A similar effect was observed in the purely magnetic situation [7]. However, for constant magnetic fields, the estimate is simpler, since the resulting complex integral is Gaussian. We give the details of the proof in Section 4.1 below. Our main result, Theorem 1, follows from the reduction to the hopping coefficient (Theorem 5), together with Theorem 8. We only need to ensure that the error from Theorem 5 is smaller than \(w_{h}\). We discuss this condition in Section 4.2. ### Proof of Theorem 8 Since \(v\) is supported in the ball of radius \(a\), and \(L>a\), the integral defining \(w_{h}\) can be restricted to the left half-plane, \(\Omega_{\ell}=\{(x,y)|x<0\}\), \[w_{h}=\int_{\Omega_{\ell}}v(x+L,y)\varphi_{\ell}\overline{\varphi_{r}}\mathrm{d}x\mathrm{d}y. \tag{4.2}\] Using the equations (3.1) and (3.2) satisfied by \(\varphi_{\ell}\) and \(\varphi_{r}\) on \(\Omega_{\ell}\), we find \[w_{h}=\int_{\Omega_{\ell}}\varphi_{\ell}\cdot\overline{(-ih\nabla-\mathbf{A})^{2}\varphi_{r}}\mathrm{d}x\mathrm{d}y-\int_{\Omega_{\ell}}(-ih\nabla-\mathbf{A})^{2}\varphi_{\ell}\cdot\overline{\varphi_{r}}\mathrm{d}x\mathrm{d}y. \tag{4.3}\] A partial integration yields \[w_{h}=ih\int_{\partial\Omega_{\ell}}\varphi_{\ell}\overline{(-ih\nabla-\mathbf{A})\varphi_{r}}\cdot\mathbf{n}+\overline{\varphi_{r}}(-ih\nabla-\mathbf{A})\varphi_{\ell}\cdot\mathbf{n}, \tag{4.4}\] where \(\mathbf{n}=(1,0)\) is the outward normal to \(\Omega_{\ell}\). Now, due to our choice of gauge, \(A_{1}=0\), and there remains only \[w_{h}=h^{2}\int_{\mathbf{R}}\overline{\varphi_{r}(0,y)}\partial_{x}\varphi_{\ell}(0,y)-\varphi_{\ell}(0,y)\overline{\partial_{x}\varphi_{r}(0,y)}\mathrm{d}y. \tag{4.5}\] We recall that \(\varphi_{\ell}\) and \(\varphi_{r}\) are related to the radial solution \(\varphi\) of the single-well problem by \[\varphi_{\ell}(x,y)=e^{-\frac{i\sigma_{\ell}}{h}}\varphi(x+L,y),\qquad\varphi_{r}(x,y)=e^{-\frac{i\sigma_{r}}{h}}\varphi(x-L,y). \tag{4.6}\] We now use these relations, together with the integral representation formula (2.5) from Lemma 4, to calculate \(w_{h}\). 
First of all, \[\varphi_{j}(0,y)=C_{h}e^{-\frac{i\sigma_{j}}{h}}\int_{0}^{\infty}e^{-\frac{B(L^{2}+y^{2})}{4h}(1+2t)}t^{\alpha-1}(1+t)^{-\alpha}\mathrm{d}t, \tag{4.7}\] and \[\partial_{x}\varphi_{\ell}(0,y)=C_{h}e^{-\frac{i\sigma_{\ell}}{h}}\int_{0}^{\infty}e^{-\frac{B(L^{2}+y^{2})}{4h}(1+2t)}\Big{(}-\frac{i}{h}\partial_{x}\sigma_{\ell}-\frac{BL}{2h}(1+2t)\Big{)}t^{\alpha-1}(1+t)^{-\alpha}\mathrm{d}t,\] \[\partial_{x}\varphi_{r}(0,y)=C_{h}e^{-\frac{i\sigma_{r}}{h}}\int_{0}^{\infty}e^{-\frac{B(L^{2}+y^{2})}{4h}(1+2t)}\Big{(}-\frac{i}{h}\partial_{x}\sigma_{r}+\frac{BL}{2h}(1+2t)\Big{)}t^{\alpha-1}(1+t)^{-\alpha}\mathrm{d}t.\] We insert this in (4.5), \[w_{h}=-h^{2}C_{h}^{2}\int_{\mathbf{R}\times\mathbf{R}_{+}^{2}}\Big{(}\frac{i}{h}(\partial_{x}\sigma_{r}+\partial_{x}\sigma_{\ell})+\frac{BL}{h}(1+t+s)\Big{)}e^{\frac{i}{h}(\sigma_{r}-\sigma_{\ell})}e^{-\frac{B(L^{2}+y^{2})}{2h}(1+t+s)}(ts)^{\alpha-1}(1+t)^{-\alpha}(1+s)^{-\alpha}\mathrm{d}y\mathrm{d}t\mathrm{d}s. \tag{4.8}\] In (4.8) we use \(\sigma_{r}-\sigma_{\ell}=-BLy\), \[w_{h}=-hC_{h}^{2}BL\int_{\mathbf{R}\times\mathbf{R}_{+}^{2}}e^{-\frac{B(L^{2}+y^{2})}{2h}(1+t+s)-\frac{iBLy}{h}}\frac{(1+t+s-iy/L)(ts)^{\alpha-1}}{(1+t)^{\alpha}(1+s)^{\alpha}}\mathrm{d}y\mathrm{d}t\mathrm{d}s.\] Here the \(y\)-integral is Gaussian: inside the exponential we have \[\frac{B(L^{2}+y^{2})}{2}(1+t+s)+iBLy=\frac{B(1+t+s)}{2}\big{(}y+\frac{iL}{1+t+s}\big{)}^{2}+\frac{BL^{2}}{2}\big{(}1+t+s+\frac{1}{1+t+s}\big{)},\] and this complex-centered Gaussian is integrated as \[\int_{\mathbf{R}}\exp\Big{(}-\frac{B(1+t+s)}{2h}\big{(}y+\frac{iL}{1+t+s}\big{)}^{2}\Big{)}\mathrm{d}y=\sqrt{\frac{2\pi h}{B(1+t+s)}}, \tag{4.9}\] and \[\int_{\mathbf{R}}\frac{-iy}{L}\exp\Big{(}-\frac{B(1+t+s)}{2h}\big{(}y+\frac{iL}{1+t+s}\big{)}^{2}\Big{)}\mathrm{d}y=\frac{-1}{1+t+s}\sqrt{\frac{2\pi h}{B(1+t+s)}}. \tag{4.10}\] Thus, \[w_{h}=-h^{\frac{3}{2}}C_{h}^{2}\sqrt{2\pi BL^{2}}\int_{\mathbf{R}_{+}^{2}}\exp\Big{(}-\frac{BL^{2}}{2h}\big{(}1+t+s+\frac{1}{1+t+s}\big{)}\Big{)}\frac{\omega(s,t)(ts)^{\alpha-1}}{(1+t)^{\alpha}(1+s)^{\alpha}}\mathrm{d}t\mathrm{d}s,\] with \(\omega(s,t)=(1+t+s)^{1/2}-(1+t+s)^{-3/2}\geq 0\). We replace \(\alpha=\frac{|v_{0}|}{2Bh}+\nu+o(1)\) as \(h\to 0\), \[w_{h}=-h^{\frac{3}{2}}C_{h}^{2}\sqrt{2\pi BL^{2}}\int_{\mathbf{R}_{+}^{2}}e^{-\frac{g(s,t)}{h}}\frac{\omega(s,t)(ts)^{\nu-1}}{(1+t)^{\nu}(1+s)^{\nu}}\mathrm{d}t\mathrm{d}s\,(1+o(1)), \tag{4.11}\] with \[g(s,t)=\frac{BL^{2}}{2}\Big{(}1+t+s+\frac{1}{1+t+s}\Big{)}+\frac{|v_{0}|}{2B}\Big{(}\ln\big{(}\frac{1+t}{t}\big{)}+\ln\big{(}\frac{1+s}{s}\big{)}\Big{)}. \tag{4.12}\] The function \(g\) has a unique critical point, which is also a global minimum, at \(t=s=t_{\star}\), with \[t_{\star}=\frac{1}{2}\sqrt{N}-\frac{1}{2}+\frac{1}{2}\sqrt{1+N},\qquad N=\frac{|v_{0}|}{B^{2}L^{2}}. \tag{4.13}\] Moreover, \[g(t_{\star},t_{\star})=BL^{2}\Big{(}\sqrt{1+N}+N\ln\big{(}\frac{1+\sqrt{1+N}}{\sqrt{N}}\big{)}\Big{)}=\int_{0}^{2L}\sqrt{\frac{B^{2}r^{2}}{4}+|v_{0}|}\,\mathrm{d}r. \tag{4.14}\] Using the Laplace method, the integral (4.11) can be estimated as \[w_{h}\sim-h^{5/2}C_{h}^{2}\frac{(2\pi)^{3/2}\sqrt{BL^{2}}}{|g^{\prime\prime}(t_{\star},t_{\star})|^{\frac{1}{2}}}\frac{\omega(t_{\star},t_{\star})t_{\star}^{2\nu-2}}{(1+t_{\star})^{2\nu}}\exp\Big{(}-\frac{1}{h}\int_{0}^{2L}\sqrt{\frac{B^{2}r^{2}}{4}+|v_{0}|}\,\mathrm{d}r\Big{)}, \tag{4.15}\] when \(h\to 0\). 
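As a numerical sanity check of (4.13)-(4.14) (our own addition, with arbitrary sample values of \(B\), \(L\), \(|v_{0}|\)), one can compare the closed-form minimum of \(g\) on the diagonal \(t=s\) with a direct quadrature of \(\int_{0}^{2L}\sqrt{B^{2}r^{2}/4+|v_{0}|}\,\mathrm{d}r\):

```c
/* Numerical cross-check of (4.13)-(4.14) (our addition; B, L, |v0| are
 * arbitrary sample values): the closed-form minimum of g on the diagonal
 * t = s must equal the integral of sqrt(B^2 r^2/4 + |v0|) over [0, 2L].
 * Compile with: cc check.c -lm */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double B = 1.0, L = 2.0, v0abs = 1.0;
    double N  = v0abs / (B * B * L * L);                     /* N = |v0|/(B^2 L^2) */
    double ts = 0.5 * sqrt(N) - 0.5 + 0.5 * sqrt(1.0 + N);   /* t_star, Eq. (4.13) */
    double g  = B * L * L * (sqrt(1.0 + N)
              + N * log((1.0 + sqrt(1.0 + N)) / sqrt(N)));   /* Eq. (4.14), closed form */

    /* trapezoid rule for the right-hand side of (4.14) */
    int n = 100000;
    double h = 2.0 * L / n, I = 0.0;
    for (int k = 0; k <= n; k++) {
        double r = k * h, f = sqrt(B * B * r * r / 4.0 + v0abs);
        I += (k == 0 || k == n) ? 0.5 * f : f;
    }
    I *= h;

    printf("t_star = %.10f\n", ts);
    printf("g(t*,t*) = %.10f, quadrature = %.10f\n", g, I);
    return 0;
}
```

For \(B=L^{2}/4=|v_{0}|=1\) both expressions evaluate to the same number (about 5.9158 for \(B=1\), \(L=2\), \(|v_{0}|=1\)), as they should.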
We insert the estimate (2.6) on \(C_{h}\), and we find an explicit constant \(C(B,L,v)>0\) such that \[w_{h}\sim-C(B,L,v)h^{\frac{1}{2}}\exp\Big{(}-\frac{1}{h}\int_{0}^{2L}\sqrt{\frac{B^{2}r^{2}}{4}+|v_{0}|}\,\mathrm{d}r+\frac{2}{h}\tilde{d}(L)-\frac{2}{h}d(0,L)\Big{)}. \tag{4.16}\] We find the exponential rate (4.1) since \[\int_{0}^{2L}\sqrt{\frac{B^{2}r^{2}}{4}+|v_{0}|}\,\mathrm{d}r-2\tilde{d}(L)=\int_{0}^{L}\sqrt{\frac{B^{2}(2L-r)^{2}}{4}-v_{0}}-\sqrt{\frac{B^{2}r^{2}}{4}-v_{0}}\,\mathrm{d}r.\] ### Condition on the error terms To ensure that the error terms from Theorem 5 are smaller than \(w_{h}\), we need \(S<2d(0,2L-a)\). We recall formula (4.1) for \(S\), which can be rewritten as \[S=d(0,2L)+d(0,L)-\tilde{d}(L)=2d(0,2L)-\tilde{d}(2L). \tag{4.17}\] Thus, \[S-2d(0,2L-a)=2d(2L-a,2L)-\tilde{d}(2L)=d(2L-a,2L)-\tilde{d}(2L-a).\] Hence the condition \(S<2d(0,2L-a)\) is satisfied as soon as \[d(2L-a,2L)<\tilde{d}(2L-a). \tag{4.18}\] We now show that (4.18) is true as soon as \(L>(1+\frac{\sqrt{3}}{2})a\). First, we bound the left-hand side by \[d(2L-a,2L)\leq\int_{2L-a}^{2L}\frac{Br}{2}+\sqrt{|v_{0}|}\,\mathrm{d}r=\sqrt{|v_{0}|}a+\frac{Ba}{4}(4L-a), \tag{4.19}\] and we bound the right-hand side from below by \[\tilde{d}(2L-a)\geq\int_{0}^{a}\sqrt{|v_{0}|}\,\mathrm{d}r+\int_{a}^{2L-a}\frac{Br}{2}\,\mathrm{d}r=\sqrt{|v_{0}|}a+BL(L-a). \tag{4.20}\] Thus, a sufficient condition for (4.18) to hold is \[BL(L-a)>BLa-\frac{Ba^{2}}{4},\] which is equivalent to \(L>(1+\frac{\sqrt{3}}{2})a\). ## Acknowledgements The author thanks Soren Fournais, Bernard Helffer, Ayman Kachmar, and Nicolas Raymond for many enlightening discussions, and for encouraging this work. This work is funded by the European Union. Views and opinions expressed are however those of the author only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
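As a final numerical illustration of Theorem 1 and Remark 3 (our addition, using the hypothetical bump well from the earlier sketch; all parameter values are sample choices, not the paper's), the following C program evaluates the decay rate \(S\) of (1.6) by quadrature, checks the bounds (1.8), and tests the geometric condition \(L>(1+\frac{\sqrt{3}}{2})a\):

```c
/* Numerical illustration of the decay rate S of (1.6) and the bounds (1.8)
 * (our addition; the bump well and all parameter values are sample choices).
 * Compile with: cc rate.c -lm */
#include <math.h>
#include <stdio.h>

static const double v0 = -1.0, a = 1.0, B = 1.0, L = 2.0;

static double v(double r)
{
    double s = r / a;
    return (s >= 1.0) ? 0.0 : v0 * exp(1.0 - 1.0 / (1.0 - s * s));
}

/* integrand of the Agmon distance (1.5) */
static double agmon(double r)   { return sqrt(B * B * r * r / 4.0 + v(r) - v0); }
/* free (v = 0) integrand, as in the extra term of (1.6) */
static double freepot(double r) { return sqrt(B * B * r * r / 4.0 - v0); }
static double shifted(double r) { return sqrt(B * B * (2.0 * L - r) * (2.0 * L - r) / 4.0 - v0); }

static double trapz(double (*f)(double), double r1, double r2, int n)
{
    double h = (r2 - r1) / n, I = 0.5 * (f(r1) + f(r2));
    for (int k = 1; k < n; k++) I += f(r1 + k * h);
    return I * h;
}

int main(void)
{
    int n = 200000;
    double S  = 2.0 * trapz(agmon, 0.0, L, n)
              + trapz(shifted, 0.0, L, n) - trapz(freepot, 0.0, L, n); /* Eq. (1.6) */
    double hi = trapz(agmon, 0.0, 2.0 * L, n);                         /* d(0,2L)   */
    double lo = trapz(agmon, 0.0, 2.0 * L - a, n) + trapz(agmon, 0.0, a, n);
    printf("S = %.6f, bounds (1.8): %.6f <= S <= %.6f\n", S, lo, hi);
    printf("L > (1 + sqrt(3)/2) a: %s\n", L > (1.0 + sqrt(3.0) / 2.0) * a ? "yes" : "no");
    return 0;
}
```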
2309.03617
NeuroCodeBench: a plain C neural network benchmark for software verification
Safety-critical systems with neural network components require strong guarantees. While existing neural network verification techniques have shown great progress towards this goal, they cannot prove the absence of software faults in the network implementation. This paper presents NeuroCodeBench - a verification benchmark for neural network code written in plain C. It contains 32 neural networks with 607 safety properties divided into 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning. Our preliminary evaluation shows that state-of-the-art software verifiers struggle to provide correct verdicts, due to their incomplete support of the standard C mathematical library and the complexity of larger neural networks.
Edoardo Manino, Rafael Sá Menezes, Fedor Shmarov, Lucas C. Cordeiro
2023-09-07T10:19:33Z
http://arxiv.org/abs/2309.03617v1
# NeuroCodeBench: a plain C neural network benchmark for software verification ###### Abstract Safety-critical systems with neural network components require strong guarantees. While existing neural network verification techniques have shown great progress towards this goal, they cannot prove the absence of software faults in the network implementation. This paper presents _NeuroCodeBench_ - a verification benchmark for neural network code written in plain C. It contains 32 neural networks with 607 safety properties divided into 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning. Our preliminary evaluation shows that state-of-the-art software verifiers struggle to provide correct verdicts, due to their incomplete support of the standard C mathematical library and the complexity of larger neural networks. ## 1 Introduction In contrast to classic software development, neural networks are crafted via a long process of trial and error that terminates when their predictive performance reaches a satisfactory level [7, 43]. The iterative and performance-driven nature of this process leaves neural networks vulnerable on many fronts [23]: poor performance on out-of-distribution [18] and adversarial inputs [37], misspecification of the neural architecture and training process [24], invocation of broken and deprecated libraries [35], outright software bugs [20]. Unfortunately, many of these vulnerabilities are not easy to catch early in the development process and may remain hidden until after deployment. The most successful techniques for guaranteeing the functional correctness of neural networks operate at a high level of abstraction, where finite precision and other implementation details are not considered [27, 46, 36]. Although efforts to debug the actual implementation of neural networks exist, they are based on automatic testing and thus cannot prove correctness for all inputs [39, 20, 17]. This lack of guarantees is especially concerning for safety-critical systems since common software vulnerabilities [12] (e.g., arithmetic overflows, invalid memory accesses) can make the networks produce wrong results, expose sensitive data or corrupt the system they are executed on. While off-the-shelf software verifiers can be used to check neural network code [42, 33], there has been no systematic attempt at assessing their performance on such tasks. Typically, state-of-the-art verification tools (e.g., CPAChecker [5], ESBMC [19], CBMC [28], UAutomizer [21]) are compared on SV-COMP [4] - the largest software verification competition with over 15'000 C programs ranging from hand-crafted code to real-world software (e.g., drivers, Linux kernel). However, this competition lacks a dedicated benchmark for either neural networks or mathematical libraries (e.g., math.h). This paper presents _NeuroCodeBench_ - a reasoned benchmark of neural network code in plain C. It is designed to exercise the capabilities of existing software verifiers without overwhelming them with excessively large instances. More specifically, it contains 32 neural networks with 607 safety properties in SV-COMP format divided into the following 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning. 
The last two categories are converted to C code from the VNN-COMP'22 suite [36], whereas the rest are entirely new. As a demonstration, we run the leading tools of SV-COMP 2023 in reachability, falsification and floating point arithmetic [4]. Our preliminary results show that these verifiers have incomplete support of the math.h library and struggle on larger neural networks. Lastly, we make _NeuroCodeBench_ publicly available at [31] and [30]. ## 2 The Benchmark ### Design Requirements In designing _NeuroCodeBench_, we target two main requirements. First, our benchmark must be representative of existing neural network code. Mainstream libraries like PyTorch [3] and Tensorflow [9] have an opaque multi-language interpreted structure that can be easily tested [20, 17], but does not lend itself to automated software verification. For this reason, we opt for micro-controller frameworks, where the source code of the network is fully available. We use two existing tools for converting high-level neural network specifications to standalone C code: onnx2c[45] and keras2c[14, 15]. Second, our benchmark must contain safety properties whose verdict is known, with reasonably balanced sets of safe and unsafe verdicts. Existing works rely on the verdicts of a single tool [42, 33] and thus are not a reliable source of information. Here, we establish the ground-truth verdict of our 607 safety properties in three ways (see Table 1): _A Priori_ verdicts come from the specific mathematical structure of the functions and networks we verify; _Brute Force_ verdicts come from exhaustive exploration of all possible floating point inputs; _VNN-COMP'22_ verdicts come from the independently-run neural network verification competition [36]. For the latter, we only keep unsafe properties if we can reproduce the corresponding counterexamples. ### Benchmark Description Math Library.Typically, neural networks rely on 32-bit floating point operations1 and invoke the corresponding functions in the math.h library. More specifically, most activation functions depend on \begin{table} \begin{tabular}{|c|c|c|c|} \hline Benchmark Category & Safe & Unsafe & Ground Truth \\ \hline math\_functions & 33 & 11 & A Priori \\ activation\_functions & 40 & 16 & A Priori \\ hopfield\_nets & 47 & 33 & A Priori \\ poly\_approx & 48 & 48 & Brute Force \\ reach\_prob\_density & 22 & 13 & VNN-COMP’22 \\ reinforcement\_learning & 103 & 193 & VNN-COMP’22 \\ \hline \end{tabular} \end{table} Table 1: Overview of _NeuroCodeBench_. The “Unsafe” column comprises all properties for which a counterexample exists. The “Ground Truth” column reports the source of our verdicts. exponential, logarithm, error function, absolute value, and max function (see activation_functions category). Similarly, positional encodings depend on sine and cosine [29], while Euclidean distances and vector normalisation depend on the square root [8]. In this light, it is worth checking whether software verifiers correctly handle calls to math.h. We write benchmarks that depend on the following functions: acosf, asinf, cosf, erff, expf, fabsf, logf, sinf and sqrtf. Since their semantics are platform-specific, we assume compliance with the IEEE 754 standard for 32-bit floating point [25] and the C99 standard for math.h[26]. We provide 44 safety properties (see Table 1) that check for a wide range of behavior: output bounds, monotonicity, periodicity, symmetry, function inversion and derivatives. 
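To illustrate the format of these verification tasks, here is a small harness in the SV-COMP style that the benchmark follows (our sketch; the actual NeuroCodeBench properties differ in detail, and `reach_error()` is assumed to be provided by the benchmark file, as is customary in SV-COMP benchmarks). It encodes an output-bound property of a logistic activation built on `expf`:

```c
/* Illustrative harness in the SV-COMP style used by the benchmark
 * (our sketch; actual NeuroCodeBench properties differ in detail, and
 * reach_error() is assumed to be provided by the benchmark file).
 * Property: output bound for a logistic activation built on expf,
 * namely 0 <= sigma(x) <= 1 for every finite float x. */
#include <math.h>

extern float __VERIFIER_nondet_float(void);
extern void  __VERIFIER_assume(int cond);
extern void  reach_error(void);

static float logistic(float x)
{
    return 1.0f / (1.0f + expf(-x));
}

int main(void)
{
    float x = __VERIFIER_nondet_float();
    __VERIFIER_assume(isfinite(x));   /* restrict to finite inputs */

    float y = logistic(x);
    if (!(y >= 0.0f && y <= 1.0f))    /* the output-bound safety property */
        reach_error();
    return 0;
}
```

A verifier must reason about the floating-point semantics of `expf` (including overflow to infinity for large negative `x`) to conclude that `reach_error()` is unreachable.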
Activation Functions.Most of the non-linear behaviour in neural networks is concentrated in the activation layers [8]. These contain fairly restricted sets of activation functions whose implementation should be verified for correctness. Our benchmark includes the most popular ones [38, 22]: Elu, Gelu, Logistic, ReLU, Softmax, Softplus, Softsign and TanH. In turn, their definition depends on the functions erff, expf, expmf, fabsf, fmaxf, log1pf and tanhf. While most activation functions are univariate, the Softmax accepts multivariate inputs. To keep our verification instances small, we limit the size of Softmax input vectors to 2 and 4. Error-Correcting Networks.For a long time, it has been known that certain types of recurrent neural networks can act as error-correcting decoders [1, 11]. The main idea is encoding a sequence of \(d\) bits into a vector \(x\in\{\pm 1\}^{d}\), and letting the neural network flip the sign of the incorrect entries. Here, we choose Hopfield networks with Hebbian weights since their properties are well understood [1]. Specifically, we build networks reconstructing a single pattern \(x=(1,\ldots,1)\). We vary the pattern length in \(d\in\{4,8,16,32,64\}\) and the number of recursions in \(t\in[1,4]\). For compatibility with keras2c[14, 15], we use Softsign and TanH activations (see Table 2) rather than traditional sign activations [1]. Our safety properties check whether the network can reconstruct \(x\) when the first \(d/2-1\) entries can take any value in \([-1,1]\). Due to the network symmetry, we establish the ground truth by checking the extreme inputs \(x\) and \(x^{\prime}=(-1,\ldots,1)\), where \(x^{\prime}_{i}=-1\) for all \(i\in[1,d/2-1]\). Transfer Function NetworksIn several engineering areas, neural networks are used to approximate the transfer function of electrical components [47, 32]. Here, we emulate this process by defining a hypothetical polynomial component \(f(x)=0.125x^{4}-0.25x^{3}-0.75x^{2}+x+0.5\) with oscillating transfer function. Then, we create a training set by uniformly sampling \(f(x)\) in \(x\in[-2,3]\) and train 16 different feedforward ReLU networks \(\hat{f}(x)\). The smallest has four layers with four neurons each, and the largest has a single hidden layer with 1024 neurons (see poly_approx category in Table 2). We formally verify the approximation quality by measuring the difference between \(\hat{f}(x)\) and \(f(x)\) for each possible 32-bit floating point value in \([-2,3]\). With this information, we write 96 robustness properties (see Table 1). Specifically, we check the input domain in a small interval \(\mathcal{X}\) of size 0.1 \begin{table} \begin{tabular}{|r|c|c|c|c|c|c|c|} \hline Neural Network Category & Inputs & Outputs & Layers & Neurons & Activations & Architecture & Conversion \\ \hline hopfield\_nets & 4–64 & 4–64 & 1 & 4–64 & Softsign/TanH & Recurrent & keras2c \\ poly\_approx & 1 & 1 & 1–4 & 16–1024 & ReLU & Feedforward & keras2c \\ reach\_prob\_density & 3–14 & 3–14 & 2–3 & 64–192 & ReLU & Feedforward & onnx2c \\ reinforcement\_learning & 4–8 & 2–8 & 2 & 128–512 & ReLU & Feedforward & onnx2c \\ \hline \end{tabular} \end{table} Table 2: Neural networks in _NeuroCodeBench_. The “Layers” and “Neurons” columns refer to the hidden layers only. The networks in hopfield_nets have a number of iterations between 1 and 4. around the worst approximation error. There, we make sure that the error is always below a given threshold \(\epsilon\geq|f(x)-\hat{f}(x)|,\forall x\in\mathcal{X}\). 
We select six decreasing values of \(\epsilon\) for each network: three make the property hold and three yield a counterexample. VNN-COMP NetworksSince its first edition in 2020, the International Verification of Neural Networks Competition (VNN-COMP) publishes all its benchmarks [36]. These benchmarks do not contain implementation details since they target a higher level of abstraction (real number arithmetic, no memory model). To provide a reference implementation, we propose the following conversion process: we translate the networks from ONNX format [13] to C with onnx2c[45], and the safety properties from VNN-LIB [44] to a minimal main() function with pre- and post-conditions. Among all categories of the 2022 edition [36], we select two that contain relatively small neural networks (see Table 2): reach_prob_density are networks that approximate probability densities [34], reinforcement_learning are control networks trained via reinforcement learning [40]. ### Preliminary Evaluation Here, we use _NeuroCodeBench_ to evaluate six software verifiers which achieved top-places2 in the _Reachability_, _Falsification_ and _Floats_ categories of SV-COMP 2023 [4]. We keep our experimental setup as similar to SV-COMP as possible: we use the benchmarking tool BenchExec [6] with 2 CPU cores, 6GB of RAM and 900 seconds of total CPU time per verifier for each verification task. Footnote 2: We omit VeriAbs [2] and VeriAbsL [16] due to licence restrictions. We omit BRICK [10] due to technical issues. We omit cooperative verifiers for clarity. We run PeSCo [41] with the CPAChecker binary from SV-COMP 2023. Our preliminary results in Figure 1 show that all six verifiers produce a large ratio of _incorrect-to-correct_ verdicts. One of the likely reasons is incomplete support of math.h functions, which appear in the first three categories of Table 1. Indeed, CBMC, ESBMC, CPAChecker and UAutomizer produce many math-related warnings in their output, even when their verdict is correct or unknown. At the same time, approximately half of the unknown verdicts are due to timeouts on the larger neural networks of _NeuroCodeBench_, which suggests that the verifiers struggle with their complexity. ## 3 Conclusions and Future Work _NeuroCodeBench_ is a challenging benchmark of neural network code in plain C. Our preliminary analysis demonstrates that state-of-the-art verifiers cannot produce correct verdicts on most of our safety properties. In the future, we plan to provide complete operational models for the math.h library, whose absence impacts existing verifiers. Furthermore, we plan to contribute _NeuroCodeBench_ to SV-COMP and draw the attention of that community to the challenges of verifying neural code. Figure 1: Results of state-of-the-art software verifiers on _NeuroCodeBench_ after 900 seconds.
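To make the "Brute Force" ground-truth procedure concrete, here is a minimal C sketch of it (our reconstruction; `net()` is a placeholder name for the keras2c-generated forward pass, which is not reproduced here):

```c
/* Sketch of the "Brute Force" ground-truth procedure for poly_approx
 * (our reconstruction): enumerate every 32-bit float in [-2, 3] with
 * nextafterf and record the worst approximation error. f is the paper's
 * polynomial; net() is a placeholder for the generated network. Roughly
 * 2e9 values are visited, so the scan takes minutes.
 * Compile with: cc scan.c -lm */
#include <math.h>
#include <stdio.h>

static float f(float x)
{
    return 0.125f * x * x * x * x - 0.25f * x * x * x - 0.75f * x * x + x + 0.5f;
}

static float net(float x)
{
    return f(x); /* stand-in: substitute the keras2c forward pass here */
}

int main(void)
{
    float worst = 0.0f, argworst = -2.0f;
    for (float x = -2.0f; x <= 3.0f; x = nextafterf(x, INFINITY)) {
        float e = fabsf(f(x) - net(x));
        if (e > worst) { worst = e; argworst = x; }
    }
    printf("worst error %g at x = %g\n", worst, argworst);
    return 0;
}
```

With the worst error and its location in hand, the robustness interval \(\mathcal{X}\) and the thresholds \(\epsilon\) described above can be chosen so that the safe/unsafe verdicts are known by construction.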
2309.16568
Localized polariton states in a photonic crystal intercalated by a transition metal dichalcogenide monolayer
Beyond the extensively studied microcavity polaritons, which are coupled modes of semiconductor excitons and microcavity photons, nearly 2D semiconductors placed in a suitable environment can support spatially localized exciton-polariton modes. We demonstrate theoretically that two distinct types of such modes can exist in a photonic crystal with an embedded transition metal dichalcogenide (TMD) monolayer and derive an equation that determines their dispersion relations. The localized modes of two types occur in the zeroth- and first-order stop-bands of the crystal, respectively, and have substantially different properties. The latter type of the localized modes, which appear inside the light cone, can be described as a result of coupling of the TMD exciton and an optical Tamm state of the TMD-intercalated photonic crystal. We suggest an experiment for detecting these modes and simulate it numerically.
Yuliy V. Bludov, Carlos Fernandes, Nuno M. R. Peres, Mikhail I. Vasilevskiy
2023-09-28T16:24:55Z
http://arxiv.org/abs/2309.16568v1
Localized polariton states in a photonic crystal intercalated by a transition metal dichalcogenide monolayer ###### Abstract Beyond the extensively studied microcavity polaritons, which are coupled modes of semiconductor excitons and microcavity photons, nearly 2D semiconductors placed in a suitable environment can support spatially localized exciton-polariton modes. We demonstrate theoretically that two distinct types of such modes can exist in a photonic crystal with an embedded transition metal dichalcogenide (TMD) monolayer and derive an equation that determines their dispersion relations. The localized modes of two types occur in the zeroth- and first-order stop-bands of the crystal, respectively, and have substantially different properties. The latter type of the localized modes, which appear inside the light cone, can be described as a result of coupling of the TMD exciton and an optical Tamm state of the TMD-intercalated photonic crystal. We suggest an experiment for detecting these modes and simulate it numerically. Transition metal dichalcogenide Surface polariton Localized mode ## 1 Introduction Since the discovery of the one-atom-thick two-dimensional (2D) material graphene in 2004 [1], there has been great demand in electronics for similar materials, but with bandgaps in their electronic spectrum. Within a few years, such 2D semiconductors were found, among them the transition metal dichalcogenide (TMD) family [2, 3] and phosphorene [4]. In particular, TMDs possess bandgaps whose width corresponds to the optical range of wavelengths, and they behave as 2D semiconductors. The 2D nature of the TMDs leads to reduced dielectric screening and, consequently, strong Coulomb interaction between electrons and holes, which results in the formation of tightly bound excitons [5]. The excitonic luminescence of these materials is of practical interest and can be enhanced, and even the lasing regime can be achieved, by incorporating the 2D layer into an appropriate photonic structure [6, 7, 8]. Generally, the optical spectra of TMDs are characterized by the presence of two excitonic transitions, referred to as type A and type B [9]. If a TMD layer is embedded into a microcavity (MC), these excitons can effectively couple to MC photons, forming MC exciton-polaritons (EPs) [10, 11, 12, 13, 14, 15, 16]. This exciton-light coupling can be described by the 2D optical conductivity of a TMD layer, \[\sigma_{TMD}\left(\omega\right)=\sigma_{0}\sum_{j=A,B}\frac{P_{j}}{\gamma_{j}+i\omega_{j}-i\omega}\,, \tag{1}\] which takes into account the aforementioned excitonic transitions [14]. Its real part exhibits sharp peaks at the excitonic transition frequencies, \(\omega_{A}\) and \(\omega_{B}\) (\(\omega_{A}<\omega_{B}\)); in Eq. (1), \(P_{A\left(B\right)}\) stands for the longitudinal-transverse splitting of exciton A (B). Also, \(\gamma_{A\left(B\right)}\) are damping parameters and \(\sigma_{0}=e^{2}/\left(4\hbar\right)\) is the quantum of conductivity. The imaginary part of the conductivity changes sign in the vicinity of the excitonic transition frequencies [see Fig. 1(a)]. As a result, the TMD is characterized by a negative imaginary part of the conductivity in two frequency ranges, \(\omega<\omega_{A}\) and \(\omega_{*}<\omega<\omega_{B}\) (white regions in Fig. 1), while \(\mathrm{Im}(\sigma_{TMD})\) is positive for \(\omega_{A}<\omega<\omega_{*}\) and \(\omega>\omega_{B}\) (grey regions in Fig. 1). Here \(\omega_{*}=\left(P_{A}\omega_{B}+P_{B}\omega_{A}\right)/\left(P_{A}+P_{B}\right)\). 
If the TMD is cladded by two semi-infinite dielectric media with dielectric constants \(\varepsilon_{1}\) and \(\varepsilon_{2}\), it is able to sustain _surface_ EPs with a dispersion relation, \(\omega(k_{y})\), determined by the equation [14] \[k_{z}^{(1)}+k_{z}^{(2)}+\frac{4\pi\omega}{c^{2}}\sigma_{TMD}\left(\omega\right)=0\,, \tag{2}\] where \(k_{z}^{(m)}=\sqrt{\kappa^{2}\varepsilon_{m}-k_{y}^{2}}\) and \(k_{y}\) are out-of-plane and in-plane components of the wave vector, respectively, in the medium with dielectric constant \(\varepsilon_{m}\) (\(m=1,2\)), \(\kappa=\omega/c\), and \(c\) is the velocity of light in vacuum. Neglecting damping, \(k_{z}^{(1)}\) and \(k_{z}^{(2)}\) are purely imaginary for surface EPs, in contrast with the MC polaritons. As an example, the dispersion of surface EPs in a single layer of \(\mathrm{MoS}_{2}\) is depicted in Fig. 1(b). There are two such modes [type A and type B, depicted in Fig. 1(b) by solid blue and pink lines, correspondingly], which exist in the frequency ranges where the imaginary part of the TMD conductivity is negative (white domains in Fig. 1), as it happens for s-polarized plasmon-polaritons in graphene [17]. Notice that both A and B-type surface EP modes occur outside of the light cone, their dispersion curves bifurcate from the less steep light line, \(\omega=ck_{y}/\sqrt{\max\left(\varepsilon_{1},\varepsilon_{2}\right)}\), and asymptotically approach the exciton transition frequencies, \(\omega_{A}\) and \(\omega_{B}\), at large values of \(k_{y}\). Like other evanescent waves, surface EPs cannot be excited directly by external propagating electromagnetic (EM) waves. As mentioned above, the efficiency of the exciton-light coupling can be enhanced considerably if the TMD layer is placed in an optical resonance system, such as a Fabry-Perot [11, 14], micropillar [10, 18] or Tamm-plasmon [12, 13] microcavity, on top of a specially prepared metasurface [15, 8] or just near a plasmonic surface [19]. In this way, the strong coupling regime can be achieved, which offers a range of potential applications (see references in the recent papers [8, 18]). The presence of an MC confining the light allows for the existence of the "_bulk_"-type MC polaritons with real \(k_{z}^{(1)}\), \(k_{z}^{(2)}\)[20], which coexist with the _surface_ EPs [14]. However, it may be not an MC but rather a heterostructure formed by two finite photonic crystals (PCs), or Bragg reflectors (BRs), where the latter type of modes exist. The existence of lossless EM interface modes at the boundary of such a heterostructure was demonstrated in Ref. [21], where they were named optical Tamm states; later the term passed to designate mostly localized (in one direction) EM modes formed in the gap between a BR and a metal surface, coupled to metal plasmons [22, 23, 24]. In this paper we demonstrate that a TMD monolayer embedded in a PC is able to sustain localized EP eigenstates, similar to other types of "defects" in PCs, which break the translational symmetry along the PC axis [25, 26, 27, 28, 29, 30]. These states are characterized by a well-defined real in-plane component of the \(k\)-vector, i.e. they correspond to evanescent waves coupled to the TMD excitons. Such modes can be excited directly by external light if the number of periods in the PC is not too large. Moreover, using oblique incidence of light and measuring absorbance, one can probe the dispersion relation of these modes as demonstrated by our numerical simulation. 
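As a numerical complement (our addition), the dispersion equation (2) can be solved for \(k_{y}\) at fixed \(\omega\) by bisection. The sketch below works in the lossless limit \(\gamma_{A}=\gamma_{B}=0\), uses the \(\mathrm{MoS}_{2}\) parameters quoted in the caption of Fig. 1, and assumes Gaussian units, in which the dimensionless coupling is \(4\pi\sigma_{0}/c=\pi\alpha\) with \(\alpha\) the fine-structure constant; the chosen frequency is a sample value a few meV below \(\omega_{A}\), where the A-mode exists.

```c
/* Root-finding sketch for the surface-EP dispersion (2) in the lossless
 * limit gamma_A = gamma_B = 0 (our addition). MoS2 parameters are those
 * quoted in the caption of Fig. 1; the dimensionless coupling is
 * 4*pi*sigma_0/c = pi*alpha (Gaussian units). Units: omega in eV,
 * wavevectors in 1/nm, hbar*c = 197.327 eV nm. Compile with: cc ep.c -lm */
#include <math.h>
#include <stdio.h>

#define HBARC   197.327
#define PIALPHA (3.14159265358979 / 137.036)

static const double PA = 0.2530, wA = 1.93715, PB = 0.2517, wB = 2.10327;
static const double eps1 = 2.0, eps2 = 12.0;

/* lossless limit: -Im(sigma)/sigma_0 = PA/(wA - w) + PB/(wB - w) */
static double sfac(double w) { return PA / (wA - w) + PB / (wB - w); }

/* a root of F in k_y solves Eq. (2) with purely imaginary k_z^(1,2) */
static double F(double ky, double w)
{
    double kap = w / HBARC;
    return sqrt(ky * ky - kap * kap * eps1) + sqrt(ky * ky - kap * kap * eps2)
         - kap * PIALPHA * sfac(w);
}

int main(void)
{
    double w   = 1.9365;  /* eV, slightly below omega_A (A-mode range) */
    double kap = w / HBARC;
    double lo  = kap * sqrt(eps2) * (1.0 + 1e-12), hi = 100.0 * kap;
    if (F(lo, w) * F(hi, w) > 0.0) { printf("no root bracketed\n"); return 1; }
    for (int i = 0; i < 200; i++) { /* bisection */
        double mid = 0.5 * (lo + hi);
        if (F(lo, w) * F(mid, w) <= 0.0) hi = mid; else lo = mid;
    }
    printf("omega = %.4f eV  ->  k_y = %.6f nm^-1\n", w, lo);
    return 0;
}
```

In this sketch roots exist only where \(\mathrm{Im}(\sigma_{TMD})<0\), and, because \(\pi\alpha\) is small, the A-branch is found only within a few meV of \(\omega_{A}\), consistent with the nearly flat dispersion of Fig. 1(b).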
Figure 1: (a) Frequency dependence of the imaginary part of the \(\mathrm{MoS}_{2}\) optical conductivity for \(\gamma_{A}=\gamma_{B}=0\); (b) Dispersion relation of surface EPs (solid blue and pink lines), supported by a single \(\mathrm{MoS}_{2}\) layer located at the interface between two dielectrics with \(\varepsilon_{1}=2\) and \(\varepsilon_{2}=12\). In both panels the frequency regions with positive \(\mathrm{Im}(\sigma_{TMD})\) are shadowed. In panel (b) the light lines \(\omega=ck_{y}/\sqrt{\varepsilon_{m}}\quad(m=1,2)\) are depicted by orange dashes. Here and throughout the paper we consider the following parameters for the \(\mathrm{MoS}_{2}\) conductivity (1): \(P_{A}=0.2530\,\mathrm{eV}\), \(\omega_{A}=1.93715\,\mathrm{eV}\), \(P_{B}=0.2517\,\mathrm{eV}\), \(\omega_{B}=2.10327\,\mathrm{eV}\). ## 2 Electromagnetic eigenmodes Let us consider a photonic crystal with a period \(D\), composed of two alternating dielectric layers [along the \(z\)-axis, see Fig. 2(a)]: a layer with the dielectric constant \(\varepsilon_{1}\) and thickness \(d_{1}\) (which occupies the spatial domains \(nD<z<nD+d_{1}\)) and a layer with dielectric constant \(\varepsilon_{2}\) and thickness \(d_{2}=D-d_{1}\) [occupying the spatial domains \(nD+d_{1}<z<(n+1)D\)]. Here \(n\) stands for the PC cell's number. We also suppose that the TMD layer is placed at the boundary between two dielectrics (plane \(z=0\)). If the EM field is uniform in the \(x\)-direction (\(\partial/\partial x\equiv 0\)), Maxwell's equations for a TE-polarized wave (containing components \(E_{x}\), \(H_{y}\), and \(H_{z}\)) read: \[ik_{y}H_{z}^{(m,n)}-\frac{\partial H_{y}^{(m,n)}}{\partial z}=-i\kappa\varepsilon_{m}E_{x}^{(m,n)}, \tag{3}\] \[\frac{\partial E_{x}^{(m,n)}}{\partial z}=i\kappa H_{y}^{(m,n)},\quad-ik_{y}E_{x}^{(m,n)}=i\kappa H_{z}^{(m,n)}\,. \tag{4}\] Here the spatial and temporal dependence of the EM field of the form \(\propto\exp\left(ik_{y}y-i\omega t\right)\) is assumed. Notice that the composite superscript \((m,n)\) in Eqs. (3)-(4) is prescribed to the EM field components defined in the \(n\)-th unit cell of the PC, in its part filled with the dielectric \(\varepsilon_{m}\). After solving Maxwell's equations in each slab and applying boundary conditions (see Supplemental Document for details), it is possible to relate the tangential components of the field across the PC period via the unit cell transfer matrix \(\hat{T}_{12}\), i.e., \[\left(\begin{array}{c}H_{y}^{(1,n+1)}\left(\left(n+1\right)D\right)\\ E_{x}^{(1,n+1)}\left(\left(n+1\right)D\right)\end{array}\right)=\hat{T}_{12}\left(\begin{array}{c}H_{y}^{(1,n)}\left(nD\right)\\ E_{x}^{(1,n)}\left(nD\right)\end{array}\right). \tag{5}\] The dispersion relation of electromagnetic waves in a _perfect infinite PC_ can be obtained by applying the Bloch theorem. It can be represented in terms of eigenvalues, \(\lambda_{\pm}\), and eigenvectors, \(\left(h_{y}^{(\pm)}\quad e_{x}^{(\pm)}\right)^{T}\), of the matrix \(\hat{T}_{12}\) (see Eq. (S2) of Supplemental Document). Namely, the following equation holds: \[\exp\left(iq_{\pm}D\right)=\lambda_{\pm}\,. \tag{6}\] The eigenvalues, \(\lambda_{+}\) and \(\lambda_{-}\), determine the dispersion relations for forward- and backward-propagating waves, respectively. Moreover, as the matrix \(\hat{T}_{12}\) is unimodular, the pair of eigenvalues possesses the property \(\lambda_{+}\lambda_{-}=1\), which implies the relation \(q_{-}=-q_{+}\). 
Figure 2: (a) Schematic representation of a photonic crystal with embedded TMD layer; (b,c) Relation between the frequency \(\omega\) and the Bloch wavevector of a forward-propagating wave, \(q_{+}\), for two values of \(k_{y}\). Red solid (dashed blue) lines correspond to real (imaginary) parts of \(q_{+}\). (d) Frequencies of allowed bands' edges (solid green lines) plotted against in-plane wavevector, \(k_{y}\), and two light lines depicted by orange dashes. The domains corresponding to allowed bands are shadowed green. The following PC parameters were used: \(\varepsilon_{1}=2\), \(d_{1}=140\) nm, \(\varepsilon_{2}=12\), \(d_{2}=70\) nm.

The Bloch wavevector can be either real or imaginary (if damping is neglected), which depends on whether the chosen pair of \(\omega\) and \(k_{y}\) belongs to the allowed or forbidden (also called stop-band) part of the PC's spectrum. Namely, inside the allowed bands [green-shadowed domains in Figs. 2(b) and 2(c)], the Bloch wavevector of forward-propagating wave is purely real and positive, i.e., \(\mathrm{Re}\left(q_{+}\right)\geq 0\), \(\mathrm{Im}\left(q_{+}\right)=0\). In contrast, inside the stop-bands [white domains in Figs. 2(b) and 2(c)], the Bloch wavevector can be treated as a complex value with the real part \(\mathrm{Re}\left(q_{+}\right)=0,\,\pi/D\) and a positive imaginary part \(\mathrm{Im}\left(q_{+}\right)>0\). The latter implies the evanescent character of the forward-propagating wave inside the stop-band and its decrease in the positive direction of the \(z\)-axis. Accordingly, \(\mathrm{Im}\left(q_{-}\right)<0\) for a backward-propagating wave inside the stop-band, i.e. its amplitude decreases in the negative direction of the \(z\)-axis. Let us now turn to a _PC intercalated by a TMD layer_. The fields in each part of it, at planes \(z=nD\) with \(n\) standing for an integer, either positive or negative, can be represented as follows (see Eq. (S3) of Supplemental Document): \[\left(\begin{array}{c}H_{y}^{\left(2,n-1\right)}\left(nD\right)\\ E_{x}^{\left(2,n-1\right)}\left(nD\right)\end{array}\right)=A_{-}\left(\begin{array}{c}h_{y}^{\left(-\right)}\\ e_{x}^{\left(-\right)}\end{array}\right)\exp\left(iq_{-}nD\right),\,n\leq 0,\] \[\left(\begin{array}{c}H_{y}^{\left(1,n\right)}\left(nD\right)\\ E_{x}^{\left(1,n\right)}\left(nD\right)\end{array}\right)=A_{+}\left(\begin{array}{c}h_{y}^{\left(+\right)}\\ e_{x}^{\left(+\right)}\end{array}\right)\exp\left(iq_{+}nD\right),\,n\geq 0.\] Such a representation avoids the appearance of waves exponentially growing towards \(z=\pm\infty\).

Figure 3: (a) Dispersion curves (solid blue and pink lines) and (b)–(d) spatial profiles of EM field components \(H_{y}(z)\) and \(E_{x}(z)\) (depicted by blue and red lines, respectively) corresponding to the localized EP eigenstate in the PC with TMD. In (a), allowed bands, their edges and light lines are depicted as in Fig. 2, while the regions of positive and negative \(\mathrm{Im}\sigma_{TMD}\) are shown as in Fig. 1. Panels (b)–(d): spatial profiles of the EM field components corresponding to the parameters (\(\omega\) and \(k_{y}\)) marked by filled circles in panel (a). The A (B) and A' (B') modes are explained in the text. 
Substituting these equations into boundary conditions across the TMD layer, \[\left(\begin{array}{c}H_{y}^{\left(1,0\right)}\left(0\right)\\ E_{x}^{\left(1,0\right)}\left(0\right)\end{array}\right)=\hat{B}\left(\begin{array}{c}H_{y}^{\left(2,-1\right)}\left(0\right)\\ E_{x}^{\left(2,-1\right)}\left(0\right)\end{array}\right), \tag{7}\] where \[\hat{B}=\left(\begin{array}{cc}1&-\frac{4\pi}{c}\sigma_{TMD}\left(\omega\right)\\ 0&1\end{array}\right)\] is the boundary condition matrix, it is possible to obtain the following (implicit) dispersion relation \[\frac{h_{y}^{\left(+\right)}}{e_{x}^{\left(+\right)}}-\frac{h_{y}^{\left(-\right)}}{e_{x}^{\left(-\right)}}+\frac{4\pi}{c}\sigma_{TMD}\left(\omega\right)=0 \tag{8}\] for the localized eigenstate supported by the TMD embedded into the PC (compare to Eq. (2)). The spectrum of the perfectly periodic PC [see Fig. 2(d)] contains a low-frequency stop-band, which vanishes at \(k_{y}=0\) (we shall call it "zeroth-order"), and higher order stop-bands whose width remains finite at \(k_{y}=0\). This fact is crucial for the existence of two different types of localized eigenstates supported by the inserted TMD layer, whose spectra are shown in Fig. 3(a). For the particular parameters of Fig. 3(a), the spectrum contains four distinct modes: (i) two (type A and type B) in the zeroth stop-band, and (ii) two (type A' and type B') within the first stop-band. The properties of the A and B modes are similar to those of the surface EPs supported by a TMD cladded by two semi-infinite dielectrics, described by Eq. (2) and briefly discussed in the Introduction [see Fig. 1(b)]. Yet, there is one distinctive feature: in the PC, the A and B modes bifurcate from the edge of the first allowed band and can exist on the left of the less steep light line, \(\omega=ck_{y}/\sqrt{\max\left(\varepsilon_{1},\varepsilon_{2}\right)}\). The modes inside the first stop-band, named A' and B', have remarkably different properties. They do not approach asymptotically the exciton transition frequencies but rather cross them. Moreover, for large \(k_{y}\) they occur in the frequency ranges where the imaginary part of the TMD's conductivity is _positive_ (shadowed in Fig. 3(a)). In the frequency range far from \(\omega_{A}\) and \(\omega_{B}\), \(\mathrm{Im}\sigma_{TMD}\) is small and, as a result, the localized eigenstate appears close to the edge of the allowed band of the PC. At the same time, the spatial profile is rather weakly localized in the vicinity of the TMD layer (actually, the same happens to the "usual" surface EPs; examples are shown in Fig. 3(b) and 3(d) for the type B' and type A modes, respectively). In contrast, for the frequencies near \(\omega_{A}\) and \(\omega_{B}\), the absolute value of the TMD's conductivity is high. As a consequence, the localized eigenstate lies deeply inside the stop-band and its spatial profile is strongly localized [see Fig. 3(c)]. The A' and B' modes, lying within the gap of the PC spectrum, possess another interesting property. Their dispersion curves lie (partially) on the left of the steepest of the two light lines of the dielectrics constituting the PC. Both facts are characteristic of the lossless optical Tamm states (OTS), first described for a heterostructure of two semi-infinite PCs [21]. If the two halves of the structure were identical, there would be no OTS unless the full translation symmetry were broken in some other way, e.g. by introducing the TMD layer. 
Introducing the TMD layer also leads to the coupling between the optical mode and the exciton and to their anti-crossing, as described for a conventional quantum well [31] and also for 2D semiconductors in a structure where one of the BRs was replaced by a metallic mirror [13]. The "primed" modes of Fig. 3 are weakly localized OTS-type modes with _real_ Bloch wavevector and the frequency in the vicinity of the PC band edge. They become increasingly excitonic in nature as \(k_{y}\) increases. We shall demonstrate this in the next section. ## 3 Diffraction of light on a TMD-intercalated PC As discussed above, the localized eigenstates A' and B' lie on the left of the line \(\omega=ck_{y}/\sqrt{\min\left(\varepsilon_{1},\varepsilon_{2}\right)}\) and even of the vacuum light line in some frequency range. It means that these modes can be coupled to external propagating EM waves, i.e., excited directly by light without using a prism or a grating. Let us consider a PC with a finite (and relatively small) number of periods, hosting the TMD layer. An example of such a structure is shown in Fig. 4(a): a truncated PC containing \(2N\) elementary cells with a TMD layer in the middle (\(N\) cells before and \(N\) cells after it). Light with frequency \(\omega\) falls on the surface of the truncated PC at an angle of incidence \(\theta\), which determines the transverse wavevector component \(k_{y}\). The amplitudes of the incident, \(E_{x}^{\left(i\right)}\), reflected, \(E_{x}^{\left(r\right)}\), and transmitted, \(E_{x}^{\left(t\right)}\), waves in such a truncated PC with the TMD layer inside can be related via the transfer-matrix of the whole structure, \(\hat{T}_{tot}\) (see Supplemental Document for details), namely: \[\left(\begin{array}{c}E_{x}^{(t)}\\ 0\end{array}\right)=\hat{T}_{tot}\left(\begin{array}{c}E_{x}^{(i)}\\ E_{x}^{(r)}\end{array}\right)\,. \tag{9}\] From this equation, the amplitudes of the reflected and transmitted waves can be expressed through the matrix elements of \(\hat{T}_{tot}\) as follows: \[E_{x}^{(r)}=-\frac{\left(\hat{T}_{tot}\right)_{21}}{\left(\hat{T}_{tot} \right)_{22}}E_{x}^{(i)},\hskip 28.452756ptE_{x}^{(t)}=\frac{1}{\left(\hat{T}_{ tot}\right)_{22}}E_{x}^{(i)}. \tag{10}\] The absorbance (\(A\)) of the structure can be calculated as the difference between the Poynting vectors' \(z\)-components corresponding to the incident, reflected and transmitted waves, \[A=1-\frac{\left|\left(\hat{T}_{tot}\right)_{21}\right|^{2}+1}{\left|\left( \hat{T}_{tot}\right)_{22}\right|^{2}}, \tag{11}\] and is depicted in Figs. 4(b) and 4(c). In these plots, the maximal absorption points on the \((\omega,\,k_{y})\) plane reveal the characteristic anti-crossings between the horizontal lines representing the A and B excitons and the OTS dispersion curve (accompanying the edge of the PC stop-band). They coincide with the dispersion curves of the A' and B' modes shown by white lines. The small discrepancy between them can be attributed to the truncation of the photonic crystal [compare Figs. 4(c) and 4(b), calculated for different \(N\)]. The intensity of the excitonic absorption on the left of the avoided crossing point is also modulated owing to the small number of periods in the PCs. The localized eigenstates are clearly observed in the absorption spectra and can be probed by means of angle-resolved spectroscopy [13, 31] if the frequency and transverse wavevector matching conditions are fulfilled.
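As a concrete companion to Eqs. (10) and (11), the short sketch below (our own illustration, not code from the paper) computes the absorbance from a given \(\hat{T}_{tot}\), assumed here to be expressed in the incident/reflected plane-wave amplitude basis with the same medium on both sides of the structure; the helper for Gaussian-distributed layer thicknesses anticipates the disorder study discussed below.

```python
import numpy as np

def absorbance(T_tot):
    """Absorbance A = 1 - R - T from the total transfer matrix, Eqs. (10)-(11)."""
    R = abs(T_tot[1, 0] / T_tot[1, 1]) ** 2   # reflectance  |E_x^(r) / E_x^(i)|^2
    T = abs(1.0 / T_tot[1, 1]) ** 2           # transmittance |E_x^(t) / E_x^(i)|^2
    return 1.0 - R - T

def random_thicknesses(d_mean, sigma_rel, n, rng=None):
    """Gaussian-distributed layer thicknesses with mean d_mean and relative
    standard deviation sigma_rel, for modeling thickness disorder."""
    rng = np.random.default_rng() if rng is None else rng
    return d_mean * (1.0 + sigma_rel * rng.standard_normal(n))
```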
When the incident light couples to the A' or B' eigenmodes, the incident energy is transferred into these modes and is finally dissipated through the exciton in the TMD layer. How robust are the reported phenomena against the factors that distinguish real-world structures from theoretical models, such as fluctuations of the thicknesses of the dielectric layers constituting the PC? We addressed this question by calculating the absorbance of TMD-intercalated PCs with Gaussian-distributed random thicknesses of the layers, \(d_{1}\) and \(d_{2}\), with mean values \(\overline{d_{1}}\) and \(\overline{d_{2}}\) equal to those of the perfect structure and standard deviations \(\sigma_{1}\) and \(\sigma_{2}\) (see Sec. 2A of Supplemental Document for details). These results are shown in Figs. 4(c) and 4(e), demonstrating that the absorbance of the disordered structure (green lines) is decreased compared to the truncated perfect PC (blue lines) for the A' mode. At the same time, the modes' anti-crossing is still visible for \(\sigma_{1}/\overline{d_{1}}=\sigma_{2}/\overline{d_{2}}=2.5\%\). In the Supplemental Document, we present maps similar to panels (b) and (d) of Fig. 4, calculated for stronger disorder (\(\sigma/\overline{d}=10\%\)), which show that the avoided crossing cannot be resolved anymore. We also notice that it cannot be resolved for the B' mode already for small fluctuations of the layers' thicknesses. Somewhat unexpectedly, the intensity of this mode is enhanced by the disorder. ## 4 Conclusions We predicted theoretically the existence of two types of localized exciton-polariton eigenstates in a 1D photonic crystal with an embedded TMD layer. If the excitonic transition frequencies of the TMD layer lie within the first stop-band of the PC for zero transverse wavevector, the spectrum of localized EP states consists of: (i) two (A and B) modes in the zeroth-order stop-band, and (ii) two (A' and B') in the first stop-band. The properties of the modes (i) are similar to those of the surface EPs supported by a single TMD layer cladded by two semi-infinite dielectrics, while the modes (ii) are related to the optical Tamm states [21, 22, 31], although the analogy is only partial in both cases. The A' and B' modes result from the coupling of the TMD exciton to the Tamm-type state of light in the photonic crystal with the translational symmetry broken by the inserted TMD layer. Coupling of such a photonic state localized by two reflectors to elementary excitations in a 2D material has recently been described for the case of graphene plasmons [32]. The Tamm-type modes predicted here can be effectively coupled to propagating light waves falling directly on the surface of a PC with a relatively small number of unit cells. In the presented simulated example [Fig. 4(c)] we demonstrated the feasibility of excitation of such a localized eigenstate in an experimentally attainable structure with the dielectric constants \(\varepsilon_{1}=2.13\) and \(\varepsilon_{2}=4.04\), corresponding to a SiO\({}_{2}\)/Si\({}_{3}\)N\({}_{4}\) BR used in Ref. [33] (these and other inexpensive dielectrics are commonly used for making good-quality Bragg reflectors [34]). Methods similar to the suggested experiment have already been used [24, 31, 33] for the demonstration of optical Tamm states, even though one of the BRs was replaced by another type of mirror, such as a metal or an organic dye film.
Our simulations for the structure with natural disorder (fluctuations of the layers' thicknesses) demonstrate that the characteristic avoided crossing can be observed for the A' mode if the relative dispersion of the thicknesses is below \(\approx 5\%\). Therefore, we hope that this article may stimulate experiments aimed at observing this new type of localized exciton-polaritons.

Figure 4: (a) Schematics of a truncated PC, intercalated with a TMD layer. The arrows show the incident, reflected and transmitted beams; (b,d) Absorbance (depicted by color map) _versus_ frequency and in-plane wavevector \(k_{y}=(\omega/c)\sin\theta\) for the structure shown in (a), containing \(N=5\) (b) or \(N=15\) (d) unit cells. Other parameters of panel (b) are the same as in Figs. 2 and 3, while for panel (d) \(\varepsilon_{1}=2.13\), \(d_{1}=80\) nm, \(\varepsilon_{2}=4\), \(d_{2}=130\) nm. Dispersion curves for localized eigenstates in an infinite crystal with the same parameters are depicted by white lines. In all cases \(\gamma_{A}=\gamma_{B}=4\) meV. Panels (c) and (e) show the absorbance plotted against \(\omega\) for fixed values of \(k_{y}\) [\(k_{y}=6\,\mu\mathrm{m}^{-1}\) for \(N=5\) in (c) and \(k_{y}=6.5\,\mu\mathrm{m}^{-1}\) for \(N=15\) in (e)], for the truncated perfect PC (blue lines) and for the structure with layers' thickness disorder (green lines). Relative dispersion of layers' thicknesses for the latter is \(\sigma_{1}/\overline{d_{1}}=\sigma_{2}/\overline{d_{2}}=0.025\).

## Acknowledgements Y.V.B., N.M.R.P., and M.I.V. acknowledge support from the European Commission through the project "Graphene-Driven Revolutions in ICT and Beyond" - Core 3 (Ref. No. 881603) and the Portuguese Foundation for Science and Technology (FCT) in the framework of the Strategic Funding UIDB/04650/2020. The authors also acknowledge FEDER and FCT for support through projects POCI-01-0145-FEDER-028114 and PTDC/FIS-MAC/28887/2017. The authors declare no conflicts of interest.
2310.03760
Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition
The extensive ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up the possibilities for implementing sensor-based activity recognition. As opposed to traditional sensor time-series processing and hand-engineered feature extraction, in light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition, outperforming the traditional signal processing and traditional machine learning approaches. In this work, by performing extensive experimental studies on two human activity recognition datasets, we investigate the performance of common deep learning and machine learning approaches as well as different training mechanisms (such as contrastive learning), and various feature representations extracted from the sensor time-series data and measure their effectiveness for the human activity recognition task.
Danial Ahangarani, Mohammad Shirazi, Navid Ashraf
2023-09-26T14:55:32Z
http://arxiv.org/abs/2310.03760v1
Investigating Deep Neural Network Architecture and Feature Extraction Designs for Sensor-based Human Activity Recognition ###### Abstract The extensive ubiquitous availability of sensors in smart devices and the Internet of Things (IoT) has opened up the possibilities for implementing sensor-based activity recognition. As opposed to traditional sensor time-series processing and hand-engineered feature extraction, in light of deep learning's proven effectiveness across various domains, numerous deep methods have been explored to tackle the challenges in activity recognition, outperforming the traditional signal processing and traditional machine learning approaches. In this work, by performing extensive experimental studies on two human activity recognition datasets, we investigate the performance of common deep learning and machine learning approaches as well as different training mechanisms (such as contrastive learning), and various feature representations extracted from the sensor time-series data and measure their effectiveness for the human activity recognition task. _Keywords: human activity recognition; deep learning; contrastive learning; sensors; pretraining_ ## I Introduction The recent advancements in human activity recognition have given rise to a wide range of applications, which include smart homes [1], efficient manufacturing environments [4, 5], and patient activity monitoring for healthcare applications [3]. Activity recognition plays a crucial role in human life by capturing people's behaviors through data, enabling computing systems to monitor, analyze, and assist them in their daily activities. Due to the availability of various sensors such as accelerometers and gyroscopes (i.e., inertial measurement units or IMUs) in most off-the-shelf smart devices, recent approaches for human activity recognition have relied on such sensors [7], which, as opposed to video-based approaches [6], introduce fewer privacy issues. Earlier works on human activity recognition leveraged signal processing techniques [8] and hand-engineered feature extraction methods [9]. Furthermore, traditional machine learning methods have also been widely adopted for human activity recognition in prior works. However, recent works have proposed various deep learning-based architectures that outperform the aforementioned works by extracting more complicated features from the input time-series data [2, 10, 11, 24, 13, 23, 22]. Considering the prior research on human activity recognition, we briefly summarize the involved challenges as follows:

1. **Deep Model Architecture Design**: There exists a wide range of complex deep learning architectures (such as feed-forward, convolutional [13], recurrent [14], residual [12], etc.). As each architecture has its own benefits and disadvantages, designing a model architecture that performs well for all human activity recognition datasets is challenging.
2. **Effective Time-Series Feature Extraction:** Prior works often consider time-series features to identify different activities. However, as shown in [2, 10], spectral or statistical features could also serve as additional inputs to enhance the model's capabilities for more accurate human activity recognition. Therefore, there is a need to investigate the performance of different models given various types of features extracted from the sensor data to provide a clear understanding of their effectiveness.
3. **Efficient Model Training Mechanism:** Common human activity recognition approaches rely on traditional classification model training through the cross-entropy loss function. However, there exist other pretraining techniques, including contrastive learning [15] or the triplet loss [16], that could further push the limits of the human activity model and generate better results.

In this work, we aim to perform extensive experimental studies on two human activity recognition datasets to measure the effectiveness of common deep learning architectures, feature extraction methods, and model learning techniques for human activity recognition. The rest of this paper is organized as follows. We first review the related work in Sec. II, provide the details of the datasets and the preprocessing steps in Sec. III, followed by the feature extraction and problem statement in Sec. IV. Then, we explore the studied model architectures and different learning mechanisms in Sec. V. We then present our experimental studies in Sec. VI and conclude the paper in Sec. VII. ## II Related Work Recent works on human activity recognition have focused on machine learning and deep neural networks due to their high accuracy on complicated tasks compared to hand-engineered approaches [8, 9]. For instance, the proposed method in [22] used long short-term memory (LSTM) layers to extract the temporal information in the sensor time-series. Similarly, [21] added the attention mechanism on top of the LSTM layers to enhance the extraction of important features. Moreover, the model proposed in [13] is based on a 1-dimensional convolutional neural network that extracts the temporal information from the sensor data in a more efficient way. The proposed model in [18] leverages two LSTM layers to process the time-series data in two directions to enhance the temporal information extraction of the model. The authors of [14] leveraged residual connections to augment the training of the human activity recognition model. Furthermore, to improve the training quality of the model, the proposed method in [2] incorporates the contrastive learning loss function [15] in addition to the commonly used cross-entropy loss function, which enhances the representation learning of the model. ## III Datasets, Data Preprocessing, and Feature Extraction ### _Datasets_ We briefly summarize the datasets studied as follows. **Dataset 1** (DS1): The first dataset studied was collected by [17]; it consists of 7,498 records covering six human activity classes: going downstairs, walking upstairs, jogging, standing, walking, and sitting. This dataset contains the time-series data collected from the accelerometer sensor by 36 different users. **Dataset 2** (DS2): The second dataset studied is provided by [11]; it has a total of 39,168 records and consists of the following human activity classes: walking, bike riding, going upstairs, going downstairs, jogging, and riding in a bus/taxi. This dataset contains the time-series data collected by 48 different users from both the accelerometer and gyroscope sensors. ### _Data Preprocessing_ Having the recorded time-series data of the accelerometer and gyroscope sensors along the x, y, and z axes, with length \(L\), i.e., in \(\mathbb{R}^{L}\), we perform the following pre-processing steps to prepare them for model processing.
**Segmentation:** Given the time-series data, we divide them into multiple segments with a sliding window of size \(S\) (150 in this study), where each window has a 70% overlap with the previous window. **Noise Filtering:** We use the moving-average method to filter the noise caused by the vibrations that occurred while recording the sensor data. Specifically, we slide a window of size \(M=10\) and calculate the average of all the values within the window to eliminate the noise. **Normalization:** Finally, since the scale of the values varies across different sensors, we leverage min-max normalization to normalize each sensor axis (e.g., the accelerometer along the y-axis) to have values in the [0,1] interval. ## IV Feature Extraction and Problem Statement ### _Feature Extraction_ We extract three different types of features from the time-series segments, as described below. **Temporal Features:** The most widely studied features for human activity recognition are the temporal features [11, 2, 18]. Basically, each resultant segment after the pre-processing is considered as the temporal features, which are then commonly processed by recurrent neural networks to extract the temporal information within them. Since here we are focusing on the two accelerometer and gyroscope sensors, each producing values along the three x, y, and z axes, the temporal features for DS2 have a dimension of \(\mathbb{R}^{S\times 6}\), where \(S\) is the size of the time-series segment as stated earlier. The temporal features have a dimension of \(\mathbb{R}^{S\times 3}\) for DS1, as it only contains the sensor data for the three axes of the accelerometer. **Statistical Features:** Rather than processing the time-series segments directly, we can apply statistical functions (such as minimum, maximum, average, standard deviation, etc.) on each axis of each of the accelerometer and gyroscope sensors to extract statistical features. Such features have been shown by [10] to be highly effective for similar sensor time-series classification tasks. In this work, we consider the four functions minimum, maximum, average, and standard deviation to extract statistical features from each of the 6 axes of the accelerometer and gyroscope sensors. Thus, the resultant statistical features have a dimension in \(\mathbb{R}^{24}\) for DS2 and \(\mathbb{R}^{12}\) for DS1. **Spectral Features:** Finally, to capture more complicated patterns and extract far more advanced features, recent studies [2, 10], inspired by audio-processing feature extraction methods [19], have proposed to extract spectral features from the time-series segments. Specifically, the continuous wavelet transform (CWT) is applied on each sensor axis with different scales and the resultant features are combined to create multi-dimensional features. In this work, we leverage the Morlet wavelet function with 50 different scales to apply the CWT on the time-series segment [20]. Therefore, the spectral features have their dimension in \(\mathbb{R}^{50\times S\times 6}\) for DS2 and \(\mathbb{R}^{50\times S\times 3}\) for DS1, where \(S\) is the length of the segment as before. To better demonstrate the extracted features, we visualize the temporal features and their corresponding spectral features for the accelerometer time-series values along the x, y, and z axes of an example record belonging to the jogging human activity class in Fig. 1.

Fig. 1: Visualization of the temporal and spectral features for one example from the jogging human activity class.
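To illustrate, a minimal NumPy/PyWavelets sketch of the pre-processing and feature-extraction pipeline described above may look as follows (the function names and the small numerical guard in the normalization are our own; the paper does not publish code):

```python
import numpy as np
import pywt  # PyWavelets, for the continuous wavelet transform

def preprocess_and_segment(series, S=150, overlap=0.7, M=10):
    """Moving-average denoising, per-axis min-max scaling, and sliding-window
    segmentation with 70% overlap; `series` has shape (T, n_axes)."""
    kernel = np.ones(M) / M
    smoothed = np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode='same'), 0, series)
    lo, hi = smoothed.min(axis=0), smoothed.max(axis=0)
    normed = (smoothed - lo) / (hi - lo + 1e-12)   # values in [0, 1]
    step = max(1, int(S * (1 - overlap)))
    return np.stack([normed[i:i + S]
                     for i in range(0, len(normed) - S + 1, step)])

def statistical_features(segment):
    """Min, max, mean, std per axis -> R^{4*n_axes} (R^24 for DS2, R^12 for DS1)."""
    return np.concatenate([segment.min(axis=0), segment.max(axis=0),
                           segment.mean(axis=0), segment.std(axis=0)])

def spectral_features(segment, n_scales=50):
    """Morlet CWT of each axis, stacked to an array of shape (n_scales, S, n_axes)."""
    scales = np.arange(1, n_scales + 1)
    coeffs = [pywt.cwt(segment[:, ax], scales, 'morl')[0]  # (n_scales, S)
              for ax in range(segment.shape[1])]
    return np.stack(coeffs, axis=-1)
```

The temporal features are then simply the pre-processed segments themselves.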
As Fig. 1 shows, jogging generally exhibits a repeated pattern in the accelerometer time-series values due to the nature of this activity. ### _Problem Statement_ Given the sensor data recorded with the accelerometer and gyroscope sensors, the task of the human activity recognition model is to predict the correct human activity class. As stated above, in this work, the studied datasets consist of 6 different human activity classes. ## V Model Architectures and Training Mechanisms In this section, we provide the details of the deep model architectures and the training mechanisms explored for the experimental studies. ### _Model Architectures_ Considering the fact that the temporal and spectral features can be processed by either recurrent or convolutional neural networks, we have adopted various model architectures from the literature [2, 10, 11, 12, 13, 14] for our experiments. Besides, since traditional machine learning models are also widely studied for human activity recognition, we consider the most common ones as baselines. **Traditional Machine Learning Models:** We study support vector machine (SVM), K-nearest neighbors (KNN), gradient boosting decision tree (GBDT), logistic regression (LR), decision tree (DT), random forest (RF), AdaBoost, Gaussian Naive Bayes (GaussianNB), and multi-layer perceptron (MLP) as the most commonly used machine learning models for human activity recognition. The input to these models is the statistical features. For SVM, we use the linear kernel function. For KNN, we set the number of neighbors to 5. For RF, we set the number of estimators to 100. Finally, for MLP, we use two hidden layers. **ResNet**[14]: We adapt the residual connection proposed in [24] to design a network based on convolutional neural networks. Specifically, we use 4 residual blocks, each having two convolutional and two residual layers. The input to this model is the spectral features. **Transformers**[23]: Recently, transformers [23] have been shown to be very effective in various domains. Thus, we have designed a neural network architecture based on transformers that processes the temporal features to identify different human activities. For this model, we leverage two transformer layers, each having 8 heads. **LSTM**[13, 22]: Following [13, 22], we design a network based on long short-term memory layers that processes the temporal features. We leverage one LSTM layer with 64 hidden units for this model. **BiLSTM**[18]: To better capture the temporal information, we process it in both the forward and backward directions and leverage the combination of the features extracted from both directions to classify the human activities. We leverage 2 BiLSTM layers, each having 64 hidden units, for this model. **LSTM-Attention**[21]: We augment the LSTM network stated previously with the attention mechanism to measure the effects of such designs for human activity recognition. For this model, we leverage 2 LSTM layers with the attention mechanism, each having 64 hidden units. **CNN1D**[13]: Recurrent neural networks are often slow and involve high computational overheads. Thus, we have designed a network architecture based on 1-dimensional convolutional neural networks (CNN1D) to process the temporal features for human activity recognition. For this model, we stack two 1-dimensional convolutional layers with 64 and 32 filters, respectively. Besides, we set the kernel size of the convolutional layers to 3.
**MRNet**[2, 10]: Inspired by prior studies, the combination of all the temporal, statistical, and spectral features could be effective for higher classification accuracy. Therefore, we have adapted the network proposed in [2, 10] that first processes the temporal, statistical, and spectral features with sub-networks based on recurrent, fully connected, and convolutional neural networks. Then, we concatenate the outputs of all three sub-networks and use them to predict the human activity class. For all the models above, we use the rectified linear unit (ReLU) function as the activation function. Besides, we add three fully connected layers with 256, 128, and 6 hidden units as the final layers to perform the human activity classification over a total of 6 different classes. Similarly, all the classification layers leverage the ReLU activation function, while the last layer uses the softmax function to generate the probability values for each human activity class. ### _Training Mechanisms_ **Cross Entropy Classification Loss:** The most commonly used loss function for model training is the cross-entropy loss, formulated as follows: \[\ell_{CE}=-\sum_{i=1}^{Z}p_{i}\log(\hat{p}_{i})\,, \tag{1}\] where \(p_{i}\) is the ground-truth probability of class \(i\) (one for the correct human activity class and zero otherwise), \(\hat{p}_{i}\) is the probability of class \(i\) generated by the model, and \(Z=6\) is the total number of classes in this study. **Supervised Contrastive Learning Loss:** Contrastive learning has been recently adopted for model pretraining in various tasks [15]. Here we adopt the supervised variant of contrastive learning, which leverages the label information from the dataset to generate distinguishable embeddings for each human activity class. The supervised contrastive learning loss is formulated as follows: \[\ell_{CL}=\sum_{i\in Q}\frac{-1}{|A(i)|}\sum_{a\in A(i)}\log\frac{\exp( \mathbf{e}_{i}\cdot\mathbf{e}_{a}/\tau)}{\sum_{b\in B(i)}\exp(\mathbf{e}_{i} \cdot\mathbf{e}_{b}/\tau)}\,, \tag{2}\] where \(Q\) is the set of all the data records, \(\mathbf{e}_{i}\) is the embedding of the \(i\)-th data record, \(\tau\) is the temperature parameter, \(A(i)\) is the set of all the other data records with the same class as the \(i\)-th record, and \(B(i)\) is the set of all records other than the \(i\)-th one. **Triplet Loss:** Similarly, the triplet loss [16] aims to generate similar embeddings for data records belonging to the same class. The triplet loss is formulated as follows: \[\ell_{TL}=\max(d(\mathbf{e}_{a},\mathbf{e}_{p})-d(\mathbf{e}_{a},\mathbf{e}_ {n})+m,0)\,, \tag{3}\] where \(\mathbf{e}_{a}\), \(\mathbf{e}_{p}\), \(\mathbf{e}_{n}\) are the embeddings of the anchor, positive (same class as the anchor), and negative (different class from the anchor) samples, respectively, and \(m\) is the margin controlling the distance between the embeddings. Besides, \(d(\cdot)\) represents a distance function such as the Euclidean distance. Based on the above, we train different models using the cross-entropy loss without pretraining. Alternatively, we can first pretrain the model based on either the contrastive or the triplet loss function, and then continue the training based on the cross-entropy loss function. ## VI Experimental Studies In this section, we first review the parameter settings, and then present and discuss the experimental studies.
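Before doing so, we give a minimal PyTorch sketch (our own illustration, not code from the paper) of the two pretraining objectives of Eqs. (2) and (3); batch construction and the anchor/positive/negative sampling strategy are omitted, and both losses are averaged over the batch:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(emb, labels, tau=0.07):
    """Supervised contrastive loss of Eq. (2); emb: (B, dim), labels: (B,)."""
    emb = F.normalize(emb, dim=1)                    # work with unit-norm embeddings
    sim = emb @ emb.T / tau                          # pairwise e_i . e_b / tau
    self_mask = torch.eye(len(emb), dtype=torch.bool, device=emb.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude i from the denominator B(i)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask   # positives A(i)
    per_pos = log_prob.masked_fill(~pos, 0.0)        # keep only positive pairs
    n_pos = pos.sum(dim=1).clamp(min=1)              # avoid division by zero
    return -(per_pos.sum(dim=1) / n_pos).mean()

def triplet_loss(e_a, e_p, e_n, m=1.0):
    """Triplet loss of Eq. (3) with the Euclidean distance."""
    d_ap = (e_a - e_p).norm(dim=1)
    d_an = (e_a - e_n).norm(dim=1)
    return torch.clamp(d_ap - d_an + m, min=0).mean()
```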
**Parameters:** For all the models, we use the Adam optimizer with a learning rate of 0.001. We train the models using the cross-entropy loss for 50 iterations. For pretraining, we set the number of iterations to 10. Besides, we set the temperature parameter of the contrastive learning to \(\tau=0.07\). Moreover, we use 70% of the data for training, 10% for validation, and 20% for evaluation. **Performance Results:** We first train the models based on the cross-entropy loss function on DS1 and DS2 and report the results in Table 1.

\begin{table} \begin{tabular}{|l|c|c|} \hline Model & DS1 & DS2 \\ \hline SVM & 0.779 & 0.569 \\ \hline KNN & 0.935 & 0.798 \\ \hline GBDT & 0.892 & 0.784 \\ \hline LR & 0.763 & 0.555 \\ \hline DT & 0.874 & 0.759 \\ \hline RF & 0.929 & 0.850 \\ \hline AdaBoost & 0.446 & 0.683 \\ \hline GaussianNB & 0.781 & 0.538 \\ \hline MLP & 0.775 & 0.603 \\ \hline ResNet & 0.954 & 0.535 \\ \hline Transformers & 0.878 & 0.840 \\ \hline LSTM & 0.953 & 0.873 \\ \hline BiLSTM & 0.954 & 0.874 \\ \hline LSTMAttention & 0.931 & 0.870 \\ \hline CNN1D & 0.939 & 0.828 \\ \hline MRNet & 0.970 & 0.552 \\ \hline \end{tabular} \end{table} TABLE 1: CROSS-ENTROPY ACCURACY

We can see that high accuracy on DS1 is achieved by the ResNet and LSTM models. Although the performance achieved by these two models is very close, we realized that since ResNet takes advantage of the residual connections and is based on lightweight convolutional neural networks, its training procedure was much shorter than LSTM's, as the model converged faster. Furthermore, we can observe that MRNet has outperformed the ResNet and LSTM models by combining the temporal, spectral, and statistical features, which further supports the idea proposed in [2, 10]. On the other hand, we can see that the recurrent neural models such as LSTM and BiLSTM have achieved much higher performance on DS2 compared to the other models, which shows that such models are more suitable for learning on large datasets. Besides, we can observe that the traditional machine learning models such as SVM, AdaBoost, or GaussianNB have generally achieved low performance due to their limited computational capabilities. Next, we perform the experiments by pretraining the models based on the supervised contrastive learning and the triplet loss and show the results in Tables 2 and 3, respectively.

\begin{table} \begin{tabular}{|l|c|c|} \hline Model & DS1 & DS2 \\ \hline ResNet & 0.882 & 0.872 \\ \hline Transformers & 0.852 & 0.813 \\ \hline LSTM & 0.923 & 0.857 \\ \hline BiLSTM & 0.915 & 0.856 \\ \hline LSTMAttention & 0.870 & 0.826 \\ \hline CNN1D & 0.919 & 0.820 \\ \hline MRNet & 0.854 & 0.723 \\ \hline \end{tabular} \end{table} TABLE 2: SUPERVISED CONTRASTIVE LEARNING ACCURACY

We observe no performance improvements on DS1 and an improvement of less than 1% on DS2 for some of the models based on these pretraining methods. In summary, the BiLSTM model with the temporal features as its input and the cross-entropy loss function has the highest accuracy among all the deep learning models. We illustrate the confusion matrices of the BiLSTM model for DS1 and DS2 in Fig. 2, which show the high accuracy of this model for the different human activity classes. ## VII Conclusion In this paper, we investigated the performance of different deep neural network architectures for human activity recognition given the temporal, statistical, and spectral features.
Moreover, we explored different model designs based on residual connections, convolutional and recurrent layers, transformers, attention mechanisms, and traditional machine learning algorithms. Furthermore, we trained the models with three common learning algorithms and compared their performance through experiments on two large-scale human activity recognition datasets. According to our results, the combination of multiple features can lead to performance improvements, while learning algorithms such as contrastive learning or the triplet loss can be less effective depending on the complexity of the dataset.
2309.07465
Finite-time Cosmological Singularities and the Possible Fate of the Universe
Singularities in any physical theory are either remarkable indicators of the unknown underlying fundamental theory, or indicate a change in the description of the physical reality. In General Relativity there are three fundamental kinds of singularities that might occur, firstly the black hole spacelike crushing singularities, e.g. in the Schwarzschild case and two cosmological spacelike singularities appearing in finite-time, namely, the Big Bang singularity and the Big Rip singularity. In the case of black hole and Big Bang singularity, the singularity indicates that the physics is no longer described by the classical gravity theory but some quantum version of gravity is probably needed. The Big Rip is a future singularity which appears in the context of General Relativity due to a phantom scalar field needed to describe the dark energy era. Apart from the Big Rip singularity, a variety of finite-time future singularities, such as, sudden singularity, Big Freeze singularity, generalized sudden singularity, $w$-singularity and so on, are allowed in various class of cosmological models irrespective of their origin. The occurrence of these finite-time singularities has been intensively investigated in the context of a variety of dark energy, modified gravity, and other alternative cosmological theories. These singularities suggest that the current cosmological scenario is probably an approximate version of a fundamental theory yet to be discovered. In this review we provide a concrete overview of the cosmological theories constructed in the context of Einstein's General Relativity and modified gravity theories that may lead to finite-time cosmological singularities. We also discuss various approaches suggested in the literature that could potentially prevent or mitigate finite-time singularities within the cosmological scenarios.
Jaume de Haro, Shin'ichi Nojiri, S. D. Odintsov, V. K. Oikonomou, Supriya Pan
2023-09-14T06:46:37Z
http://arxiv.org/abs/2309.07465v2
# Finite-time Cosmological Singularities and the Possible Fate of the Universe ###### Abstract Singularities in any physical theory are either remarkable indicators of the unknown underlying fundamental theory, or indicate a change in the description of the physical reality. In General Relativity there are three fundamental kinds of singularities that might occur, firstly the black hole spacelike crushing singularities, e.g. in the Schwarzschild case and two cosmological spacelike singularities appearing in finite-time, namely, the Big Bang singularity and the Big Rip singularity. In the case of black hole and Big Bang singularity, the singularity indicates that the physics is no longer described by the classical gravity theory but some quantum version of gravity is probably needed. The Big Rip is a future singularity which appears in the context of General Relativity due to a phantom scalar field needed to describe the dark energy era. Apart from the Big Rip singularity, a variety of finite-time future singularities, such as, sudden singularity, Big Freeze singularity, generalized sudden singularity, \(w\)-singularity and so on, are allowed in various classes of cosmological models irrespective of their origin. The occurrence of these finite-time singularities has been intensively investigated in the context of a variety of dark energy, modified gravity, and other alternative cosmological theories. These singularities suggest that the current cosmological scenario is probably an approximate version of a fundamental theory yet to be discovered. In this review we provide a concrete overview of the cosmological theories constructed in the context of Einstein's General Relativity and modified gravity theories that may lead to finite-time cosmological singularities. We also discuss various approaches suggested in the literature that could potentially prevent or mitigate finite-time singularities within the cosmological scenarios. ###### Contents

* I Introduction
* II Classification of Finite-time Singularities
* II.1 Big Bang: What is it?
* II.2 Finite-time Future Singularities
* II.2.1 Type I Singularity
* II.2.2 Type II Singularity
* II.2.3 Type III Singularity
* II.2.4 Type IV Singularity
* II.2.5 Type V (\(w\)) Singularity
* II.3 The Hamilton-Jacobi approach to cosmological singularities
* III Finite-time Singularities in DE Models
* III.1 Viscous Cosmologies
* III.2 Interacting DE Models
* III.2.1 IDE with constant EoS in DE
* III.2.2 IDE with dynamical EoS in DE
* III.2.3 Some Specific Interaction Models
* IV Finite time singularities in Modified gravity theories
* IV.1 Scalar-tensor Gravity
* IV.2 Brans-Dicke Gravity
* IV.3 The \(k-\)essence Model
* IV.4 Scalar-Einstein-Gauss-Bonnet Gravity
* IV.5 \(F(R)\) Theories of Gravity
* IV.5.1 Occurrence of Singularities in Different Frames: \(F(R)\) Gravity
* IV.5.2 Occurrence of Singularities in Different Frames: Unimodular \(F(R)\) Gravity
* IV.6 \(F(G)\) Gravity
* IV.6.1 Finite-time Singularities
* IV.7 \(F(R,G)\) Gravity
* IV.7.1 Finite-time Singularities
* IV.8 \(F(T)\) Gravity
* IV.8.1 Finite-time Singularities
* IV.9 Non-local Gravity
* IV.9.1 Finite-time Singularities
* IV.10 Non-minimal Maxwell-Einstein Gravity
* IV.11 Semi-classical Gravity
* V Singularities in Braneworld Models
* VI Singularities in Matter Creation Models
* VI.1 Constant Matter Creation Rate
* VI.2 Variable Matter Creation Rate
* VII Singularities in Loop Quantum Cosmology
* VIII Cosmological Finite-time Singularities in Modified Gravity and in Interacting Multifluid Cosmology: Dynamical System
* VIII.A Analysis of Finite-time Singularities in \(F(R)\) Gravity via the Autonomous \(F(R)\) Gravity Dynamical System
* VIII.A.1 Finite-time Singularities of \(F(R)\) Cosmology and its Dynamical System
* VIII.A.2 The Case of the Big Rip Singularity
* VIII.A.3 The Cases of Type III, Type II and Type IV Singularities
* VIII.A.4 The Case of non-vacuum \(F(R)\) Gravity
* VIII.B Finite-time Cosmological and Dynamical Systems Singularities in Interacting Multifluids Cosmology
* VIII.B.1 Singularity Structure of Autonomous Dynamical Systems Using the Dominant Balances Technique
* VIII.B.2 Dominant Balance Analysis of Multifluid Cosmology, Dynamical System Finite-time Singularities versus Physics
* VIII.B.3 Consistent Truncation I
* VIII.B.4 Consistent Truncation II
* VIII.B.5 Phase Space Analysis of Multifluid Cosmological Model
* VIII.B.6 Dynamical System Analysis of Exponential Quintessence DE Models
* VIII.B.7 Singularity Structure of the Dynamical System Describing Swampland DE Models
* VIII.B.8 Dynamical System of Interacting Multifluids in LQC
* VIII.B.9 Dominant Balance Analysis of the three-fluid Cosmological Dynamical System
* VIII.C The Choice of the DE EoS and its Physical Implications
* IX The Avoidance of Finite-time Future Singularities
* IX.1 Little Rip in Viscous Universe
* IX.2 Geometrical invariants to remove the finite-time future singularities
* IX.3 Scalar field models avoiding finite-time future singularities
* IX.4 Inhomogeneous equation of state
* IX.5 Future Singularities with the Account of Quantum Effects
* IX.6 FLRW equation including Trace Anomaly
* IX.7 Future Singularities with the Account of Thermal Effects
* IX.7.1 Type I Singularity with Thermal Effects: Transition to Type II Singularity
* IX.7.2 Type III Singularity with the Account of Thermal Effects: Transition to Type II Singularity
* IX.7.3 Thermal Radiation for Type II and Type IV Singularities
* IX.8 Combination of Quantum Effect and Thermal Effect
* IX.9 Quantum Effects May Change the Occurrence of a Finite-time Future Singularity
* X Summary and Conclusions
* XI Acknowledgments

## I Introduction

The dynamics of our Universe is one of the most intriguing mysteries in modern theoretical physics. The physics of the Universe at the early phase and the late phase is understood at a certain level. With the increasing sensitivity of astronomical observations, modern cosmology has witnessed remarkable success, having now become a precision science, but there are many fundamental questions that are still unanswered. The finite-time cosmic singularities constitute one of the mysteries in modern cosmology, first pointed out in [1] (also see [2]), motivated by the existence of some phantom fluid in the Universe sector [3]. The theory of inflation [4] is able to describe consistently the post-Planck classical early Universe and plays a crucial role in answering some theoretical shortcomings of the standard Big Bang model of cosmology. From the observational point of view, even though the cosmic microwave background radiation supports inflation [5; 6; 7; 8; 9], inflation also has some limitations, and thus, at this moment, it is very hard to conclude that inflation is the only theory of the early Universe. On the other hand, the discovery of the late-time accelerating phase of the Universe [10; 11] has made compelling the presence of some hypothetical fluid with negative pressure in the Universe sector. However, the source of such a hypothetical fluid is not clearly known yet, and this posed a difficult puzzle to the entire scientific community. The puzzle is basically how to model the late-time acceleration era in an observationally viable way. One has to keep in mind that describing nature in broken phases might not be an accurate way to describe it, so the correct description must also take into account, in a consistent manner, all the evolutionary eras of the Universe. The most basic approach to explain this late-time cosmic acceleration is to add a hypothetical fluid, dubbed Dark Energy (DE), with strong negative pressure, in the context of Einstein's gravitational theory described by General Relativity (GR).
The simplest DE candidate is Einstein's positive cosmological constant \(\Lambda\), from which, together with Cold Dark Matter (CDM), the so-called \(\Lambda\)-Cold Dark Matter (\(\Lambda\)CDM) model is constructed; the latter has been quite successful in describing a large span of observational data related to the Cosmic Microwave Background radiation and its polarization anisotropies. However, there are mainly two potential issues within this simplest \(\Lambda\)CDM cosmology, namely, the cosmological constant problem [12] and the cosmic coincidence problem (also known as the 'why now?' problem) [13]. Apart from that, the latest observational data [14] indicate that the DE equation of state (EoS) parameter is allowed to take values that cross the phantom divide line. Within the framework of simple GR, the only way to model such an observationally allowed phantom value is by using a phantom scalar field, which is not a theoretically elegant description for any sensible physics model. Apart from these issues, there is always the question of whether the DE can be dynamical or not. These shortcomings of the \(\Lambda\)CDM description point towards a modification of the matter sector of the Universe within the context of Einstein's GR to describe consistently the late-time era. Several modifications of the matter sector have been suggested in various forms in the literature, and they have been investigated from theoretical perspectives and further tested with the available observational data from diverse astronomical sources; see for instance [15; 16; 17; ...] and the references therein.
It has been reported by a number of investigators that cosmological models may develop a variety of finite-time singularities in the future, and the Big Rip singularity is not alone in the list of finite-time future singularities appearing in several cosmological models; other types of finite-time singularities may appear in cosmological models, such as the sudden singularity [350] (also see [351; 352; 353; 354; 355; 356; 357; 358; 359; 360; 361; 362; 363; 364; 365; 366; 367; 368; 369]), the Big Freeze singularity [370], the generalized sudden singularity [371], and the \(w\)-singularity [372; 373]. The appearance of such finite-time future cosmological singularities became an important part of modern cosmology and received massive attention in the scientific community. The first concrete classification of finite-time future cosmological singularities was done in Ref. [374], and for an important stream of articles on finite-time cosmological singularities, see Refs. [361; 363; 367; 368; 369; 375; 376; 377; 378; 379; 400; 401; 402; 403; 404; 405; 406; 407; 408; 409; 410; 411; 412; 413; 414; 415; 416] and the references therein. Note that the possibility of finite-time singularities in the context of inflation was also investigated and some models along this line were proposed [447; 448]. It is worth discussing what a physical spacetime singularity is. For smooth singularities, like the pressure and Type IV singularities, no geodesic incompleteness occurs, but for strong, crushing type singularities, geodesic incompleteness always occurs. All these singularities are classified according to the singular or non-singular behavior of physical quantities and higher-order curvature invariants defined on the three-dimensional spacelike hypersurface corresponding to the time instance \(t=t_{s}\), where \(t_{s}\) is the time at which the singularity occurs. The physical quantities that determine the type of a finite-time physical singularity, according to the classification of [374], are the pressure, the energy density and the scale factor. According to Penrose's definition of a physical spacetime singularity, if the strong and weak energy conditions are satisfied, a singularity in spacetime is accompanied by geodesic incompleteness, which also covers the case in which the spacetime is spatially flat and a single point is removed from it; the latter is geodesically incomplete, although the curvature is zero everywhere. The physical spacetime singularities on smooth spacetime manifolds are accompanied by higher-order curvature divergences.
Specifically, certain integrals of higher-order curvature invariants calculated along geodesics diverge when a finite-time singularity develops. One example of this sort is the following integral [352], \[\int_{0}^{t_{s}}dt^{\prime}\int_{0}^{t^{\prime}}dt^{\prime\prime}R^{i}_{0j0}(t ^{\prime\prime})\,, \tag{1}\] which depends on the Riemann tensor and strongly diverges if a crushing type singularity occurs at \(t=t_{s}\). This is the best way to perceive an actual crushing type finite-time singularity: the fact that geodesic incompleteness occurs. Geodesic incompleteness is always accompanied by singular behavior of higher-order curvature invariants but, for singular spacetimes, the curvature itself is ill defined in an open set of the spacetime surrounding the singularity. In fact, a strong finite-time singularity in spacetime may have exotic implications for the topology of spacetime, since it is speculated that topology change might actually occur [449]. Also, the presence of a singularity is accompanied by closed time-like curves and, in all cases, the notion of differentiability is lost because the smoothness of spacetime itself is lost. Moreover, a finite-time singularity may not actually be considered as an isolated point in spacetime but as a three-dimensional hypersurface defined by \(t=t_{s}\), on which physical observables and curvature invariants strongly diverge. This is also Penrose's view of the Big Bang singularity: he states that the Big Bang singularity cannot be viewed as an isolated singular spacetime point but as an initial singular hypersurface [450]. This perspective is a valid one because, if the Big Bang were an isolated point in spacetime, it would lead to an infinite number of overlapping particle horizons and, therefore, at the next time instance we would end up with an infinity of causally disconnected regions as the Universe evolves. With this review, we aim to present all the different theories that may lead to finite-time future singularities. We shall include many different theoretical frameworks that lead to cosmological singularities and we shall analyze in detail the reasons why these theories lead to cosmic singularities. Our purpose is two-fold: first, to provide to the literature a unique text that gathers all the theoretical frameworks that lead to cosmic singularities; second, building on the first, to further inspire the academic community to seek the fundamental physics behind cosmic singularities. The latter is the most important motivation, because singularities in physics always indicate an effective theory behind a classical physics description. Indeed, the singularities of classical electrodynamics are resolved if one considers the complete effective theory of quantum electrodynamics, so it is tempting to consider this analogy and seek a fundamental theory behind an apparent future cosmological singularity. However, the cosmological singularities in a GR or modified GR framework are not similar to the singularities in quantum systems. It is known that most GR singularities are well hidden behind cosmic horizons. So the question is whether such a horizon exists for future or past finite-time singularities. What are the implications of such horizons for classical physics and, to some extent, what does a crushing type singularity imply for the spacetime itself? Does a crushing singularity indicate a change in the topology or, equivalently, the shape of the Universe?
Is the very fabric of the Universe ripped by a future cosmic singularity? These are the fundamental questions, and the answers to them are not trivial. Several aspects of such exotic scenarios regarding the effects of horizons and a topology-changing Universe due to the occurrence of crushing type singularities were developed in Refs. [449; 451], which may be considered as starting points among other works. Nature provides us with similar pictures of singularities indicating changes in the topology of a physical system in solid state physics, for example in Hele-Shaw systems [452]. So with this review we aim to bring forth all the different theories that lead to cosmic singularities, to pose the question of whether there is a quantum or even classical resolution of cosmic singularities, and to further inspire work along this research line. The review is organized as follows: In Section II, we discuss various types of finite-time singularities in the past and future; in particular, Section II.1 deals with the hot Big Bang singularity in the past, and Section II.2 describes the variety of finite-time future singularities. In Section III, we discuss the emergence of finite-time singularities in various DE models; in particular, Section III.1 deals with the singularities in viscous cosmologies, and Section III.2 describes the appearance of finite-time singularities in the context of interacting dark matter-dark energy cosmologies. Then in Section IV, we discuss the finite-time singularities appearing in various modified gravity theories. In particular, we organize Section IV as follows: Section IV.1 describes scalar-tensor gravity; Section IV.2 describes Brans-Dicke gravity; Section IV.3 describes the \(k-\)essence model; Section IV.4 presents scalar-Einstein-Gauss-Bonnet gravity; Section IV.5 describes \(F(R)\) gravity (\(R\) is the Ricci scalar), which further includes Section IV.5.1 and Section IV.5.2 describing the correspondence of singularities in \(F(R)\) and unimodular \(F(R)\) gravity theories, respectively; Section IV.6 describes \(F(G)\) gravity (\(G\) is the Gauss-Bonnet invariant); Section IV.7 describes \(F(R,G)\) gravity; Section IV.8 describes \(F(T)\) gravity (\(T\) is the torsion scalar); Section IV.9 discusses non-local gravity; Section IV.10 describes non-minimal Maxwell-Einstein gravity; Section IV.11 describes semi-classical gravity. Then in Section V, we describe the finite-time singularities in braneworld gravity. Section VI describes the finite-time singularities in matter creation models. In Section VII we discuss the finite-time singularities in the context of Loop Quantum Cosmology (LQC). Further, in Section VIII, we make a correspondence between dynamical analysis and the finite-time singularities. Then in Section IX, we discuss the possibility of avoiding the finite-time singularities through some heuristic routes that include quantum, thermal and other non-standard effects. Finally, in Section X, we present a brief summary of the review, highlighting the important features that need to be considered in upcoming works. We also discuss the fundamental future perspectives of cosmological finite-time singularities that need to be addressed by future researchers. ## II Classification of finite-time singularities In this section, we describe the finite-time singularities appearing in the past and future evolution of the Universe.
In agreement with observational evidence on large scales, our Universe is almost homogeneous and isotropic, and such a geometrical configuration of the Universe is well described by the Friedmann-Lemaitre-Robertson-Walker (FLRW) line element: \[ds^{2}=-dt^{2}+a^{2}(t)\Bigg{[}\frac{dr^{2}}{1-kr^{2}}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\Bigg{]}\,, \tag{2}\] where \(a(t)\) is the expansion scale factor of the Universe, \((t,r,\theta,\phi)\) are the co-moving coordinates and \(k\) describes three different geometries for three distinct values, namely, spatially flat (\(k=0\)), closed (\(k=+1\)) and open (\(k=-1\)). We often use the case \(k=0\), for which Eq. (2) takes the form \[d\mathrm{s}^{2}=-dt^{2}+a^{2}(t)\Bigg{[}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\Bigg{]}\,. \tag{3}\] ### Big Bang: What is it? The Big Bang is the simplest past singularity, and appears, for example, when one studies a fluid with linear Equation of State (EoS) \(p=w\rho\), where the EoS parameter satisfies \(w>-1\) (non-phantom fluid), and \(p\) and \(\rho\) are the pressure and the energy density of the Universe, respectively. Writing \(w=\gamma-1\) (so that \(\gamma>0\)), and dealing with the flat FLRW line element, after combining the Friedmann and Raychaudhuri equations \[H^{2}=\frac{\rho\kappa^{2}}{3},\qquad\dot{H}=-\frac{(\rho+p)\kappa^{2}}{2}\,, \tag{4}\] where \(H\) is the Hubble rate of the FLRW Universe and \(\kappa^{2}=8\pi G_{N}\) (\(G_{N}\) is the Newton's gravitational constant) is the Einstein's gravitational constant, one gets \[\dot{H}=-\frac{3\gamma}{2}H^{2}\,, \tag{5}\] which can be integrated to obtain \[H(t)=\frac{H_{0}}{\frac{3}{2}\gamma H_{0}(t-t_{0})+1}=\frac{2}{3\gamma(t-t_{s})}\,, \tag{6}\] where \(t_{0}\) is the present cosmic time, \(H_{0}\) is the current value of the Hubble rate and \(t_{s}=t_{0}-\frac{2}{3\gamma H_{0}}\). Inserting this expression into the Friedmann equation, one gets the following value of the energy density: \[\rho(t)=\frac{4}{3\gamma^{2}(t-t_{s})^{2}\kappa^{2}}\,. \tag{7}\] One can see that both the Hubble rate and the energy density diverge at the finite past time \(t_{s}<t_{0}\). In addition, since the scale factor \(a(t)\) at time \(t_{s}\) is given by \(a(t_{s})=a_{0}\mathrm{e}^{-\int_{t_{s}}^{t_{0}}H(t)dt}\), where \(a_{0}\) is the current value of the scale factor, one easily gets \(a(t_{s})=0\). This is the well-known _Big Bang_ singularity. On the other hand, note that \(t_{s}\) gives us an indication of the age of the Universe. Effectively, for this model one gets \[t_{0}-t_{s}=\frac{2}{3\gamma H_{0}}\sim 14\quad\text{billion years}\,, \tag{8}\] where we have used that the current value of the Hubble rate is approximately \(H_{0}\sim 70\,\text{Km/s/Mpc}\); a quick numerical check of this estimate is sketched at the end of this subsection. Here, it is very important to realize that the Big Bang solution we have found is just a singular mathematical solution of the cosmic equations. In addition, it is well known that the General Theory of Relativity is a viable theory that has been proved to match the observational data at low energy densities, but we still do not know what the valid physical laws at very high energy densities are. Taking this into account, and the fact that GR is of no use for describing the physics at very small scales, e.g., the atomic scale, it is accepted that we need to quantize gravity in order to depict our Universe at very early times, at least up to the Planck scales. But, for the moment, nobody knows how to obtain a quantum theory of gravity; it might even be simply impossible.
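As an aside, the arithmetic behind the estimate in Eq. (8) is easy to reproduce. Below is a minimal Python sketch (our own illustration, not part of the original text): the unit conversions are standard, \(H_{0}=70\) Km/s/Mpc is taken from above, and the two values of \(\gamma\) are purely illustrative, with \(\gamma=2/3\) reproducing the quoted \(\sim 14\) billion years.

```python
# Minimal sketch of Eq. (8): age of the Universe t0 - ts = 2 / (3 * gamma * H0).
# Assumes H0 = 70 Km/s/Mpc; the gamma values are chosen purely for illustration.

KM_PER_MPC = 3.0857e19          # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16          # seconds in one billion years

H0 = 70.0 / KM_PER_MPC          # Hubble rate in 1/s

def age_gyr(gamma, H0=H0):
    """Age t0 - ts in Gyr for the fluid p = (gamma - 1) * rho, per Eq. (8)."""
    return 2.0 / (3.0 * gamma * H0) / SEC_PER_GYR

for gamma in (1.0, 2.0 / 3.0):  # matter (w = 0) and w = -1/3, respectively
    print(f"gamma = {gamma:.3f}: t0 - ts = {age_gyr(gamma):.1f} Gyr")
# gamma = 1.000: t0 - ts = 9.3 Gyr
# gamma = 0.667: t0 - ts = 14.0 Gyr  (the ~14 billion years quoted above)
```

For pure matter (\(w=0\), i.e. \(\gamma=1\)) the same formula gives roughly \(9.3\) Gyr; the quoted \(\sim 14\) billion years corresponds to a smaller effective \(\gamma\).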
Maybe gravity is a force of a very different nature compared to the electromagnetic and nuclear forces. However, as we will see in this review, there are attempts to introduce quantum effects to understand the past evolution of our Universe. ### Finite-time Future Singularities In this section, we outline the types of future singularities that appear in various cosmological models, which are classified as follows in [374]: 1. Type I (Big Rip) singularity: \(t\to t_{s}\), \(a\rightarrow\infty\), \(\rho\rightarrow\infty\) and \(|p|\rightarrow\infty\). 2. Type II (Sudden) singularity: \(t\to t_{s}\), \(a\to a_{s}\), \(\rho\rightarrow\rho_{s}\) and \(|p|\rightarrow\infty\). 3. Type III (Big Freeze) singularity: \(t\to t_{s}\), \(a\to a_{s}\), \(\rho\rightarrow\infty\) and \(|p|\rightarrow\infty\). 4. Type IV (Generalized Sudden) singularity: \(t\to t_{s}\), \(a\to a_{s}\), \(\rho\rightarrow\rho_{s}\), \(|p|\to p_{s}\) and some higher derivatives of \(H\) diverge. 5. Type V (\(w\)) singularity: \(w\rightarrow\infty\) but \(p\), \(\rho\) are finite. In the following, we shall give a brief illustration of each finite-time cosmological singularity mentioned above. #### ii.2.1 Type I Singularity This kind of future singularity was first introduced by the authors of Ref. [1]. For a linear EoS, when \(w<-1\rightarrow\gamma<0\), that is, when one deals with a phantom fluid [3], as we can see from Eqs. (6) and (7), one obtains a future singularity because in that case \(t_{s}>t_{0}\). In this situation the Hubble rate and the energy density also diverge at this moment, but now \(\lim_{t\to t_{s}}a(t)=\infty\). This is known as the Type I or _Big Rip_ singularity. #### ii.2.2 Type II Singularity In Ref. [350], Barrow proposed a new kind of finite-time future singularity appearing in an expanding FLRW Universe. The singularity may appear without violating the strong energy condition: \(\rho>0\) and \(3p+\rho>0\). This kind of singularity was named the _Sudden singularity_. To deal with this kind of singularity, we consider a nonlinear EoS of the form [380] \[p=-\rho-f(\rho), \tag{9}\] where \(f\) is an analytic function of the energy density \(\rho\). In that case the conservation equation \[\dot{\rho}+3H(\rho+p)=0\,, \tag{10}\] becomes \(\dot{\rho}=3Hf(\rho)\), and using the Friedmann equation, one gets \[\dot{\rho}=\sqrt{3}\kappa\rho^{1/2}f(\rho)\,. \tag{11}\] Choosing, as in Ref. [380], \[f(\rho)=\frac{A}{\sqrt{3}\kappa}\rho^{\nu+\frac{1}{2}}, \tag{12}\] where \(A\) and \(\nu\) are two free parameters, from Eq. (11) one obtains the first order differential equation \[\dot{\rho}=A\rho^{\nu+1}\,, \tag{13}\] whose solution is given by \[\rho(t)=\left\{\begin{array}{ccc}\bigg{(}\rho_{0}^{-\nu}-\nu A(t-t_{0})\bigg{)}^{-1/\nu},&\text{for}&\nu\neq 0,\\ \rho_{0}\text{e}^{A(t-t_{0})},&\text{for}&\nu=0,\end{array}\right. \tag{14}\] where \(\rho_{0}\) is the current value of the energy density. Firstly, we consider the case \(\nu<-1/2\). From the Friedmann equation the Hubble rate is given by \[H(t)=\frac{\kappa}{\sqrt{3}}\bigg{(}\rho_{0}^{-\nu}-\nu A(t-t_{0})\bigg{)}^{-1/2\nu}, \tag{15}\] which, introducing the time \(t_{s}=t_{0}+\frac{\rho_{0}^{-\nu}}{\nu A}\), can be written as \[H(t)=\frac{\kappa}{\sqrt{3}}\bigg{(}\nu A(t_{s}-t)\bigg{)}^{-1/2\nu}\,, \tag{16}\] and thus, the scale factor is given by \[\ln\bigg{(}\frac{a(t)}{a_{0}}\bigg{)}=-\frac{2\kappa}{3A(2\nu-1)}\bigg{(}\nu A(t_{s}-t)\bigg{)}^{(2\nu-1)/2\nu}+\frac{2\kappa}{3A(2\nu-1)}\bigg{(}\nu A(t_{s}-t_{0})\bigg{)}^{(2\nu-1)/2\nu}.
\tag{17}\] Then, for a non-phantom fluid, that is for \(A<0\), the effective EoS parameter \(w\equiv p/\rho\) is given by \(w=-1-\frac{A}{\sqrt{3}\kappa}\rho^{\nu-\frac{1}{2}}>-1\). Since in that case \(t_{s}>t_{0}\) and \(\rho(t_{s})\) vanishes, we find that the pressure \(p\) diverges at the instant \(t_{s}\), obtaining a Sudden singularity [350; 371]. #### ii.2.3 Type III Singularity We consider the same EoS (9) where \(f(\rho)\) is given in Eq. (12), but here we consider the case \(\nu>1/2\) and \(A>0\), that is, a phantom fluid, which implies that \(t_{s}>t_{0}\). From Eq. (17), we see that \(a(t)\to a_{s}\) (finite) when \(t\to t_{s}\), and from Eq. (15), we deduce that both the energy density and the pressure diverge at that instant, which leads to a Big Freeze singularity. #### ii.2.4 Type IV Singularity We continue with the same EoS (9) where \(f(\rho)\) is given in Eq. (12), but with \(-1/2<\nu<0\) and \(A<0\) (non-phantom fluid). In this situation, \(t_{s}>t_{0}\), and once again, the scale factor converges to \(a_{s}\) (finite) when \(t\to t_{s}\), but now the energy density and the pressure go to zero when the cosmic time approaches \(t_{s}\). In addition, looking at the Hubble rate obtained in Eq. (15), we easily conclude that when \(-1/(2\nu)\) is not a natural number, some higher order derivatives of the Hubble parameter diverge at \(t=t_{s}\), obtaining a Generalized Sudden singularity. #### ii.2.5 Type V (w) Singularity For \(t\to t_{s}\), \(a\to a_{s}\), \(\rho\to 0\), \(|p|\to 0\) or to a finite value, but the EoS parameter \(w\to\infty\). This kind of singularity is known as the Type V or \(w\) singularity [372]. This future singularity appears when the Hubble rate has the following analytic form near the singularity [453], \[H(t)=\sum_{n=1}^{\infty}H_{n}\left(\frac{t_{s}-t}{t_{n}}\right)^{n}, \tag{18}\] where all \(t_{n}\)'s (\(n=1,2,...\)) are positive numbers. In that case, using the EoS \[w(t)=-1-\frac{2\dot{H}}{3H^{2}}, \tag{19}\] near \(t_{s}\), one finds that \[w(t)\cong-1+\frac{2r}{3H_{r}t_{r}}\left(\frac{t_{s}-t}{t_{r}}\right)^{-r-1}, \tag{20}\] where \(H_{r}\) is the first non-vanishing coefficient of the series. We can see that the EoS parameter \(w\) diverges at time \(t_{s}\), but the energy density and the pressure, \[\rho=\frac{3H^{2}}{\kappa^{2}}\,,\quad p=-\frac{1}{\kappa^{2}}(3H^{2}+2\dot{H})\,, \tag{21}\] are finite at time \(t_{s}\). In fact, the energy density vanishes and the pressure is equal to \(\frac{2H_{1}}{\kappa^{2}t_{1}}\), so it vanishes only when \(H_{1}=0\). As an example, we continue with the model (9) where \(f(\rho)\) is given by Eq. (12), and we consider the simple case \(\nu=-\frac{1}{2}\) and \(A<0\). Then, we have \[w=-1-\frac{A}{\sqrt{3}\kappa\rho}\,. \tag{22}\] Now, from Eq. (16) the Hubble rate has the form \[H(t)=-\frac{A\kappa}{2\sqrt{3}}(t_{s}-t)\,, \tag{23}\] and the energy density becomes \[\rho(t)=\frac{A^{2}}{4}(t_{s}-t)^{2}\,, \tag{24}\] which means that at \(t=t_{s}\), the energy density vanishes and the pressure has the finite value \(-\frac{A}{\sqrt{3}\kappa}\), and, as a consequence, the EoS parameter \(w\) diverges at \(t=t_{s}\). To end this section, note that the case \(0<\nu<1/2\) and \(A<0\) (non-phantom fluid) corresponds to a Big Bang singularity, and for \(0<\nu<1/2\) and \(A>0\) (phantom fluid) one gets a Big Rip singularity. In addition, when \(\nu=0\) and \(A>0\) one obtains the so-called Little Rip [454; 455] (discussed in detail in Section IX.1), where \(w<-1\) and it asymptotically converges to \(-1\).
Effectively, in this case the energy density is given by \(\rho(t)=\rho_{0}\mathrm{e}^{A(t-t_{0})}\), which diverges when \(t\to\infty\). So, we find \[w=\frac{p}{\rho}=-1-\frac{A}{\sqrt{3\rho}\kappa}\to-1\,. \tag{25}\] Finally, for the remaining case \(\nu=1/2\) and \(A>0\), the Hubble rate is given by \[H=\frac{2\kappa}{\sqrt{3}A(t_{s}-t)}\,, \tag{26}\] and thus, the scale factor is given by \[\ln\left(\frac{a(t)}{a_{0}}\right)=-\frac{2\kappa}{\sqrt{3}A}\ln\left(\frac{t_{s}-t}{t_{s}-t_{0}}\right)\,, \tag{27}\] which shows that the scale factor diverges when \(t\to t_{s}\), and thus, a Big Rip singularity occurs. Summarizing, for the EoS \(p=-\rho-\frac{A}{\sqrt{3}\kappa}\rho^{\nu+\frac{1}{2}}\), we have the following classification for future singularities as a function of \(\nu\): * For \(\nu<-1/2\) and \(A<0\), one has a Type II singularity. * For \(\nu=-1/2\) and \(A>0\), one has a Little Rip. * For \(\nu=-1/2\) and \(A<0\), one has a Type V singularity. * For \(-1/2<\nu<0\) and \(A<0\), one has a Type IV singularity. * For \(\nu=0\) and \(A>0\), one has a Little Rip. * For \(0<\nu\leq 1/2\) and \(A>0\), one has a Big Rip singularity. * For \(\nu>1/2\) and \(A>0\), one has a Type III singularity. ### The Hamilton-Jacobi approach to cosmological singularities The Hamilton-Jacobi approach developed by Salopek and Bond [456] can also be used in the context of cosmological singularities. According to this approach [456], cosmological models can be written in terms of a scalar field \(\phi\) whose potential reproduces any given time dependence of the Hubble rate. Therefore, one can find the potential associated with each of the singular behaviors studied above. Starting with the Big Rip singularity, with a Hubble rate given by \(H(t)=-\frac{2}{3\gamma(t_{s}-t)}\) with \(\gamma<0\), one can find the potential of a phantom field leading to this dynamical behavior. Denoting by \(H^{\prime}\) the derivative with respect to the scalar field \(\phi\), we have \(\dot{H}=H^{\prime}\dot{\phi}\), and thus, the Raychaudhuri equation becomes \(H^{\prime}=\frac{\kappa^{2}}{2}\dot{\phi}\) (recall that we are dealing with a phantom field). Thus, the Friedmann equation will be \[H^{2}(\phi)=\frac{\kappa^{2}}{3}\left(-\frac{2}{\kappa^{4}}(H^{\prime}(\phi))^{2}+V(\phi)\right)\Longrightarrow V(\phi)=\frac{3}{\kappa^{2}}H^{2}(\phi)+\frac{2}{\kappa^{4}}(H^{\prime}(\phi))^{2}. \tag{28}\] Next, from the Raychaudhuri equation one can find the following relation between the field and the cosmic time, \(\phi=\frac{1}{\kappa}\int\sqrt{2\dot{H}}dt\), which for our Hubble rate leads to \[\phi(t)=-\frac{2}{\kappa\sqrt{-3\gamma}}\ln\left(\frac{t_{s}-t}{\kappa}\right), \tag{29}\] where as an initial condition we have chosen \(\phi(t_{s}-\kappa)=0\). Then, we have \(H(\phi)=-\frac{2}{3\kappa\gamma}e^{\sqrt{-3\gamma}\kappa\phi/2}\), and the corresponding potential is given by \[V(\phi)=\frac{2}{3\kappa^{4}\gamma^{2}}(2-\gamma)e^{\sqrt{-3\gamma}\kappa\phi}. \tag{30}\] Dealing with the singularities of Type II-IV, we consider the Hubble rate given in Eq. (16), which can be written as \[H(t)=B(t_{s}-t)^{-1/2\nu},\qquad\text{with}\qquad B=\frac{\kappa}{\sqrt{3}}(\nu A)^{-1/2\nu}. \tag{31}\] Here, it is important to realize that in order to realize this dynamics, since for the Type II and IV singularities one has \(w>-1\), we need a non-phantom field, while for the Type III singularity a phantom field is needed.
Therefore, following similar steps as in the Big Rip case, we find \[H_{\pm}(\phi)=C_{\pm}\phi^{\frac{2}{1-2\nu}}\qquad\text{with}\qquad C_{\pm}=B\left(\pm\frac{\kappa^{2}(1-2\nu)^{2}}{16B\nu}\right)^{\frac{1}{1-2\nu}}, \tag{32}\] where the \((+)\) sign refers to the Type III singularity, because in that case \(\nu>1/2\), and the \((-)\) sign refers to the singularities of Type II and IV, because in that case \(\nu<0\). Then, the corresponding potential is given by \[V_{\pm}(\phi)=\frac{C_{\pm}^{2}}{\kappa^{2}}\left(3\pm\frac{8}{\kappa^{2}(1-2\nu)^{2}\phi^{2}}\right)\phi^{\frac{4}{1-2\nu}}, \tag{33}\] the shape of which depends on the value of the parameter \(\nu\). Effectively, we have the following observations: 1. For the Type II singularity, where \(\nu<-1/2\), one has \(V(-\infty)=+\infty\) and \(V(0)=-\infty\). 2. For the Type III singularity, where \(\nu>1/2\), one has \(V(-\infty)=0\) and \(V(0)=+\infty\). 3. For the Type IV singularity, where \(-1/2<\nu<0\), one has \(V(-\infty)=+\infty\) and \(V(0)=0\). Finally, to discuss the Type V singularity, one may choose the Hubble rate given in Eq. (23), which corresponds to the case \(\nu=-1/2\) and \(A<0\), and thus a non-phantom fluid. As we have already explained, to depict this behavior we need a non-phantom field whose potential, for this Hubble rate, is given by \[V(\phi)=-\frac{\sqrt{3}A\kappa}{4}\left(\phi^{2}-\frac{2}{3\kappa^{2}}\right). \tag{34}\] ## III Finite-time singularities in DE models Within the context of GR, a hypothetical cosmic fluid is added to explain the accelerating phase of the Universe, known as DE. The nature and dynamics of DE are not known. Hence, over the last several years, a plethora of DE models have been introduced in the literature [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30] (also see the review articles in this direction [116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130] and the references therein). ### Viscous Cosmologies A bulk viscous fluid is characterized by its energy density \(\rho\) and pressure \(p\), where the pressure has two sub-components: the conventional component \(p_{\rm conv}=w\rho\) and the bulk viscosity component \(p_{\rm vis}=-\xi u^{\mu}_{;\mu}\), where \(u^{\mu}_{;\mu}\) is the fluid expansion scalar and \(\xi\) is the bulk viscous coefficient, which could be either constant or dynamical. Thus, the pressure of a bulk viscous fluid is represented as \[p=p_{\rm conv}+p_{\rm vis}=w\rho-\xi u^{\mu}_{;\mu}. \tag{35}\] In the flat FLRW Universe, one has \(u^{\mu}_{;\mu}=3H\); thus, the pressure of the bulk viscous fluid given in Eq. (35) reduces to \[p=w\rho-3H\xi\,, \tag{36}\] where, if \(\xi\) is dynamical, it could take one of the forms \(\xi\equiv\xi(t)\), \(\xi\equiv\xi(\rho)\), \(\xi\equiv\xi(a)\), \(\xi\equiv\xi(H)\), \(\xi\equiv\xi\left(H,\dot{H},...\right)\), or it can take a more general form like \(\xi=\xi\left(t,a,\rho,H,\dot{H},\ddot{H},...\right)\). The microscopic dynamics may determine the dynamics of \(\xi\); because we do not know the microscopic dynamics, we consider arbitrary cases. We start once again with the simple choice of the EoS, \(p=-\rho-f(\rho)\), given in Eq. (9). One can realize that Eq. (9) is a very special case of Eq. (36).
For example, if \(\xi\) does not depend on time, i.e., \(\xi=\bar{\xi}\) (constant), then from the Friedmann equation (4) one can express the Hubble rate as \(H=\kappa\sqrt{\rho/3}\) (taking an expanding Universe, i.e., \(H>0\)) and consequently, Eq. (36) turns out to be \(p=w\rho-\kappa\bar{\xi}\sqrt{3\rho}=-\rho-(\kappa\bar{\xi}\sqrt{3\rho}-\rho-w\rho)\), in which \(f(\rho)=\kappa\bar{\xi}\sqrt{3\rho}-\rho-w\rho\). On the other hand, if \(\xi\) is a function of \(H\), then similarly one can find a suitable \(f(\rho)\). Let us introduce a more generalized EoS of the bulk viscous fluid as follows [380]: \[p=-\rho-f(\rho)-G(H), \tag{37}\] where \(G(H)\) is some arbitrary function of the Hubble parameter \(H\). However, \(G\) can be any arbitrary function of \(H\), its derivatives of any order, and the scale factor of the Universe [457, 458]. We start with the simple case where \(G\) is a function of the Hubble parameter only, which means that we can focus on Eq. (9). Let us note that for a spatially flat Universe, the EoS given in Eqs. (9) and (37) are equivalent, since using the Friedmann equation one can express the Hubble parameter in terms of the energy density of the Universe, while for a non-flat case, Eq. (37) is not equivalent to Eq. (9). Therefore, we also keep the EoS shown in Eq. (37) in our discussion. Now plugging Eq. (37) into the conservation equation, i.e. Eq. (10), one gets \[\dot{\rho}-3H\left(f(\rho)+G(H)\right)=0\,. \tag{38}\] And using the Friedmann equation, i.e. Eq. (4), in an expanding Universe, one derives that \[\dot{\rho}=F(\rho)\equiv 3\kappa\sqrt{\frac{\rho}{3}}\left[f(\rho)+G\left(\kappa\sqrt{\rho/3}\,\right)\right]\,. \tag{39}\] We start with a simple EoS as follows, \[p=w_{1}\rho+w_{2}\frac{3}{\kappa^{2}}H^{2}\,, \tag{40}\] where \(w_{1}\) and \(w_{2}\) are constants. Now, using the Friedmann equation, the above EoS (40) can be rewritten as \[p=\left(w_{1}+w_{2}\right)\rho\,, \tag{41}\] which takes the form \(p=w_{\rm eff}\rho\), in which \(w_{\rm eff}\) has the following expression \[w_{\rm eff}=w_{1}+w_{2}\,. \tag{42}\] Now from Eq. (42), as long as \(w_{\rm eff}>-1\), even if \(w_{1}<-1\), no Big Rip singularity occurs. On the other hand, even if \(w_{1}>-1\), one may obtain \(w_{\rm eff}<-1\) for a sufficiently negative value of \(w_{2}\), and thus the Big Rip singularity may occur. A more generalized EoS can be considered where \(f(\rho)\) is modified as \[f(\rho)\to f(\rho)+G(H). \tag{43}\] For example, choosing \(f(\rho)=A\rho^{\alpha}\) and \(G(H)=BH^{2\beta}\), and using the Friedmann equation (4), \(f(\rho)\) gets modified as \[f_{\rm eff}(\rho)=f(\rho)+G(H)=A\rho^{\alpha}+B\left(\frac{\kappa^{2}}{3}\right)^{\beta}\rho^{\beta}. \tag{44}\] For \(\beta>\alpha\), when \(\rho\) is large, the second term in the right hand side (r.h.s.) of Eq. (44) becomes dominant, leading to \[f_{\rm eff}(\rho)\to B\left(\frac{\kappa^{2}}{3}\right)^{\beta}\rho^{\beta}\,. \tag{45}\] On the other hand, if \(\beta<\alpha\), the second term in the r.h.s. of Eq. (44) becomes dominant for \(\rho\to 0\), and we obtain again (45). Following Ref. [380], we summarize the appearance of singularities for this EoS considering various ranges of \(\alpha\) and \(\beta\): * When \(\alpha>1\): for most of the values of \(\beta\), a Type III singularity occurs. Additionally, when \(0<\beta<1/2\), a Type IV singularity appears, and when \(\beta<0\), a Type II singularity appears.
* When \(\alpha=1\): if \(\beta>1\), then a Type III singularity appears (for \(\beta=1\), we get the EoS as in Eq. (40)); if \(\beta<1\) and \(A>0\), a Big Rip or Type I singularity occurs. In addition to the Type I singularity, a Type IV singularity appears for \(0<\beta<1/2\) and a Type II singularity appears for \(\beta<1\). * When \(1/2<\alpha<1\): a Type III singularity appears for \(\beta>1\); a Type I singularity appears for \(1/2\leq\beta<1\) (even for \(\beta=1/2\)) or \(\beta=1\) and \(B>0\); a Type IV singularity appears for \(0<\beta<1/2\); and a Type II singularity appears for \(\beta<0\). * When \(\alpha=1/2\): a Type III singularity appears for \(\beta>1\); a Type I singularity appears for \(1/2<\beta<1\) or \(\beta=1\) and \(B>0\); a Type IV singularity appears for \(0<\beta<1/2\); a Type II singularity occurs for \(\beta<0\). We note that for \(\beta=1/2\) or \(\beta=0\), no singularity appears. * For \(0<\alpha<1/2\): a Type IV singularity appears for \(0<\beta<1/2\); a Type II singularity appears for \(\beta<0\). In addition to the Type IV singularity, a Type III singularity occurs for \(\beta>1\) and a Type I singularity occurs for \(1/2\leq\beta<1\) or \(\beta=1\) and \(B>0\). * When \(\alpha<0\): a Type II singularity occurs. In addition to the Type II singularity, a Type III singularity appears for \(\beta>1\), and a Type I singularity for \(1/2\leq\beta<1\) or \(\beta=1\) and \(B>0\). We also propose an implicit inhomogeneous EoS by generalizing the expression \(p=-\rho-f(\rho)\) as \[\mathcal{F}(p,\rho,H)=0\,. \tag{46}\] In order to understand the cosmological consequences of the implicit inhomogeneous EoS, we present the following example: \[\left(p+\rho\right)^{2}-C_{\rm c}\rho^{2}\left(1-\frac{H_{\rm c}}{H}\right)=0\,, \tag{47}\] with \(C_{\rm c}\) (dimensionless) and \(H_{\rm c}\) positive constants. Plugging Eq. (47) into the Raychaudhuri equation \(\dot{H}=-\frac{\kappa^{2}(p+\rho)}{2}\) and using the Friedmann equation \(H^{2}=\frac{\kappa^{2}\rho}{3}\) in Eq. (4), we acquire \[\dot{H}^{2}=\frac{9}{4}C_{\rm c}H^{4}\left(1-\frac{H_{\rm c}}{H}\right)\,. \tag{48}\] We can integrate Eq. (48) as \[H=\frac{16}{9C_{\rm c}^{2}H_{\rm c}\left(t-t_{-}\right)\left(t_{+}-t\right)}\,, \tag{49}\] with \[t_{\pm}\equiv t_{\rm c}\pm\frac{4}{3C_{\rm c}H_{\rm c}}\,, \tag{50}\] where \(t_{\rm c}\) is a constant of integration. By substituting Eq. (49) into \(p=\rho\left(-1-\frac{2\dot{H}}{3H^{2}}\right)\) and \(\rho=\frac{3H^{2}}{\kappa^{2}}\), we find \[p=-\rho\left[1+\frac{3C_{\rm c}^{2}}{4H_{\rm c}}(t-t_{\rm c})\right]\quad{\rm and}\quad\rho=\frac{256}{27C_{\rm c}^{4}H_{\rm c}^{2}\kappa^{2}\left(t-t_{-}\right)^{2}\left(t_{+}-t\right)^{2}}\,. \tag{51}\] Thus, we have \[w=\frac{p}{\rho}=-1-\frac{3C_{\rm c}^{2}}{4H_{\rm c}}\left(t-t_{\rm c}\right)\,. \tag{52}\] From Eq. (49), we see that \(H>0\) if \(t_{-}<t<t_{+}\). At \(t=t_{\rm c}=\left(t_{-}+t_{+}\right)/2\), \(H\) attains its minimum \(H=H_{\rm c}\). On the other hand, in the limit \(t\to t_{\pm}\), \(H\to\infty\). This may be interpreted as follows: at \(t=t_{-}\), there exists a Big Bang singularity, whereas at \(t=t_{+}\), a Big Rip singularity appears. It is clearly seen from Eq. (52) that when \(t_{-}<t<t_{\rm c}\), \(w>-1\) (the non-phantom (quintessence) phase), and when \(t_{\rm c}<t<t_{+}\), \(w<-1\) (the phantom phase). At \(t=t_{\rm c}\), the crossing of the phantom divide line from the non-phantom phase to the phantom phase occurs. This is realized by an inhomogeneous term in the EoS.
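The dynamics of Eqs. (49)-(52) is easy to evaluate numerically. The following minimal Python sketch (our own illustration, with arbitrary values \(C_{\rm c}=1\), \(H_{\rm c}=1\), \(t_{\rm c}=0\)) displays the divergence of \(H\) near \(t_{\pm}\) and the phantom divide crossing at \(t=t_{\rm c}\).

```python
# Minimal sketch of Eqs. (49)-(52); C_c, H_c, t_c are illustrative values only.
C_c, H_c, t_c = 1.0, 1.0, 0.0
t_minus = t_c - 4.0 / (3.0 * C_c * H_c)   # Big Bang singularity at t_-, Eq. (50)
t_plus  = t_c + 4.0 / (3.0 * C_c * H_c)   # Big Rip singularity at t_+, Eq. (50)

def hubble(t):
    """Hubble rate of Eq. (49); diverges as t -> t_minus or t -> t_plus."""
    return 16.0 / (9.0 * C_c**2 * H_c * (t - t_minus) * (t_plus - t))

def w_eos(t):
    """Effective EoS parameter of Eq. (52); crosses w = -1 at t = t_c."""
    return -1.0 - 3.0 * C_c**2 / (4.0 * H_c) * (t - t_c)

for t in (t_minus + 1e-3, t_c, t_plus - 1e-3):
    print(f"t = {t:+.4f}: H = {hubble(t):10.2f}, w = {w_eos(t):+.4f}")
# H blows up near t_minus and t_plus, with its minimum H = H_c at t = t_c,
# while w crosses the phantom divide w = -1 exactly at t = t_c.
```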
### Interacting DE Models Cosmological models allowing a non-gravitational interaction between DM and DE are widely known as Interacting DE (IDE) or Coupled DE (CDE) models. The theory of IDE has received massive attention due to its ability to explain several cosmological puzzles and to offer some interesting results [42; 459; 46; 47; 48; 49; 50; 51] (also see two review articles in this direction, e.g. Refs. [552; 553]). In IDE theory, the choice of the interaction function, or the interaction rate, is usually made on phenomenological grounds. Even though there are some attempts to derive the interaction function from fundamental principles [554; 555; 556; 557; 558; 559; 560], this sector still needs considerable progress. Therefore, usually, in most of the IDE scenarios, the interaction function is chosen by hand, and hence the appearance of finite-time singularities in those models is not unnatural. In this section, we shall investigate whether the interacting models allow finite-time singularities in the future. In a homogeneous and isotropic Universe characterized by the FLRW line element, the conservation equations of the DM and DE sectors in the presence of an interaction between them take the forms \[\left\{\begin{array}{rl}\dot{\rho}_{\rm DM}+3H\rho_{\rm DM}=&-Q(t),\\ \dot{\rho}_{\rm DE}+3H(1+w_{\rm DE})\rho_{\rm DE}=&Q(t),\end{array}\right. \tag{53}\] where \(\rho_{\rm DM}\), \(\rho_{\rm DE}\) are the energy densities of DM and DE, respectively, \(w_{\rm DE}=p_{\rm DE}/\rho_{\rm DE}\) (\(p_{\rm DE}\) is the pressure of DE) refers to the EoS parameter of DE (DM has been assumed to be pressure-less), and \(Q(t)\) is the energy exchange rate between the dark sectors. The sign of \(Q(t)\) determines the direction of energy flow between the dark components: for \(Q>0\), energy flows from DM to DE, and for \(Q(t)<0\) the direction of flow is reversed, i.e. energy flows from DE to DM. The Friedmann and Raychaudhuri equations (4) in the flat FLRW Universe (3) are \[H^{2}=\frac{\kappa^{2}}{3}(\rho_{\rm DM}+\rho_{\rm DE}),\quad{\rm and}\quad\dot{H}=-\frac{\kappa^{2}}{2}(\rho_{\rm DM}+\rho_{\rm DE}+p_{\rm DE})\,, \tag{54}\] from which one can rewrite the energy densities of DE and DM as \[\rho_{\rm DE} =-\frac{1}{\kappa^{2}w_{\rm DE}}(3H^{2}+2\dot{H}), \tag{55}\] \[\rho_{\rm DM} =\frac{1}{\kappa^{2}w_{\rm DE}}\left[3(1+w_{\rm DE})H^{2}+2\dot{H}\right]. \tag{56}\] Now, if one combines this last expression (i.e. Eq. (56)) with the conservation equation for DM given in Eq. (53), one gets the following expression for \(Q(t)\)[429] \[Q(t)=-\frac{1}{\kappa^{2}w_{\rm DE}}\left[9(1+w_{\rm DE})H^{3}+6(2+w_{\rm DE})H\dot{H}+2\ddot{H}-\frac{\dot{w}_{\rm DE}}{w_{\rm DE}}(3H^{2}+2\dot{H})\right], \tag{57}\] which means that, given a dynamics characterized by \(H(t)\), one recovers the interaction rate \(Q(t)\). Moreover, any singularity in \(H\), \(\dot{H}\) or \(\ddot{H}\) determines a singularity in \(Q(t)\) and vice versa. In the following we shall discuss the possibility of singularities for various choices of the DE EoS parameter. #### iii.2.1 IDE with constant EoS in DE For the choice of a constant EoS in DE, the contribution of \(\dot{w}_{\rm DE}\) in Eq. (57) is removed. We remark that for some specific dynamics, the interaction models may encounter finite-time future singularities.
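Equation (57) can be checked with a few lines of computer algebra. The following sympy sketch (our own check, not part of the original derivation) recovers \(Q(t)\) directly from the DM conservation equation (53) and the expression (56) for \(\rho_{\rm DM}\), for the constant-\(w_{\rm DE}\) case in which the \(\dot{w}_{\rm DE}\) term drops.

```python
import sympy as sp

t = sp.symbols('t')
kappa, w = sp.symbols('kappa w_DE', nonzero=True)
H = sp.Function('H')(t)

# Eq. (56) with constant w_DE:
rho_DM = (3 * (1 + w) * H**2 + 2 * H.diff(t)) / (kappa**2 * w)

# DM conservation, Eq. (53): rho_DM' + 3*H*rho_DM = -Q
Q = -(rho_DM.diff(t) + 3 * H * rho_DM)

# Eq. (57) with w_DE constant (the w_DE' term removed):
Q_expected = -(9 * (1 + w) * H**3
               + 6 * (2 + w) * H * H.diff(t)
               + 2 * H.diff(t, 2)) / (kappa**2 * w)

print(sp.simplify(Q - Q_expected))   # prints 0, confirming Eq. (57)
```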
We consider the following dynamics, where the Hubble rate takes the form [429] \[H(t)=H_{s}+\frac{2}{3t}+\frac{h_{s}}{(t_{s}-t)^{\beta}}\,, \tag{58}\] where \(H_{s}\gg 1/t_{s}\) and \(h_{s}\) are some positive constants and \(\beta\neq 0\) is a dimensionless parameter which determines the type of singularity at time \(t_{s}\). Let us note that the above parametrization depicts a matter dominated early Universe, while at late times, close to \(t_{s}\), the Universe is dominated by a cosmological constant when \(\beta<0\), with a deviation from the \(\Lambda\)CDM model controlled by the last term in (58). On the contrary, it has a singularity at \(t_{s}\) for positive values of \(\beta\). Inserting Eq. (58) into Eq. (57) and retaining the leading term near \(t_{s}\), for \(\beta>1\) one gets \(Q(t)\sim(t_{s}-t)^{-3\beta}\), and for \(-2<\beta<1\) the leading term is \(Q(t)\sim(t_{s}-t)^{-\beta-2}\). Then, the following cases appear: * When \(\beta>1\): close to \(t_{s}\), one has \[H(t)\sim h_{s}(t_{s}-t)^{-\beta}\to\infty,\qquad\text{and}\qquad a(t)\sim\exp\left(-\frac{h_{s}}{1-\beta}(t_{s}-t)^{1-\beta}\right)\to\infty\,,\] (59) which means that a Big Rip singularity appears. * When \(\beta=1\): near \(t_{s}\) one has \[H(t)\cong\frac{h_{s}}{t_{s}-t}\to\infty,\qquad\text{and}\qquad a(t)\sim(t_{s}-t)^{-h_{s}}\to\infty\,,\] (60) which also indicates a Big Rip singularity. * When \(0<\beta<1\): at \(t\sim t_{s}\) one has \[H(t)\sim h_{s}(t_{s}-t)^{-\beta}\to\infty,\qquad\text{and}\qquad a(t)\sim\exp\left(-\frac{h_{s}}{1-\beta}(t_{s}-t)^{1-\beta}\right)\to 1\,,\] (61) which indicates a Type III singularity. * When \(-1<\beta<0\): at \(t\sim t_{s}\) one has \[H(t)\sim H_{s}\,\qquad\text{and}\qquad a(t)\cong\mathrm{e}^{H_{s}(t-t_{s})}\sim 1\,,\] (62) and thus both quantities are finite, but \(\dot{H}\cong\frac{h_{s}}{(t_{s}-t)^{1+\beta}}\to\infty\), which means that the pressure diverges, and hence, a Type II singularity is obtained. * When \(-2<\beta<-1\): at \(t\sim t_{s}\) the scale factor, the Hubble rate and its derivative are finite, but the second derivative of the Hubble rate diverges, and thus, a Type IV (_Generalized Sudden_) singularity appears. #### iii.2.2 IDE with dynamical EoS in DE In this Section we follow the approach of [429], and we consider the following EoS parameter of DE \[w_{\text{DE}}=w_{s}+\frac{k_{s}}{(t_{s}-t)^{\beta}}\,, \tag{63}\] where \(w_{s}\), \(k_{s}>0\) and \(\beta\) are free parameters. Note that when \(-1<\beta<0\), the quantity \(\dot{w}_{\text{DE}}/w_{\text{DE}}^{2}\) in (57) diverges when \(t=t_{s}\), and thus, \(Q\) diverges. Here, we analyze different situations, starting with positive values of \(\beta\). Then, \(\frac{k_{s}}{(t_{s}-t)^{\beta}}\cong-\frac{2\dot{H}}{3H^{2}}\) and the dynamics is given by \[H(t)\cong\frac{2(1-\beta)}{3k_{s}}(t_{s}-t)^{\beta-1}\,, \tag{64}\] and thus, \[\ln\left(\frac{a(t)}{a_{s}}\right)\cong\frac{2(\beta-1)}{3k_{s}\beta}(t_{s}-t)^{\beta},\quad\text{and}\quad\dot{H}(t)\cong\frac{2(1-\beta)^{2}}{3k_{s}}(t_{s}-t)^{\beta-2}\,. \tag{65}\] From this result, we have the following observations: * For \(\beta\geq 2\), there is a Type V singularity at \(t=t_{s}\). * For \(0<\beta<2\), there is a Type II singularity at \(t=t_{s}\). On the other hand, for negative values of \(\beta\), \(w_{\text{DE}}(t)\to w_{s}\) as \(t\to t_{s}\). Thus, the Hubble rate and its derivatives do not diverge at \(t=t_{s}\).
The only divergence appears in the energy exchange rate \(Q(t)\), which diverges for \(-1<\beta<0\). #### Some Specific Interaction Models According to the existing records, a variety of interaction models have been proposed in the literature; see Refs. [42; 461; 463; 464; 465; 466; 467; 468; 471; 473; 476; 478; 479; 480; 484; 485; 486; 488; 489] and the references therein.
_(ii) Interaction model II_ The second interaction rate between DE and DM has the following form [429] \[Q=\Gamma\rho_{\rm DE}^{n}\,, \tag{72}\] where \(n\) is a dimensionless parameter and \(\Gamma\) is the coupling parameter, having dimension equal to the dimension of the Hubble parameter. Again using the expression of \(\rho_{\rm DE}\) from Eq. (55), the interaction rate \(Q\) of Eq. (72) becomes \[Q=\Gamma\left(\frac{3H^{2}+2\dot{H}}{\widetilde{\kappa}^{2}}\right)^{n}. \tag{73}\] Now, plugging the expression of the interaction function \(Q\) of Eq. (73) into Eq. (57), one gets \[2\ddot{H}+6(2+w_{\rm DE})H\dot{H}+9(1+w_{\rm DE})H^{3}-\frac{\dot{w}_{\rm DE}}{w_{\rm DE}}\left(3H^{2}+2\dot{H}\right)+\widetilde{\kappa}^{2(1-n)}\Gamma\left(3H^{2}+2\dot{H}\right)^{n}=0. \tag{74}\] For simplicity, we consider a constant equation of state in DE, for which \(\dot{w}_{\rm DE}=0\) in Eq. (74), which leads to \[2\ddot{H}+6(2+w_{\rm DE})H\dot{H}+9(1+w_{\rm DE})H^{3}+\widetilde{\kappa}^{2(1-n)}\Gamma\left(3H^{2}+2\dot{H}\right)^{n}=0. \tag{75}\] Now, looking carefully at (75), one can see that in addition to the Minkowski critical point \(H=0\), it also admits de Sitter critical points [429] \[H_{\rm dS}=\left[-\frac{9(1+w_{\rm DE})}{3^{n}\widetilde{\kappa}^{2(1-n)}\Gamma}\right]^{\frac{1}{2n-3}}\,, \tag{76}\] which exist for \(w_{\rm DE}\neq-1\). Within this interaction model, under certain restrictions, one can obtain other types of future singularities apart from the Big Rip singularity. We look for solutions where \(|\dot{H}|\gg H^{2}\), and additionally we focus on the solutions driven by the interaction function, which requires \(|H|\ll|\Gamma(\dot{H}/\widetilde{\kappa}^{2})^{n-1}|\). Under these conditions, Eq. (75) reduces to [429] \[\ddot{H}+\Gamma\left(\frac{\widetilde{\kappa}^{2}}{2}\right)^{1-n}\dot{H}^{n}\simeq 0. \tag{77}\] Note that Eq. (77) remains invariant under a constant shift of \(H\), which is important to keep \(H\) finite. Moreover, Eq. (77) can be solved, leading to \[H\simeq C(t_{s}-t)^{p}+H_{s}\,, \tag{78}\] where \(t_{s}\) and \(H_{s}\) are integration constants and \[C=\frac{(1-n)^{p}}{n-2}\left[\Gamma\left(\frac{\widetilde{\kappa}^{2}}{2}\right)^{1-n}\right]^{p-1}\qquad p=\frac{n-2}{n-1}. \tag{79}\] Now, if we take \(n=3\), then from Eq. (78) one finds that \(H\simeq C\sqrt{t_{s}-t}+H_{s}\), which gives \(H(t_{s})=H_{s}\), but for \(t\to t_{s}\), \(\dot{H}\rightarrow\infty\), i.e. a Type II or Sudden singularity is realized. This behavior remains general for values of \(n\) for which \(0<p<1\). Additionally, this interaction model could lead to a Big Freeze or Type III singularity for \(p=-1/2\), which is obtained for \(n=5/3\). In this case, \(H\simeq C(t_{s}-t)^{-1/2}\), and hence, \(a\simeq a_{s}e^{-2C\sqrt{t_{s}-t}}\). Thus, for \(t\to t_{s}\), \(H\) and its derivatives diverge but \(a\) remains finite, which shows the occurrence of a Type III singularity. In fact, for those values of \(n\) which restrict \(p\) to the interval \(-1<p<0\), one always has a Type III singularity. _(iii) Interaction model III_ The interaction function has the following form [561; 562; 563; 564; 565; 566] \[Q=3H\left(\mu\rho_{\rm DM}+\nu\rho_{\rm DE}\right), \tag{80}\] where \(\mu\) and \(\nu\) are the coupling parameters of the interaction function.
For a perfect fluid with stress-tensor \(T^{\alpha}_{\beta,A}=p_{A}\delta^{\alpha}_{\beta}+(\rho_{A}+p_{A})u^{\alpha}_{A}u_{\beta,A}\), where \(A={\rm DM},{\rm DE}\) and \(u^{\alpha}_{A}=\frac{dx^{\alpha}}{\sqrt{-ds^{2}}}\) is the four-velocity of the fluid, we consider the following covariant interacting system \[\nabla_{\alpha}T^{\alpha}_{\ \beta,A}=Q_{\beta,A}\,, \tag{81}\] where \[Q_{\beta,{\rm DM}}=-Q_{\beta,{\rm DE}}=\nabla_{\eta}u^{\eta}_{\rm DM}\left(\bar{\mu}T^{\alpha}_{\ \alpha,{\rm DM}}u_{\beta,{\rm DM}}+\bar{\nu}T^{\alpha}_{\ \alpha,{\rm DE}}u_{\beta,{\rm DE}}\right)\,. \tag{82}\] At the background level we will have \(\nabla_{\eta}u^{\eta}_{\rm DM}=3H\), \(T^{\alpha}_{\ \alpha,A}=3p_{A}-\rho_{A}\) and \(\nabla_{\alpha}T^{\alpha}_{\ 0,A}=-\dot{\rho}_{A}-3H(\rho_{A}+p_{A})\); then, if we define \(\bar{\mu}=\mu\) and \(\bar{\nu}=\frac{\nu}{1-3w_{\rm DE}}\), we obtain the dynamical system (53) with \(Q=Q_{0,{\rm DM}}=3H(\mu\rho_{\rm DM}+\nu\rho_{\rm DE})\). Now, introducing a new variable \(N=\ln\left(\frac{a}{a_{0}}\right)\), where \(a_{0}\) denotes the present value of the scale factor, and taking into account that \(\dot{N}=H\), the system (53) leads to the following linear first order autonomous dynamical system: \[\left\{\begin{array}{rcl}\rho^{\prime}_{\rm DM}+3\rho_{\rm DM}&=&-3(\mu\rho_{\rm DM}+\nu\rho_{\rm DE}),\\ \rho^{\prime}_{\rm DE}+3(1+w_{\rm DE})\rho_{\rm DE}&=&3(\mu\rho_{\rm DM}+\nu\rho_{\rm DE})\,,\end{array}\right. \tag{83}\] where now the prime denotes the derivative with respect to the variable \(N\). The autonomous system (83) can be expressed in the matrix form \(X^{\prime}=BX\), where \[X=\left(\begin{array}{c}\rho_{\rm DM}\\ \rho_{\rm DE}\end{array}\right)\,, \tag{84}\] and \[B=\left(\begin{array}{cc}-3(1+\mu)&-3\nu\\ +3\mu&-3(1+w_{\rm DE}-\nu)\end{array}\right). \tag{85}\] Now, in order to get singular behaviors, one of the eigenvalues of the matrix \(B\) should have a positive real part. In terms of the trace and determinant of \(B\), this means that there are two different situations: 1. \({\rm Tr}B>0\) and \({\rm Det}B>0\). 2. \({\rm Det}B<0\). In the former case both eigenvalues are positive, and the origin of coordinates is a repeller. In order to ensure that the energy densities of both fluids are always positive, one has to impose that the origin is not a focus, because if it were, then at early times the orbits would oscillate around the origin, leading to negative energy densities. To prevent this behavior, one has to impose that the discriminant \(\Delta=({\rm Tr}B)^{2}-4{\rm Det}B\) is positive, which means that the origin is a node. In addition, for a node, to ensure that the energy densities remain positive, both orbits following the respective eigenvectors of the matrix \(B\) (\(X_{+}(N)={\rm e}^{\lambda_{+}N}V_{+}\) and \(X_{-}(N)={\rm e}^{\lambda_{-}N}V_{-}\), with \(\lambda_{+}\) and \(\lambda_{-}\) the eigenvalues of the matrix \(B\) and \(V_{+}=(V_{1,+},V_{2,+})\) and \(V_{-}=(V_{1,-},V_{2,-})\) their corresponding eigenvectors) must belong to the first quadrant, that is, the conditions \(V_{1,\pm}\geq 0\) and \(V_{2,\pm}\geq 0\) must be satisfied. As a consequence, all orbits with an initial condition in the first quadrant, i.e., with initially positive values of \(\rho_{\rm DM}\) and \(\rho_{\rm DE}\), will remain in the first quadrant, which ensures that the energy densities are always positive.
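The eigen-structure of the linear system just described can be explored numerically. Below is a minimal numpy sketch (our own illustration; the parameter values of \(\mu\), \(\nu\) and \(w_{\rm DE}\) are arbitrary, chosen inside the window derived below in Eq. (93)):

```python
import numpy as np

# Minimal numpy sketch of the autonomous system X' = B X, Eqs. (83)-(85);
# mu, nu and w_DE are illustrative values satisfying the repeller conditions.
mu, nu, w_DE = -1.2, 0.0, -1.8

B = np.array([[-3.0 * (1.0 + mu),              -3.0 * nu],
              [ 3.0 * mu,          -3.0 * (1.0 + w_DE - nu)]])

eigvals, eigvecs = np.linalg.eig(B)
print("Tr B  =", np.trace(B))        # > 0: required for a repeller
print("Det B =", np.linalg.det(B))   # > 0: required for a repeller
print("eigenvalues :", eigvals)      # both positive -> orbits grow away from
print("eigenvectors:\n", eigvecs)    # the origin, i.e. singular behavior
```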
All these conditions lead to the following constraints that the parameters \(\mu\) and \(\nu\) need to satisfy: \[\left\{\begin{array}{rcl}2+\mu-\nu+w_{\rm DE}&<&0,\\ (1+\mu)(1+w_{\rm DE})-\nu&>&0,\\ (w_{\rm DE}-\mu-\nu)^{2}-4\mu\nu&>&0,\end{array}\right. \tag{86}\] where the first inequality is \({\rm Tr}B>0\), the second is \({\rm Det}B>0\) and the third is \(\Delta>0\). On the other hand, the eigenvalues of \(B\) are given by \(\lambda_{\pm}=({\rm Tr}B\pm\sqrt{\Delta})/2\), and the corresponding eigenvectors are given as follows: 1. For \(\nu\neq 0\), \[V_{\pm}=\left(1,-\frac{\mu+1+\lambda_{\pm}/3}{\nu}\right)\,.\] (87) 2. For \(\nu=0\), the eigenvalues are \(\lambda_{+}=-3(1+w_{\rm DE})\) and \(\lambda_{-}=-3(1+\mu)\). This implies that \(w_{\rm DE}<-1\) (phantom fluid) and also \(\mu<-1\). The eigenvectors are given by \[V_{+}=(0,1),\quad V_{-}=\left(1,\frac{\mu}{w_{\rm DE}-\mu}\right)\,.\] (88) Then, the conditions \(V_{1,\pm}\geq 0\) and \(V_{2,\pm}\geq 0\) lead to the following restrictions: \[\left\{\begin{array}{cc}-\frac{1}{\nu}\left(\mu+1+\frac{\lambda_{+}}{3}\right)\geq 0,&\mbox{for}\ \ \nu\neq 0,\\ w_{\rm DE}<\mu\leq 0,&\mbox{for}\ \ \nu=0\,.\end{array}\right. \tag{89}\] Thus, the condition for the initial condition to lie in the region defined by the orbits \(X_{+}(N)\) and \(X_{-}(N)\), so that the energy densities are always positive, is \[\min\left(V_{2,+},V_{2,-}\right)\leq\frac{\rho_{\rm DE,0}}{\rho_{\rm DM,0}}\leq\max\left(V_{2,+},V_{2,-}\right) \tag{90}\] for \(\nu\neq 0\), and \[\frac{\rho_{\rm DE,0}}{\rho_{\rm DM,0}}\geq\frac{\mu}{w_{\rm DE}-\mu}\,, \tag{91}\] for \(\nu=0\). Now, considering the simple case \(\nu=0\), and taking into account that \(\rho_{\rm DE,0}/\rho_{\rm DM,0}=\Omega_{\rm DE,0}/\Omega_{\rm DM,0}\), where, as usual, we denote \(\Omega_{A}=\frac{\kappa^{2}\rho_{A}}{3H^{2}}\), and taking the observationally reliable values of the present day density parameters \(\Omega_{\rm DM,0}\cong 0.262\) and \(\Omega_{\rm DE,0}\cong 0.69\), from Eq. (91) we deduce that the parameter \(\mu\) must satisfy the condition \[0.72w_{\rm DE}<\mu\leq 0\,, \tag{92}\] and from Eq. (86), we conclude that the value of the parameter \(\mu\) has to satisfy \[0.72w_{\rm DE}<\mu<-1,\quad\mbox{ with }\quad w_{\rm DE}<-1.38\,. \tag{93}\] Finally, we consider the case where the origin is a saddle point, i.e., when \({\rm Det}B<0\), which leads to the following constraint \[(1+\mu)(1+w_{\rm DE})-\nu<0\,. \tag{94}\] This constraint has to be added to the constraints in Eqs. (89), (90) and (91). For the simple case with \(\nu=0\), one gets the following range of the parameters \[0.72w_{\rm DE}<\mu<0,\quad w_{\rm DE}<-1,\quad\mu>-1\,. \tag{95}\] ## IV Finite time singularities in modified gravity theories Modified gravity theories are rich from both theoretical and observational perspectives [131; 132; 133; 134; 135; 136; 137; 138; 139; 290; 291; 292; 293] (also see the review articles in this direction [121; 294; 295; 296; 297; 298; 299; 300; 301; 302; 303; 304; 305] and the references therein). It has been consistently observed that modified gravity models can successfully describe both the late-time accelerating expansion of the Universe [133; 136; 141; 142; 146; 295; 567] and its evolution in the early phase, known as the inflationary era [164; 295; 568; 569; 570; 571].
Additionally, it has been found that modified gravity theories can also unify both the inflationary and DE eras in a single picture [135; 163; 164; 165; 166; 167; 168; 169; 307; 309; 329; 330; 332; 572; 573]. Thus, undoubtedly, modified gravity theories are very appealing for explaining the different phases of the Universe. However, as there is no unique way to modify the gravitational sector, over the years several modified gravity models have been introduced and confronted with the observational data. Depending on the qualitative nature of a specific modified gravity model, it can lead to finite-time singularities in the future. In this section we shall discuss how finite-time singularities appear in different modified gravity theories. In the case of GR, we have already seen that the cosmological equations can be written as \[\rho=\frac{3H^{2}}{\kappa^{2}}\,,\quad p=-\frac{1}{\kappa^{2}}\left(2\dot{H}+3H^{2}\right)\,. \tag{96}\] Motivated by Eq. (96), we may define the effective energy density \(\rho_{\rm eff}\) and the effective pressure \(p_{\rm eff}\) as follows, \[\rho_{\rm eff}\equiv\frac{3H^{2}}{\kappa^{2}}\,,\quad p_{\rm eff}\equiv-\frac{1}{\kappa^{2}}\left(2\dot{H}+3H^{2}\right)\,. \tag{97}\] Note that \(\rho_{\rm eff}\) and \(p_{\rm eff}\) defined in Eq. (97) satisfy the conservation equation \[\dot{\rho}_{\rm eff}+3H(\rho_{\rm eff}+p_{\rm eff})=0. \tag{98}\] We now assume that the Hubble rate \(H\) behaves as in (59), \[H\sim\frac{h_{s}}{\left(t_{s}-t\right)^{\beta}}\,, \tag{99}\] with a constant \(h_{s}>0\), when \(t\lesssim t_{s}\). When \(\beta<1\) and different from zero, \(\rho_{\rm eff}\) and \(p_{\rm eff}\) behave as \(\rho_{\rm eff}\sim\left(t_{s}-t\right)^{-2\beta}\), \(p_{\rm eff}\sim\left(t_{s}-t\right)^{-\beta-1}\). On the other hand, when \(\beta\geq 1\), we find \(\rho_{\rm eff}\sim p_{\rm eff}\sim\left(t_{s}-t\right)^{-2\beta}\). Then \(\beta\geq 1\) corresponds to a Type I singularity, \(0<\beta<1\) to Type III, \(-1<\beta<0\) to Type II, and the case that \(\beta<-1\) and \(\beta\) is not an integer corresponds to Type IV; a compact summary of this classification is sketched below. In the case of a Type IV singularity, if \(-n<\beta<1-n\) for a positive integer \(n\), then \(\frac{d^{m}H}{dt^{m}}\) (\(m\geq n\)) diverges at \(t=t_{s}\). For \(\frac{d^{m}H}{dt^{m}}\) (\(0\leq m\leq n-1\)), the behavior of (99) can be modified to be \[H=H_{s}(t)+\frac{h_{s}}{\left(t_{s}-t\right)^{\beta}}\,. \tag{100}\] Here \(H_{s}(t)\) is a function which can be differentiated \(n\) times (that is, it is a \(\mathcal{C}^{n}\) class function). Then for \(\frac{d^{m}H}{dt^{m}}\) (\(0\leq m\leq n-1\)), the first term in (100) becomes dominant when \(t\sim t_{s}\), and for \(\frac{d^{m}H}{dt^{m}}\) (\(m\geq n\)), the second term dominates when \(t\sim t_{s}\). In this section, by using the formalism of reconstruction, we construct models which realize the above singularities in (99). Usually, we start from a theory, which is defined by its action, and solve the equations of motion to find the background dynamics. We may consider, however, the inverse problem, i.e., the cosmological reconstruction of gravitational theories: using the fact that modified gravity is defined in terms of some arbitrary function(s), we can show how a complicated background cosmology, which complies with the observational data, may be reconstructed. The general approach to reconstruction in modified gravity and DE models was developed in Refs. [151; 154; 574].
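As promised above, the \(\beta\)-classification of Eq. (99) can be condensed into a small helper function. This is only a sketch of the conventions just stated (our own illustration; boundary cases such as integer \(\beta<-1\) fall outside Types I-IV, as in the text):

```python
def singularity_type(beta):
    """Classify the singularity of H ~ h_s * (t_s - t)**(-beta), per Eq. (99).

    Follows the text: beta >= 1 -> Type I, 0 < beta < 1 -> Type III,
    -1 < beta < 0 -> Type II, non-integer beta < -1 -> Type IV.
    """
    if beta >= 1:
        return "Type I (Big Rip)"
    if 0 < beta < 1:
        return "Type III"
    if -1 < beta < 0:
        return "Type II"
    if beta < -1 and not float(beta).is_integer():
        return "Type IV"
    return "none of Types I-IV"

for b in (2, 0.5, -0.5, -1.5, -2):
    print(f"beta = {b:+}: {singularity_type(b)}")
```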
### Scalar-tensor Gravity We recall the reconstruction of scalar-Einstein gravity (or scalar-tensor theory), whose action can be written as \[S=\int d^{4}x\sqrt{-g}\left\{\frac{1}{2\kappa^{2}}R-\frac{1}{2}\omega(\phi)\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)+L_{\rm matter}\right\}\,. \tag{101}\] Here, \(L_{\rm matter}\) is the matter Lagrangian, and \(\omega(\phi)\) and \(V(\phi)\) are functions of the scalar \(\phi\). The function \(\omega(\phi)\) is not relevant and can be absorbed into a redefinition of the scalar field \(\phi\). In fact, if one redefines the scalar field \(\phi\) by \[\varphi\equiv\int^{\phi}d\phi\sqrt{\left|\omega(\phi)\right|}\,, \tag{102}\] the kinetic term of the scalar field in the action (101) has the following form: \[-\omega(\phi)\partial_{\mu}\phi\partial^{\mu}\phi=\left\{\begin{array}{ll}-\partial_{\mu}\varphi\partial^{\mu}\varphi,&\mbox{when $\omega(\phi)>0$},\\ \partial_{\mu}\varphi\partial^{\mu}\varphi,&\mbox{when $\omega(\phi)<0$}.\end{array}\right. \tag{103}\] The case of \(\omega(\phi)>0\) corresponds to the quintessence or non-phantom scalar field, while the case of \(\omega(\phi)<0\) corresponds to the phantom scalar. Although \(\omega(\phi)\) can be absorbed into the redefinition of the scalar field, we keep \(\omega(\phi)\) for later convenience. The reconstruction of scalar-tensor gravity is based on Refs. [575, 576, 577]. For the action (101), in the flat FLRW spacetime (3), the energy density and the pressure are as follows, \[\rho=\frac{1}{2}\omega(\phi)\dot{\phi}^{2}+V(\phi)\,,\quad p=\frac{1}{2}\omega(\phi)\dot{\phi}^{2}-V(\phi)\,. \tag{104}\] The above equations in (104) can be rewritten as \[\omega(\phi)\dot{\phi}^{2}=-\frac{2}{\kappa^{2}}\dot{H}\,,\quad V(\phi)=\frac{1}{\kappa^{2}}\left(3H^{2}+\dot{H}\right)\,. \tag{105}\] Assuming that \(\omega(\phi)\) and \(V(\phi)\) are given by a single function \(f(\phi)\), as follows, \[\omega(\phi)=-\frac{2}{\kappa^{2}}f^{\prime}(\phi)\,,\quad V(\phi)=\frac{1}{\kappa^{2}}\left(3f(\phi)^{2}+f^{\prime}(\phi)\right)\,, \tag{106}\] we find that, in the case where we neglect the contribution from matter, the exact solution of the Friedmann and Raychaudhuri equations, or the FLRW equations, with (104) has the following form: \[\phi=t\,,\quad H=f(t)\,. \tag{107}\] We also note that the equation given by the variation with respect to \(\phi\), \[0=\omega(\phi)\ddot{\phi}+\frac{1}{2}\omega^{\prime}(\phi)\dot{\phi}^{2}+3H\omega(\phi)\dot{\phi}+V^{\prime}(\phi)\,, \tag{108}\] is also satisfied by the solution (107). Therefore, an arbitrary Universe evolution expressed by \(H=f(t)\) can be realized by an appropriate choice of \(\omega(\phi)\) and \(V(\phi)\). In other words, by defining the particular type of Universe evolution, the corresponding scalar-Einstein gravity can be found. Especially in the case of the singular behavior in (99), we find \[\omega(\phi)=-\frac{2\beta h_{s}}{\kappa^{2}\left(t_{s}-\phi\right)^{\beta+1}}\,,\quad V(\phi)=\frac{1}{\kappa^{2}}\left(\frac{3h_{s}^{2}}{\left(t_{s}-\phi\right)^{2\beta}}+\frac{\beta h_{s}}{\left(t_{s}-\phi\right)^{\beta+1}}\right)\,. \tag{109}\] We should note that in the case that \(\beta\) is positive, \(\omega(\phi)\) becomes negative when \(\phi<t_{s}\), which means that the scalar field \(\phi\) is a ghost. The ghost generates negative norm states in the quantum theory and conflicts with the so-called Copenhagen interpretation.
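Incidentally, the statement below Eq. (108), namely that \(\phi=t\), \(H=f(t)\) solves the scalar field equation for an arbitrary \(f\) once \(\omega\) and \(V\) are chosen as in Eq. (106), can be verified symbolically. Here is a minimal sympy sketch (our own check, not part of the original derivation):

```python
import sympy as sp

t, kappa = sp.symbols('t kappa', positive=True)
f = sp.Function('f')

# Eq. (106), evaluated on the solution phi = t of Eq. (107):
omega = -2 * f(t).diff(t) / kappa**2
V = (3 * f(t)**2 + f(t).diff(t)) / kappa**2

# Eq. (108) with phi = t (so phi_dot = 1, phi_ddot = 0) and H = f(t):
# omega*phi_ddot + (1/2)*omega'*phi_dot**2 + 3*H*omega*phi_dot + V'
field_eq = sp.Rational(1, 2) * omega.diff(t) + 3 * f(t) * omega + V.diff(t)

print(sp.simplify(field_eq))   # prints 0: Eq. (107) solves Eq. (108) for any f
```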
By using other kinds of modified gravity theories, we can realize the behavior of (99) without generating the ghost. Now, from Eq. (102), one can see that the new field \(\varphi\) is given by \[\varphi=\left\{\begin{array}{ll}\frac{2\sqrt{-\beta h_{s}}}{\kappa\left(1-\beta\right)}\left(t_{s}-\phi\right)^{\frac{1-\beta}{2}},&\text{when }\beta<0,\\ \frac{2\sqrt{\beta h_{s}}}{\kappa\left(1-\beta\right)}\left(t_{s}-\phi\right)^{\frac{1-\beta}{2}},&\text{when }\beta>0,\quad\text{but}\quad\beta\neq 1,\\ \frac{2\sqrt{h_{s}}}{\kappa}\ln\left(t_{s}-\phi\right),&\text{when }\beta=1.\end{array}\right. \tag{110}\] Therefore, the action (101) has the following form when \(\beta<0\), \[S=\int d^{4}x\sqrt{-g}\left\{\frac{1}{2\kappa^{2}}R-\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi-\frac{1}{\kappa^{2}}\left(3h_{s}^{2}\left(\frac{2\sqrt{-\beta h_{s}}}{\kappa\left(1-\beta\right)}\right)^{\frac{4\beta}{1-\beta}}\varphi^{\frac{4\beta}{1-\beta}}\right.\right. \tag{111}\] \[\left.\left.+h_{s}\beta\left(\frac{2\sqrt{-\beta h_{s}}}{\kappa\left(1-\beta\right)}\right)^{-\frac{\beta\left(\beta+1\right)}{1-\beta}}\varphi^{\frac{\beta\left(\beta+1\right)}{1-\beta}}\right)+L_{\text{matter}}\right\}\,,\] when \(\beta>0\) and \(\beta\neq 1\), \[S=\int d^{4}x\sqrt{-g}\left\{\frac{1}{2\kappa^{2}}R+\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi-\frac{1}{\kappa^{2}}\left(3h_{s}^{2}\left(\frac{2\sqrt{\beta h_{s}}}{\kappa\left(1-\beta\right)}\right)^{\frac{4\beta}{1-\beta}}\varphi^{\frac{4\beta}{\beta+1}}\right.\right. \tag{112}\] \[\left.\left.+h_{s}\beta\left(\frac{2\sqrt{\beta h_{s}}}{\kappa\left(1-\beta\right)}\right)^{-\frac{\beta\left(\beta+1\right)}{1-\beta}}\varphi^{\frac{\beta\left(\beta+1\right)}{1-\beta}}\right)+L_{\text{matter}}\right\}\,,\] and when \(\beta=1\), \[S=\int d^{4}x\sqrt{-g}\,\left\{\frac{1}{2\kappa^{2}}R+\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi-\frac{1}{\kappa^{2}}\left(3h_{s}^{2}+h_{s}\right)\mathrm{e}^{-\frac{\kappa\varphi}{\sqrt{h_{s}}}}+L_{\mathrm{matter}}\right\}\,. \tag{113}\] In the case of Eqs. (112) and (113), the sign in front of the kinetic term of the scalar field \(\varphi\) is non-canonical, and therefore the scalar field becomes a ghost. Eq. (112) with \(\beta>1\) and Eq. (113) correspond to a Type I singularity; Eq. (112) with \(0<\beta<1\) corresponds to a Type III singularity; Eq. (111) with \(-1<\beta<0\) corresponds to a Type II singularity; and Eq. (111) where \(\beta<-1\) and \(\beta\) is not a negative integer corresponds to a Type IV singularity. ### Brans-Dicke Gravity The action of the original Brans-Dicke model is given by [578] \[S_{\mathrm{BD}}=\frac{1}{2\kappa^{2}}\int d^{4}x\sqrt{-g}\left(\phi R-\omega_{0}\frac{\partial_{\mu}\phi\partial^{\mu}\phi}{\phi}\right)\,. \tag{114}\] Here \(\omega_{0}\) is a constant known as the Brans-Dicke coupling constant. We now consider the following generalization, \[S=\int d^{4}x\sqrt{-g}\left[\frac{\mathrm{e}^{\varphi(\phi)}R}{2\kappa^{2}}-\frac{1}{2}\omega(\phi)\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)+L_{\mathrm{matter}}\right]\,. \tag{115}\] In the original Brans-Dicke model, there is a strong constraint on the Brans-Dicke coupling constant, \(\omega_{0}>40,000\), but in the generalized model (115), by adjusting the parameters in the potential \(V(\phi)\), which includes the mass term, we can escape from the constraint.
The generalized model was applied to the DE problem in [139], and it was found that the phantom Universe can be realized even if \(\omega(\phi)>0\), that is, even if the scalar field is canonical. This is in contrast to the case of the scalar-tensor model in Eq. (101), where, as discussed after Eq. (109), the ghost appears for positive \(\beta\), that is, in the case of the phantom Universe. If we re-scale the metric by \(g_{\mu\nu}=\mathrm{e}^{-\varphi(\phi)}g_{\mu\nu}^{(\mathrm{E})}\), the action (115) takes the following form, \[S=\int d^{4}x\sqrt{-g^{(E)}} \left[\frac{R}{2\kappa^{2}}-\frac{1}{2}\left(\mathrm{e}^{-\varphi (\phi)}\omega(\phi)+\frac{3}{2\kappa^{2}}\varphi^{\prime}(\phi)^{2}\right) \partial_{\mu}\phi\partial^{\mu}\phi-\mathrm{e}^{-2\varphi(\phi)}V(\phi)\right. \tag{116}\] \[\left.+\mathrm{e}^{-2\varphi(\phi)}\left.L_{\mathrm{matter}} \right|_{g_{\mu\nu}=\mathrm{e}^{-\varphi(\phi)}g_{\mu\nu}^{(E)}}\right]\,.\] The action in Eq. (116) is called the Einstein frame action. Except for the matter part, it can be regarded as the action of the scalar-tensor theory given in Eq. (101). In the Einstein frame, there appears a coupling of the scalar field \(\phi\) with the matter due to the rescaling \(g_{\mu\nu}=\mathrm{e}^{-\varphi(\phi)}g_{\mu\nu}^{(E)}\). If the matter is minimally coupled with the gravity in the action (115), the cosmological time observed by an observer made of the matter is measured by the original metric \(g_{\mu\nu}\), not by \(g_{\mu\nu}^{(E)}\). Hence, even if the expansion of the Universe is not accelerating in the Einstein frame, it may be accelerating in the original frame given by the original metric \(g_{\mu\nu}\), which is called the Jordan frame. It is straightforward, however, to discuss the existence of the ghost in the Einstein frame because there is no direct coupling of the scalar field \(\phi\) with the scalar curvature \(R\). The ghost can be avoided if \[\mathrm{e}^{-\varphi(\phi)}\omega(\phi)+\frac{3}{2\kappa^{2}}\varphi^{\prime}(\phi)^{2}>0\,. \tag{117}\] The Friedmann and Raychaudhuri equations (the FLRW equations) are given as follows, \[H^{2}=\frac{\mathrm{e}^{-\varphi}\kappa^{2}}{3}\left[\frac{1}{2 }\omega(\phi)\dot{\phi}^{2}+V(\phi)\right]\,,\quad\dot{H}=-\frac{\mathrm{e}^{- \varphi}\kappa^{2}}{2}\omega(\phi)\dot{\phi}^{2}-\frac{1}{2}\left(\ddot{ \varphi}+\dot{\varphi}^{2}\right)\,, \tag{118}\] which can be rewritten as \[\omega(\phi)\dot{\phi}^{2}=-\frac{2\mathrm{e}^{\varphi}}{\kappa^{2}}\left[ \dot{H}+\frac{1}{2}\left(\ddot{\varphi}+\dot{\varphi}^{2}\right)\right]\,, \quad V(\phi)=\frac{2\mathrm{e}^{\varphi}}{\kappa^{2}}\left[\dot{H}+3H^{2}+ \frac{1}{2}\left(\ddot{\varphi}+\dot{\varphi}^{2}\right)\right]\,. \tag{119}\] Then, if we take the following choice of \(\omega(\phi)\) and \(V(\phi)\) in terms of a function \(f(\phi)\), \[\omega(\phi)= -\frac{2\mathrm{e}^{\varphi(\phi)}}{\kappa^{2}}\left[f^{\prime}( \phi)+\frac{1}{2}\left(\varphi^{\prime\prime}(\phi)+(\varphi^{\prime}(\phi))^ {2}\right)\right]\,,\] \[V(\phi)= \frac{2\mathrm{e}^{\varphi(\phi)}}{\kappa^{2}}\left[f^{\prime}( \phi)+3f(\phi)^{2}+\frac{1}{2}\left(\varphi^{\prime\prime}(\phi)+(\varphi^{ \prime}(\phi))^{2}\right)\right]\,, \tag{120}\] the explicit solution is again given by Eq. (107). Note that \(\varphi(\phi)\) can be an arbitrary function of \(\phi\). In the case of the singular behavior in Eq.
(99), we find \[\omega(\phi) = \frac{2\mathrm{e}^{\varphi(\phi)}}{\kappa^{2}}\left[-\frac{\beta h _{s}}{\left(t_{s}-\phi\right)^{\beta+1}}-\frac{1}{2}\left(\varphi^{\prime\prime }(\phi)+(\varphi^{\prime}(\phi))^{2}\right)\right]\,,\] \[V(\phi) = \frac{2\mathrm{e}^{\varphi(\phi)}}{\kappa^{2}}\left[\frac{\beta h _{s}}{\left(t_{s}-\phi\right)^{\beta+1}}+\frac{3h_{s}^{2}}{\left(t_{s}-\phi \right)^{2\beta}}+\frac{1}{2}\left(\varphi^{\prime\prime}(\phi)+(\varphi^{ \prime}(\phi))^{2}\right)\right]\,. \tag{121}\] By using Eq. (121), the action in Eq. (115) takes the following form, \[S = \int d^{4}x\left[\frac{\mathrm{e}^{\varphi(\phi)}R}{2 \kappa^{2}}-\frac{\mathrm{e}^{\varphi(\phi)}}{\kappa^{2}}\left\{-\frac{\beta h _{s}}{\left(t_{s}-\phi\right)^{\beta+1}}-\frac{1}{2}\left(\varphi^{\prime \prime}(\phi)+(\varphi^{\prime}(\phi))^{2}\right)\right\}\partial_{\mu}\phi \partial^{\mu}\phi\right. \tag{122}\] \[\left.-\frac{2\mathrm{e}^{\varphi(\phi)}}{\kappa^{2}}\left\{ \frac{\beta h_{s}}{\left(t_{s}-\phi\right)^{\beta+1}}+\frac{3h_{s}^{2}}{\left( t_{s}-\phi\right)^{2\beta}}+\frac{1}{2}\left(\varphi^{\prime\prime}(\phi)+( \varphi^{\prime}(\phi))^{2}\right)\right\}+L_{\mathrm{matter}}\right]\,,\] where the factor \(\sqrt{-g}\) is understood. In contrast to the case of the scalar-tensor theory (109), by adjusting the function \(\varphi(\phi)\), we can choose \(\omega(\phi)\) to be positive and avoid the ghost even if \(\beta\) is positive. In fact, the left hand side (l.h.s.) of Eq. (117) now has the following form, \[\mathrm{e}^{-\varphi(\phi)}\omega(\phi)+\frac{3}{2\kappa^{2}}\varphi^{\prime} (\phi)^{2}=\frac{1}{\kappa^{2}}\left[-2\frac{\beta h_{s}}{\left(t_{s}-\phi \right)^{\beta+1}}-\varphi^{\prime\prime}(\phi)+\frac{1}{2}(\varphi^{\prime}( \phi))^{2}\right]\,. \tag{123}\] Then, for example, when \(\beta\neq 1\), if we choose \[\varphi(\phi)=\frac{2h_{s}}{1-\beta}\left(t_{s}-\phi\right)^{1-\beta}\,, \tag{124}\] we obtain \[\mathrm{e}^{-\varphi(\phi)}\omega(\phi)+\frac{3}{2\kappa^{2}}\varphi^{\prime} (\phi)^{2}=\frac{2h_{s}^{2}}{\kappa^{2}\left(t_{s}-\phi\right)^{2\beta}}>0\,. \tag{125}\] Therefore, the condition (117) is satisfied. When \(\beta=1\), we may choose \[\varphi(\phi)=2h_{s}\ln\left(t_{s}-\phi\right)\,, \tag{126}\] and we obtain \[\mathrm{e}^{-\varphi(\phi)}\omega(\phi)+\frac{3}{2\kappa^{2}}\varphi^{\prime} (\phi)^{2}=\frac{2h_{s}^{2}}{\kappa^{2}\left(t_{s}-\phi\right)^{2}}>0\,, \tag{127}\] so that the condition (117) is satisfied again. By using (124), when \(\beta\neq 1\), the action (122) can be rewritten as \[S=\int d^{4}x\sqrt{-g}\left[\frac{\mathrm{e}^{\frac{2h_{s}}{1-\beta}\left(t_{ s}-\phi\right)^{1-\beta}}R}{2\kappa^{2}}+\frac{2h_{s}^{2}\mathrm{e}^{\frac{2h_{s}}{1- \beta}\left(t_{s}-\phi\right)^{1-\beta}}}{\kappa^{2}\left(t_{s}-\phi\right)^ {2\beta}}\partial_{\mu}\phi\partial^{\mu}\phi-\frac{10h_{s}^{2}\mathrm{e}^{ \frac{2h_{s}}{1-\beta}\left(t_{s}-\phi\right)^{1-\beta}}}{\kappa^{2}\left(t_ {s}-\phi\right)^{2\beta}}+L_{\mathrm{matter}}\right]\,. \tag{128}\] Furthermore, if we define a new scalar field \(\xi\) by \[\xi=\frac{2}{\kappa}\mathrm{e}^{\frac{h_{s}}{1-\beta}\left(t_{s}-\phi\right)^{ 1-\beta}}\,, \tag{129}\] the action (128) is further rewritten as \[S=\int d^{4}x\sqrt{-g}\left[\frac{\xi^{2}R}{8}+\frac{1}{2}\partial_{\mu}\xi \partial^{\mu}\xi-\frac{5h_{s}^{2}\xi^{2}}{2}\left(\frac{1-\beta}{h_{s}}\ln \frac{\kappa\xi}{2}\right)^{\frac{-2\beta}{1-\beta}}+L_{\mathrm{matter}} \right]\,.
\tag{130}\] On the other hand, when \(\beta=1\), by using (126) the action (122) is rewritten as \[S=\int d^{4}x\sqrt{-g}\,\left[\frac{\left(t_{s}-\phi\right)^{2h_{s}}R}{2\kappa^{2} }+\frac{2h_{s}^{2}\left(t_{s}-\phi\right)^{2h_{s}-2}}{\kappa^{2}}\partial_{\mu} \phi\partial^{\mu}\phi-\frac{10h_{s}^{2}\left(t_{s}-\phi\right)^{2h_{s}-2}}{ \kappa^{2}}+L_{\rm matter}\right]\,. \tag{131}\] In addition, if we define a new scalar field \(\xi\) by \[\xi=\frac{2}{\kappa}\left(t_{s}-\phi\right)^{h_{s}}\,, \tag{132}\] the action (131) is further rewritten as \[S=\int d^{4}x\sqrt{-g}\left[\frac{\xi^{2}R}{8}+\frac{1}{2}\partial_{\mu}\xi \partial^{\mu}\xi-\frac{10h_{s}^{2}}{\kappa^{2}}\left(\frac{\kappa\xi}{2} \right)^{2-\frac{2}{h_{s}}}+L_{\rm matter}\right]\,. \tag{133}\] We should note that the sign in front of the kinetic term of \(\xi\) in the action (130) or (133) might suggest that \(\xi\) is a ghost; however, as is clear from the Einstein frame action (116), since the condition (117) is satisfied, as shown in (125) or (127), no ghost appears in the model.

### The \(k\)-essence Model

Here, based on [579], we review the reconstruction of the \(k\)-essence model, which is a generalization of the quintessence theory. The \(k\)-essence model is a rather general model that includes only one scalar field, and the action is given by \[S=\int d^{4}x\sqrt{-g}\left(\frac{R}{2\kappa^{2}}-K\left(\phi,X \right)+L_{\rm matter}\right)\,,\quad X\equiv\partial^{\mu}\phi\partial_{\mu} \phi\,. \tag{134}\] Here, \(\phi\) is a scalar field, again. The Einstein equation has the following form: \[\frac{1}{\kappa^{2}}\left(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\right)=-K\left( \phi,X\right)g_{\mu\nu}+2K_{X}\left(\phi,X\right)\partial_{\mu}\phi\partial_{ \nu}\phi+T_{\mu\nu}\,. \tag{135}\] Here, \(K_{X}\left(\phi,X\right)\equiv\frac{\partial K\left(\phi,X\right)}{\partial X}\). On the other hand, the variation of the action with respect to \(\phi\) gives \[0=-K_{\phi}\left(\phi,X\right)+2\nabla^{\mu}\left(K_{X}\left(\phi,X \right)\partial_{\mu}\phi\right)\,. \tag{136}\] Here, \(K_{\phi}\left(\phi,X\right)\equiv\frac{\partial K\left(\phi,X\right)}{\partial\phi}\), and it is assumed that the scalar field \(\phi\) does not directly couple with the matter. When we neglect the contribution from the matter, the Friedmann and Raychaudhuri equations (the FLRW equations) are given by \[\frac{3}{\kappa^{2}}H^{2}=2X\frac{\partial K\left(\phi,X\right)}{\partial X} -K\left(\phi,X\right)\,,\quad-\frac{1}{\kappa^{2}}\left(2\dot{H}+3H^{2}\right) =K\left(\phi,X\right)\,. \tag{137}\] As in the previous models, if we consider the following model \[K(\phi,X)=\sum_{n=0}^{\infty}\left(X+1\right)^{n}K^{(n)}(\phi) \,,\quad K^{(0)}(\phi)=-\frac{1}{\kappa^{2}}\left(2f^{\prime}(\phi)+3f(\phi) ^{2}\right)\,,\quad K^{(1)}(\phi)=\frac{1}{\kappa^{2}}f^{\prime}(\phi)\,, \tag{138}\] there exists a solution given by (107), again. Note that in (138), the \(K^{(n)}(\phi)\) with \(n\geq 2\) can be arbitrary functions, but \(K^{(2)}\) is related to the stability of the solution and \(K^{(3)}\) is related to the existence of the Schwarzschild spacetime. As in the previous models, we can realize the singular behavior in (99) by choosing \(f(\phi)=\frac{h_{s}}{\left(t_{s}-\phi\right)^{\beta}}\).
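As a consistency check (ours, not in the source), note that on the solution (107) one has \(X=g^{tt}\dot{\phi}^{2}=-1\), so all terms with \(n\geq 2\) in (138) drop out of the FLRW equations (137); a minimal sympy sketch:

```python
# Sketch: on the solution (107), phi = t gives X = -1, so only K^(0) and K^(1)
# of the expansion (138) enter the FLRW equations (137).
import sympy as sp

t, X = sp.symbols('t X', real=True)
kappa = sp.symbols('kappa', positive=True)
f = sp.Function('f')

K0 = -(2*f(t).diff(t) + 3*f(t)**2)/kappa**2   # K^(0) of Eq. (138)
K1 = f(t).diff(t)/kappa**2                    # K^(1) of Eq. (138)
K = K0 + (X + 1)*K1                           # (X+1)^n terms, n >= 2, vanish at X = -1
KX = K.diff(X)

H = f(t)
Xsol = -1                                     # X = g^{tt} phi_dot^2 = -1 for phi = t

eq1 = 3*H**2/kappa**2 - (2*X*KX - K).subs(X, Xsol)        # first equation of (137)
eq2 = -(2*H.diff(t) + 3*H**2)/kappa**2 - K.subs(X, Xsol)  # second equation of (137)
print(sp.simplify(eq1), sp.simplify(eq2))                 # -> 0 0
```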
More explicitly, \(K(\phi,X)\) in (138) is given by \[K(\phi,X)=-\frac{1}{\kappa^{2}}\left(\frac{2\beta h_{s}}{\left(t_{s}- \phi\right)^{\beta+1}}+\frac{3h_{s}^{2}}{\left(t_{s}-\phi\right)^{2\beta}}\right)+ \frac{\left(X+1\right)\beta h_{s}}{\kappa^{2}\left(t_{s}-\phi\right)^{\beta+1}}+\sum_{n=2}^{\infty} \left(X+1\right)^{n}K^{(n)}(\phi)\,. \tag{139}\] Then, the model with \(\beta\geq 1\) generates a Type I singularity, the model with \(0<\beta<1\) a Type III singularity, the model with \(-1<\beta<0\) a Type II singularity, and the model where \(\beta<-1\) and \(\beta\) is not a negative integer a Type IV singularity.

### Scalar-Einstein-Gauss-Bonnet Gravity

We now consider the reconstruction of the scalar-Einstein-Gauss-Bonnet gravity (for pioneering work on the scalar-Einstein-Gauss-Bonnet gravity, see [580]) based on [581, 140]. The scalar-Einstein-Gauss-Bonnet gravity was first considered as a candidate for the DE in [140]. The action of the Einstein-Gauss-Bonnet gravity with a scalar field \(\phi\) is \[S=\int d^{4}x\sqrt{-g}\left[\frac{R}{2\kappa^{2}}-\frac{1}{2}\partial_{\mu} \phi\partial^{\mu}\phi-V(\phi)-\xi(\phi)G\right]\,. \tag{140}\] Here \(G\) is the Gauss-Bonnet invariant defined by \[G=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\xi\sigma}R^{\mu\nu\xi\sigma}\,. \tag{141}\] The variation of the action (140) with respect to the scalar field \(\phi\) yields the following equation: \[\nabla^{2}\phi-V^{\prime}(\phi)-\xi^{\prime}(\phi)G=0\,. \tag{142}\] When we neglect the contribution from the matter, the variation of the action (140) with respect to the metric \(g_{\mu\nu}\) yields the following field equations, \[0= \frac{1}{2\kappa^{2}}\left(-R^{\mu\nu}+\frac{1}{2}g^{\mu\nu}R \right)+\left(\frac{1}{2}\partial^{\mu}\phi\partial^{\nu}\phi-\frac{1}{4}g^{ \mu\nu}\partial_{\rho}\phi\partial^{\rho}\phi\right)-\frac{1}{2}g^{\mu\nu}V(\phi)\] \[+2\left(\nabla^{\mu}\nabla^{\nu}\xi(\phi)\right)R-2g^{\mu\nu} \left(\nabla^{2}\xi(\phi)\right)R-4\left(\nabla_{\rho}\nabla^{\mu}\xi(\phi) \right)R^{\nu\rho}-4\left(\nabla_{\rho}\nabla^{\nu}\xi(\phi)\right)R^{\mu\rho}\] \[+4\left(\nabla^{2}\xi(\phi)\right)R^{\mu\nu}+4g^{\mu\nu}\left( \nabla_{\rho}\nabla_{\sigma}\xi(\phi)\right)R^{\rho\sigma}-4\left(\nabla_{ \rho}\nabla_{\sigma}\xi(\phi)\right)R^{\mu\rho\nu\sigma}. \tag{143}\] We should note that the scalar field equation (142) can be obtained from Eq. (143). Therefore, Eq. (142) is not an independent equation, and we can disregard it hereafter. In the flat FLRW spacetime (3), Eq. (143) takes the following forms, \[0= -\frac{3}{\kappa^{2}}H^{2}+\frac{1}{2}\dot{\phi}^{2}+V(\phi)+24H^ {3}\frac{d\xi(\phi(t))}{dt}\,, \tag{144}\] \[0= \frac{1}{\kappa^{2}}\left(2\dot{H}+3H^{2}\right)+\frac{1}{2}\dot {\phi}^{2}-V(\phi)-8H^{2}\frac{d^{2}\xi(\phi(t))}{dt^{2}}-16H\dot{H}\frac{d \xi(\phi(t))}{dt}-16H^{3}\frac{d\xi(\phi(t))}{dt}\,. \tag{145}\] By combining (144) and (145) and eliminating \(V(\phi)\), we obtain \[0= \frac{2}{\kappa^{2}}\dot{H}+\dot{\phi}^{2}-8H^{2}\frac{d^{2}\xi( \phi(t))}{dt^{2}}-16H\dot{H}\frac{d\xi(\phi(t))}{dt}+8H^{3}\frac{d\xi(\phi(t) )}{dt}\] \[= \frac{2}{\kappa^{2}}\dot{H}+\dot{\phi}^{2}-8a\frac{d}{dt}\left( \frac{H^{2}}{a}\frac{d\xi(\phi(t))}{dt}\right)\,, \tag{146}\] and solving Eq. (146) for \(\xi(\phi(t))\) by integration, we get \[\xi(\phi(t))= \frac{1}{8}\int^{t}dt_{1}\frac{a(t_{1})}{H(t_{1})^{2}}\int^{t_{1}} \frac{dt_{2}}{a(t_{2})}\left(\frac{2}{\kappa^{2}}\dot{H}(t_{2})+\dot{\phi}^{ 2}(t_{2})\right)\,.
\tag{147}\] Finally, by substituting \(\xi\) in (147) into (144), we have \[V(\phi(t))= \frac{3}{\kappa^{2}}H(t)^{2}-\frac{1}{2}\dot{\phi}(t)^{2}-3a(t)H(t) \int^{t}\frac{dt_{1}}{a(t_{1})}\left(\frac{2}{\kappa^{2}}\dot{H}(t_{1})+\dot{ \phi}^{2}(t_{1})\right)\,. \tag{148}\] Therefore, for the model where \(V(\phi)\) and \(\xi(\phi)\) are given in terms of adequate functions \(g(t)\) and \(f(\phi)\) in the following way, \[V(\phi)= \frac{3}{\kappa^{2}}g^{\prime}\left(f(\phi)\right)^{2}-\frac{1} {2(f^{\prime}(\phi))^{2}}\] \[-3g^{\prime}\left(f(\phi)\right)\mathrm{e}^{g(f(\phi))}\int^{\phi} d\phi_{1}f^{\prime}(\phi_{1})\mathrm{e}^{-g(f(\phi_{1}))}\left(\frac{2}{\kappa^{2}}g^{ \prime\prime}\left(f(\phi_{1})\right)+\frac{1}{(f^{\prime}(\phi_{1}))^{2}} \right)\,,\] \[\xi(\phi)= \frac{1}{8}\int^{\phi}d\phi_{1}\frac{f^{\prime}(\phi_{1})\mathrm{ e}^{g(f(\phi_{1}))}}{(g^{\prime}(f(\phi_{1})))^{2}}\int^{\phi_{1}}d\phi_{2}f^{\prime}(\phi_{2}) \mathrm{e}^{-g(f(\phi_{2}))}\left(\frac{2}{\kappa^{2}}g^{\prime\prime}\left(f( \phi_{2})\right)+\frac{1}{(f^{\prime}(\phi_{2}))^{2}}\right)\,, \tag{149}\] the solution of the field equations (144) and (145) is given by \[\phi=f^{-1}(t)\quad\left(t=f(\phi)\right)\,,\quad a=a_{s}\mathrm{e}^{g(t)}\,\, \left(H=g^{\prime}(t)\right)\,. \tag{150}\] Then, by choosing \[g^{\prime}(t)=\frac{h_{s}}{\left(t_{s}-t\right)^{\beta}}\,, \tag{151}\] we obtain the singular behavior in (99). We should note that, as is clear from the action (140), the ghost does not appear in this model even if \(\beta>0\), that is, in the case of Type I and Type III singularities. It is difficult to perform the integrations in (149) for general \(\beta\) in (151). As an example, we therefore consider the following case, which corresponds to a Type I singularity, \[g(t)=-h_{s}\ln\left(t_{s}-t\right)\,,\quad f=t_{s}-f_{s}\mathrm{e}^{\phi}\,. \tag{152}\] Then, we find \(g=-h_{s}\left(\phi+\ln f_{s}\right)\) and \[V(\phi)= \,\frac{1}{f_{s}^{2}\left(h_{s}+1\right)}\left\{\frac{3h_{s}^{2} }{\kappa^{2}}\left(-h_{s}+1\right)-4h_{s}+1\right\}\mathrm{e}^{-2\phi}\,,\] \[\xi(\phi)= \,\frac{1}{16h_{s}^{2}\left(h_{s}+1\right)}\left(\frac{2h_{s}}{ \kappa^{2}}+1\right)\mathrm{e}^{-2\phi}\,, \tag{153}\] which is the model proposed in [139].

### \(F(R)\) Theories of Gravity

The theory of \(F(R)\) gravity is the simplest and most straightforward generalization of Einstein's General theory of Relativity, and it has received remarkable attention in the cosmological community for two reasons: one is its simple construction, and the other is its ability to explain the late-time accelerating expansion of the Universe as well as the early dynamics of the Universe. Over the last several years, the theory of \(F(R)\) gravity has been investigated by many authors, including its successes and failures in different domains of astrophysics and cosmology [133, 148, 151, 154, 155, 161, 163, 165, 231, 255, 307, 574, 582, 583, 584, 585, 586, 587]. We refer to the review articles on \(F(R)\) gravity for more details in this direction [294, 295, 297, 298]. The action of \(F(R)\) gravity is obtained by replacing the scalar curvature \(R\) of the Einstein-Hilbert action by a suitable function \(F(R)\) as follows [133, 294] \[S_{F(R)}=\int d^{4}x\sqrt{-g}\left[\frac{F(R)}{2\kappa^{2}}+L_{\rm matter} \right]\,. \tag{154}\] One can view \(F(R)\) as the Einstein-Hilbert part plus a modification, \(F(R)=R+f(R)\).
Now, in the background of a spatially flat FLRW Universe (3), one can derive the gravitational equations of this modified gravitational theory in the forms: \[\rho_{\rm eff}=\frac{3}{\kappa^{2}}H^{2}\,,\quad p_{\rm eff}=-\frac{1}{\kappa ^{2}}(2\dot{H}+3H^{2})\,, \tag{155}\] where \(\rho_{\rm eff}\) and \(p_{\rm eff}\) are given by \[\rho_{\rm eff}= \,\frac{1}{\kappa^{2}}\left[-\frac{1}{2}f(R)+3\left(H^{2}+\dot{H }\right)f^{\prime}(R)-18\left(4H^{2}\dot{H}+H\ddot{H}\right)f^{\prime\prime}( R)\right]+\rho\,, \tag{156}\] \[p_{\rm eff}= \,\frac{1}{\kappa^{2}}\left[\frac{1}{2}f(R)-\left(3H^{2}+\dot{H }\right)f^{\prime}(R)+6\left(8H^{2}\dot{H}+4\dot{H}^{2}+6H\ddot{H}+\dddot{H} \right)f^{\prime\prime}(R)+36\left(4H\dot{H}+\ddot{H}\right)^{2}f^{\prime \prime\prime}(R)\right]\] \[+p\,, \tag{157}\] in which \(\rho\), \(p\) are the energy density and pressure of the matter sector, respectively, and \(R=12H^{2}+6\dot{H}\). If the matter sector has a constant EoS parameter \(w=p/\rho\), then one can quickly derive that \[p_{\rm eff}-w\rho_{\rm eff}=G\left(H,\dot{H},\ddot{H},\cdots\right)\,, \tag{158}\] where, by virtue of Eq. (155), \[G\left(H,\dot{H},\ddot{H},...\right)=-\frac{1}{\kappa^{2}}\left(2\dot{H}+3(1+w)H^{ 2}\right)\,, \tag{159}\] while the explicit form of \(G\left(H,\dot{H},\ddot{H},...\right)\) following from Eqs. (156) and (157) is given by \[G\left(H,\dot{H},\ddot{H}\cdots\right)= \frac{1}{\kappa^{2}}\Bigg{[}\frac{1+w}{2}f(R)-\left\{3\left(1+w \right)H^{2}+\left(1+3w\right)\dot{H}\right\}f^{\prime}(R)\] \[+6\left\{\left(8+12w\right)H^{2}\dot{H}+4\dot{H}^{2}+\left(6+3w \right)H\ddot{H}+\dddot{H}\right\}f^{\prime\prime}(R)+36\left(4H\dot{H}+\ddot{ H}\right)^{2}f^{\prime\prime\prime}(R)\Bigg{]}. \tag{160}\] The above equations (159) and (160) have a very important consequence in cosmology. For example, if a cosmology is given in terms of the Hubble rate \(H(t)\), then the r.h.s. of Eq. (159) can be expressed as a function of time, say \(\widetilde{f}(t)\). Now, if we can find a combination of \(H\), \(\dot{H}\), \(\ddot{H},...\) in \(G\left(H,\dot{H},\ddot{H},...\right)\) that reproduces the function \(\widetilde{f}(t)\), then the cosmology given by \(H(t)\) can be realized by this reconstruction mechanism. Let us illustrate the above by taking an example which will be essential for investigating the singularities in this context. Assume that a cosmology is given by the following Hubble rate: \[H=h_{1}+\frac{h_{2}}{t}\,, \tag{161}\] where \(h_{1}\) and \(h_{2}\) are constants. Thus, for the choice of \(H(t)\) in Eq. (161), one can derive its time derivatives as follows: \[\dot{H}=-\frac{h_{2}}{t^{2}}\,,\quad\ddot{H}=\frac{2h_{2}}{t^{3}}\,,\quad \cdots\,. \tag{162}\] As a consequence, the r.h.s. of Eq. (159) turns out to be \[-\frac{1}{\kappa^{2}}\left(2\dot{H}+3(1+w)H^{2}\right)=-\frac{1}{\kappa^{2}} \left(3(1+w)h_{1}^{2}+\frac{6(1+w)h_{1}h_{2}}{t}+\frac{-2h_{2}+3(1+w)h_{2}^{2} }{t^{2}}\right)\,. \tag{163}\] Now, if \(G\left(H,\dot{H},\ddot{H},..\right)\) is given by the following function \[G\left(H,\dot{H},\ddot{H},...\right)=\frac{1}{\kappa^{2}}\left\{-3(1+w)h_{1}^ {2}+6(1+w)h_{1}H+\left[2-3\left(1+w\right)h_{1}\right]\dot{H}\right\}\,, \tag{164}\] then Eq. (161) is a solution of Eq. (159). However, the choice of \(G\left(H,\dot{H},\ddot{H},\cdots\right)\) in Eq. (164) is not unique; there is a large freedom in it, and the appropriate choice mainly depends on the underlying gravitational theory in which we are interested, see for instance Refs. [151, 574, 575].
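For completeness, the algebra leading to Eq. (163) can be verified symbolically; the following sketch (ours) evaluates the r.h.s. of (159) for the Hubble rate (161):

```python
# Sketch: check Eq. (163), the r.h.s. of Eq. (159) for H = h1 + h2/t.
import sympy as sp

t, h1, h2, kappa = sp.symbols('t h1 h2 kappa', positive=True)
w = sp.symbols('w', real=True)

H = h1 + h2/t
rhs = -(2*sp.diff(H, t) + 3*(1 + w)*H**2)/kappa**2
target = -(3*(1 + w)*h1**2 + 6*(1 + w)*h1*h2/t
           + (-2*h2 + 3*(1 + w)*h2**2)/t**2)/kappa**2
print(sp.simplify(rhs - target))   # -> 0
```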
Now, we shall investigate the appearance of finite-time future singularities in some \(F(R)\) gravity models using the reconstruction technique. This phenomenon is not surprising because modified gravity can be represented as the Einstein gravity with an effective ideal fluid having a phantom or quintessence-like EoS (see the details in Ref. [588]). It is known that in some cases such an ideal fluid may induce finite-time future singularities.

#### (i) Big Rip singularity

We start with the Big Rip singularity, which is characterized by the following evolution of the Hubble rate [589] \[H(t)=\frac{h_{s}}{t_{s}-t}\,, \tag{165}\] where \(h_{s}\) and \(t_{s}\) are positive real numbers. As one can notice, for \(t\to t_{s}\), \(H(t)\) of Eq. (165) diverges. Now, we shall apply the reconstruction technique; that is, one can reconstruct the \(F(R)\) gravity theory realizing the cosmology given in terms of the Hubble rate of Eq. (165), which has a Big Rip singularity. One can rewrite the action of Eq. (154) with the use of proper functions \(P(\phi)\) and \(Q(\phi)\) of a scalar field \(\phi\) as follows [154]: \[S=\int d^{4}x\sqrt{-g}\bigg{(}P(\phi)R+Q(\phi)+L_{\rm matter}\bigg{)}\,. \tag{166}\] Since the scalar field \(\phi\) does not have a kinetic term, it can be considered an auxiliary field. Now, varying the action (166) with respect to the scalar field \(\phi\), one gets \[0=P^{\prime}(\phi)R+Q^{\prime}(\phi)\,, \tag{167}\] where the prime denotes the derivative with respect to \(\phi\); this can in principle be solved with respect to \(\phi\) as \(\phi=\phi(R)\). Now, plugging \(\phi=\phi(R)\) into the action (166), one can express the action in terms of \(F(R)\) given by \[F(R)=P\left(\phi(R)\right)R+Q\left(\phi(R)\right)\,. \tag{168}\] Now, by varying the action (166) with respect to the metric tensor \(g_{\mu\nu}\), one finds \[-\frac{1}{2}g_{\mu\nu}\left\{P(\phi)R+Q(\phi)\right\}-R_{\mu\nu}P(\phi)+ \nabla_{\mu}\nabla_{\nu}P(\phi)-g_{\mu\nu}\nabla^{2}P(\phi)+\frac{1}{2}T_{ \mu\nu}=0\,, \tag{169}\] where \(T_{\mu\nu}\) is the energy-momentum tensor of the matter sector. In the background of a spatially flat FLRW Universe (3), the gravitational field equations in Eq. (169) reduce to \[-6H^{2}P(\phi)-Q(\phi)-6H\frac{dP(\phi(t))}{dt}+\rho =0\,, \tag{170}\] \[\left(4\dot{H}+6H^{2}\right)P(\phi)+Q(\phi)+2\frac{d^{2}P(\phi(t ))}{dt^{2}}+4H\frac{dP(\phi(t))}{dt}+p =0\,. \tag{171}\] By combining Eqs. (170) and (171) we get \[2\frac{d^{2}P(\phi(t))}{dt^{2}}-2H\frac{dP(\phi(t))}{dt}+4\dot{H}P(\phi)+p+ \rho=0\,. \tag{172}\] Since one can redefine the scalar field \(\phi\), we may choose \(\phi=t\). Now, for the Hubble rate in Eq. (165), one can solve for the scale factor as \[a(t)=\widetilde{a}_{0}\exp\left(g(t)\right)\,, \tag{173}\] where \(\widetilde{a}_{0}>0\) is a constant and \(\dot{g}(t)=H(t)\). Concerning the matter sector, we can assume that \(\rho=\sum_{i}\rho_{i}\) and \(p=\sum_{i}p_{i}\), where \(\rho_{i}\) and \(p_{i}\) denote the energy density and pressure of the \(i\)-th fluid, respectively. If the fluid components do not interact with each other, then using the usual conservation equation, one finds \(\rho_{i}=\rho_{i0}a^{-3(1+w_{i})}\), where \(\rho_{i0}\) is the current value of the energy density \(\rho_{i}\) and \(w_{i}=p_{i}/\rho_{i}\) denotes the EoS parameter of the \(i\)-th fluid. With the above considerations, we can now re-express Eq.
(172) as \[2\frac{d^{2}P(\phi)}{d\phi^{2}}-2g^{\prime}(\phi)\frac{dP(\phi)}{d\phi}+4g^{ \prime\prime}(\phi)P(\phi)+\sum_{i}(1+w_{i})\rho_{i0}\widetilde{a}_{0}^{-3(1+w_{i})}\exp \left[-3\left(1+w_{i}\right)g(\phi)\right]=0\,. \tag{174}\] If Eq. (174) is solved for \(P(\phi)\), then from Eq. (170) one can find \(Q(\phi)\) as \[Q(\phi)=-6\left(g^{\prime}(\phi)\right)^{2}P(\phi)-6g^{\prime}(\phi)\frac{dP( \phi)}{d\phi}+\sum_{i}\rho_{i0}\widetilde{a}_{0}^{-3(1+w_{i})}\exp\left[-3\left(1+w_{i} \right)g(\phi)\right]\,. \tag{175}\] Thus, we see that a given expansion history of the Universe, specified by the Hubble rate or the scale factor, can be realized by some specific \(F(R)\) gravity model. Now, neglecting the matter sector in this context, the general solution of Eq. (174) is given either by \[P(\phi)=P_{+}\left(t_{s}-\phi\right)^{\alpha_{+}}+P_{-}\left(t_{s}-\phi\right) ^{\alpha_{-}},\quad\alpha_{\pm}\equiv\frac{-h_{s}+1\pm\sqrt{h_{s}^{2}-10h_{s}+ 1}}{2}\,, \tag{176}\] when \(h_{s}>5+2\sqrt{6}\) or \(h_{s}<5-2\sqrt{6}\), or by \[P(\phi)=(t_{s}-\phi)^{(1-h_{s})/2}\left(\widetilde{A}\cos\left(\frac{\sqrt{-h_{s}^{2}+10h_{s}-1}}{2}\ln \left(t_{s}-\phi\right)\right)+\widetilde{B}\sin\left(\frac{\sqrt{-h_{s}^{2}+10h_{s}-1}}{2}\ln \left(t_{s}-\phi\right)\right)\right)\,, \tag{177}\] when \(5-2\sqrt{6}<h_{s}<5+2\sqrt{6}\). Here \(\widetilde{A}\) and \(\widetilde{B}\) are arbitrary constants. Now, using Eqs. (167), (168), and (175) (without the matter sector), the forms of \(F(R)\) for large \(R\) can be derived as follows: \[F(R)\propto R^{1-\frac{\alpha_{-}}{2}}\,,\quad\text{when}\quad h_{s}>5+2 \sqrt{6},\quad\text{or}\quad h_{s}<5-2\sqrt{6}\,, \tag{178}\] \[F(R)\propto R^{(h_{s}+3)/4}\times(\text{oscillating parts})\,,\quad\text{when} \quad 5-2\sqrt{6}<h_{s}<5+2\sqrt{6}\,. \tag{179}\] _(ii) Other types of singularities_

Here we discuss more general singularities appearing in the context of \(F(R)\) gravity. In order to proceed with the general singularities, we consider the Hubble rate given in (59) or (99), following [589, 590], where we assume that \(h_{s}\) (\(>0\)) and \(\beta\) (\(\neq 0,1\)) are real numbers,3 and \(t<t_{s}\) since we are living in an expanding Universe. Note that for non-integer \(\beta<0\), some derivatives of \(H\), and therefore the curvature, may become singular. For the above Hubble rate in (99), one may realize the evolution of the scale factor as in (59). Notice from Eq. (59) that for non-integer values of \(\beta\), when \(t_{s}<t\), the scale factor, and therefore the metric tensor, may become a complex number, which is unphysical. This could hint towards the ending of our Universe at \(t=t_{s}\) even if \(\beta\) is negative or less than \(-1\). As we are interested in exploring the general singularities, we focus on \(\beta\neq 1\) and examine its various ranges.

Footnote 3: The case \(\beta=0\) corresponds to the de Sitter space and \(\beta=1\) corresponds to the Big Rip singularity as discussed above; hence, we are interested in investigating the cases with \(\beta\neq 0,1\).

When \(\beta>1\), the scalar curvature \(R\) behaves as \[R\sim 12H^{2}\sim 12h_{s}^{2}\left(t_{s}-t\right)^{-2\beta}\,, \tag{180}\] while for \(\beta<1\), the scalar curvature \(R\) behaves as \[R\sim 6\dot{H}\sim 6h_{s}\beta\left(t_{s}-t\right)^{-\beta-1}\,. \tag{181}\] Now, it is possible to trace the asymptotic solution for \(P\) when \(\phi\to t_{s}\) as follows: 1.
For \(\beta>1\), one can find the following asymptotic expression of \(P(\phi)\): \[P(\phi)\sim \,\mathrm{e}^{(h_{s}/2(\beta-1))(t_{s}-\phi)^{-\beta+1}}\left(t_{s }-\phi\right)^{\beta/2}\left(\widetilde{A}\cos\left(\omega\left(t_{s}-\phi \right)^{-\beta+1}\right)+\widetilde{B}\sin\left(\omega\left(t_{s}-\phi\right) ^{-\beta+1}\right)\right)\,,\] (182) \[\omega\equiv \,\frac{h_{s}}{2\left(\beta-1\right)}\,.\] When \(\phi\to t_{s}\), \(P(\phi)\) tends to vanish. Using (167), (168), and (175), at large \(R\), \(F(R)\) can be derived as \[F(R)\propto\mathrm{e}^{(h_{s}/2(\beta-1))\left(\frac{R}{12h_{s}}\right)^{(\beta -1)/2\beta}}R^{-1/4}\times(\text{oscillating part})\,\,.\] (183) 2. For \(0<\beta<1\), one gets the asymptotic expression of \(P(\phi)\) as follows: \[P(\phi)\sim B\mathrm{e}^{-(h_{s}/2(1-\beta))(t_{s}-\phi)^{1-\beta}}\left(t_{s}- \phi\right)^{(\beta+1)/8},\] (184) and \(F(R)\) is given by \[F(R)\sim\mathrm{e}^{-(h_{s}/2(1-\beta))(-6\beta h_{s}R)^{(\beta-1)/(\beta+1)}} R^{7/8}\,.\] (185) 3. For \(\beta<0\), the asymptotic expression of \(P(\phi)\) is given by: \[P(\phi)\sim A\mathrm{e}^{-(h_{s}/2(1-\beta))(t_{s}-\phi)^{1-\beta}}\left(t_{s} -\phi\right)^{-\left(\beta^{2}-6\beta+1\right)/8},\] (186) and consequently, \(F(R)\) is given by \[F(R)\sim(-6h_{s}\beta R)^{\left(\beta^{2}+2\beta+9\right)/8(\beta+1)}\,\mathrm{ e}^{-(h_{s}/2(1-\beta))(-6h_{s}\beta R)^{(\beta-1)/(\beta+1)}}\,,\] (187) where note that \(-6h_{s}\beta R>0\) for a real solution. Alternatively, one can trace the behavior of \(H\) from the behavior of \(R\). Let us consider the case when \(R\) behaves as \[R\sim 6\dot{H}\sim R_{s}\left(t_{s}-t\right)^{-\gamma}\,, \tag{188}\] which corresponds to \(\gamma=\beta+1<2\). In this situation, if \(1<\gamma<2\), which corresponds to \(0<\beta\) (\(=\gamma-1)<1\), \(H\) is given by \[H\sim\frac{R_{s}}{6\left(\gamma-1\right)}\left(t_{s}-t\right)^{-\gamma+1}\,. \tag{189}\] And if \(\gamma<1\), which corresponds to \(\beta=\gamma-1<0\), then \(H\) follows \[H\sim H_{s}+\frac{R_{s}}{6\left(\gamma-1\right)}\left(t_{s}-t\right)^{-\gamma+ 1}\,, \tag{190}\] where \(H_{s}\) is an arbitrary constant which does not affect the behavior of \(R\). Note that \(H_{s}\) has been chosen to vanish in Eq. (99). On the contrary, if \(\gamma>2\), which corresponds to \(\beta=\gamma/2>1\), one has \(R\sim 12H^{2}\) and \(H\) behaves as \[H\sim\sqrt{\frac{R_{s}}{12}}\left(t_{s}-t\right)^{-\gamma/2}. \tag{191}\] Now, for the above expressions of the Hubble rate, one can find the evolution of the scale factor. If \(\gamma>2\), we find that the scale factor evolves as \[a(t)\propto\exp\left(\left(\frac{2}{\gamma}-1\right)\sqrt{\frac{R_{s}}{12}} \left(t_{s}-t\right)^{-\gamma/2+1}\right)\,. \tag{192}\] When \(1<\gamma<2\), \(a(t)\) evolves as \[a(t)\propto\exp\left(\frac{R_{s}}{6\gamma\left(\gamma-1\right)}\left(t_{s}-t \right)^{-\gamma}\right)\,. \tag{193}\] And if \(\gamma<1\), we get \[a(t)\propto\exp\left(H_{s}t+\frac{R_{s}}{6\gamma\left(\gamma-1\right)}\left(t _{s}-t\right)^{-\gamma}\right)\,. \tag{194}\] However, we see that a sudden future singularity appears at \(t=t_{s}\) [350, 352, 355, 591] when \(\gamma<2\). Now, dealing with the case \(\gamma<1\), as the second term in Eq. (190) is smaller than the first term, one may solve the differential Eq. (174) asymptotically in the following way \[P\sim P_{s}\left(1+\frac{R_{s}}{3\beta(1-\beta)}\left(t_{s}-\phi\right)^{1- \beta}\right)\,, \tag{195}\] where \(P_{s}\) is a constant, which finally leads to \[F(R)\sim F_{0}R+F_{1}R^{2\beta/(\beta+1)}\,.
\tag{196}\] Now, since for \(F(R)\) gravity one can introduce the effective energy density and effective pressure, see Eqs. (155)-(157), the nature of the singularities can be studied for different values of \(\beta\) of the Hubble rate (99) as follows. When \(\beta>1\), as \(t\to t_{s}\), we see that \[a\sim\exp\Bigl{(}h_{s}\left(t_{s}-t\right)^{1-\beta}\big{/}\left(\beta-1\right) \Bigr{)}\rightarrow\infty,\,\,\,\text{and as a consequence, }\,\rho_{\text{eff}}\rightarrow\infty,\,|p_{\text{eff}}| \rightarrow\infty, \tag{197}\] which means that a Big Rip (Type I) singularity occurs. If \(0<\beta<1\), \(a\) goes to a constant but \(\rho_{\text{eff}}\rightarrow\infty\), \(|p_{\text{eff}}|\rightarrow\infty\), which means that a Type III singularity occurs. If \(-1<\beta<0\), then \(a\) goes to a constant and \(\rho_{\text{eff}}\) vanishes, but \(|p_{\text{eff}}|\rightarrow\infty\), which means that a Type II singularity occurs. When \(\beta<0\), instead of Eq. (99), as in Eq. (190), one may assume, as in (100), \[H\sim H_{s}+h_{s}\left(t_{s}-t\right)^{-\beta}\,, \tag{198}\] where \(H_{s}\) is a constant. Now, if \(-1<\beta<0\), for \(t\to t_{s}\), \(\rho_{\rm eff}\) approaches the finite value \(3H_{s}^{2}/\kappa^{2}\) but \(|p_{\rm eff}|\) diverges, so we have a sudden singularity. If \(\beta<-1\) but \(\beta\) is not an integer, then \(a\) remains finite, and \(\rho_{\rm eff}\) and \(p_{\rm eff}\) vanish if \(H_{s}=0\) or remain finite if \(H_{s}\neq 0\); however, higher derivatives of \(H\) diverge. This means that in this case a Type IV singularity occurs. Thus, we see that \(F(R)\) gravity may allow various types of finite-time future singularities. This is not unnatural because, in the context of modified gravity, one can find an effective phantom/quintessence phase [294], and it is well known that a phantom/quintessence-dominated Universe may end up with finite-time future singularities of various types. Hence, with the reconstruction of the \(F(R)\) gravity from a given cosmology, one can find the possible functional forms of \(F(R)\) that may lead to finite-time future singularities. For example, from the present discussion, one can see that \(F(R)=R+\widetilde{\alpha}R^{n}\) with \(n>2\) leads to a Type I singularity and \(F(R)=R-\widetilde{\beta}R^{-n}\) with \(n>0\) leads to a Type III singularity, where \(\widetilde{\alpha}\) and \(\widetilde{\beta}\) are any real numbers.

#### Occurrence of Singularities in Different Frames: \(F(R)\) Gravity

The choice of the physical frame in the context of \(F(R)\) gravity is an important point because the cosmology of an \(F(R)\) gravity in one frame could be different in the other frame. The accelerating phase of the Universe in one frame may correspond to its decelerating phase [592]. Also, the type of a singularity may change from one frame to the other [593, 594]. In this section, we shall discuss the second possibility in detail, that is, how the choice of the frame affects the type of finite-time future singularities appearing in \(F(R)\) gravity theory. We start with the vacuum \(F(R)\) gravity in the Jordan frame, whose action is given by (154) when we neglect the contribution from matter by putting \(L_{\rm matter}=0\) [594].
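The classification just described can also be illustrated numerically; the sketch below (our illustration, with the scale factor taken in the closed form used in (197)) evaluates the \(t\to t_{s}\) limits of \(a\), \(\rho_{\rm eff}\), and \(p_{\rm eff}\) from (155) for sample values of \(\beta\):

```python
# Sketch: t -> t_s behavior of a, rho_eff, p_eff for H = h_s (t_s - t)^(-beta),
# cf. Eqs. (99), (155), and (197).
import sympy as sp

t, ts, hs, kappa = sp.symbols('t t_s h_s kappa', positive=True)

for beta in [sp.Rational(3, 2), sp.Rational(1, 2), -sp.Rational(1, 2)]:
    H = hs*(ts - t)**(-beta)
    a = sp.exp(hs*(ts - t)**(1 - beta)/(beta - 1))    # scale factor, cf. (197)
    rho = 3*H**2/kappa**2                             # effective density, Eq. (155)
    p = -(2*sp.diff(H, t) + 3*H**2)/kappa**2          # effective pressure, Eq. (155)
    print(beta, [sp.limit(q, t, ts, dir='-') for q in (a, rho, p)])
# beta = 3/2: a and rho diverge (Type I);  beta = 1/2: a finite, rho diverges
# (Type III);  beta = -1/2: rho -> 0 while p diverges (Type II).
```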
Varying the action (154) with \(L_{\rm matter}=0\) with respect to the metric \(g_{\mu\nu}\) representing a spatially flat FLRW geometry (3), one can obtain the following gravitational field equations, which are equivalent to (155) with (156) and (157) when we neglect the matter, \[0= -\frac{F(R)}{2}+3\left(H^{2}+\dot{H}\right)F^{\prime}(R)-18\left( 4H^{2}\dot{H}+H\ddot{H}\right)F^{\prime\prime}(R)\,, \tag{199}\] \[0= \frac{F(R)}{2}-\left(\dot{H}+3H^{2}\right)F^{\prime}(R)+6\left(8 H^{2}\dot{H}+4\dot{H}^{2}+6H\ddot{H}+\dddot{H}\right)F^{\prime\prime}(R)+36 \left(4H\dot{H}+\ddot{H}\right)^{2}F^{\prime\prime\prime}(R)\,, \tag{200}\] where an overhead dot represents the differentiation with respect to the cosmic time \(t\) in (3) and the prime corresponds to the differentiation with respect to the Ricci scalar \(R\). Now, introducing the auxiliary fields \(A\) and \(B\), the action of Eq. (154) with \(L_{\rm matter}=0\) written in the Jordan frame can be rewritten in the equivalent form [595] (see also [596]) \[S=\frac{1}{2\kappa^{2}}\int d^{4}x\,\sqrt{-g}\Bigg{[}B(R-A)+F(A)\Bigg{]}\,. \tag{201}\] The action (201) can be varied with respect to the auxiliary scalar \(B\), leading to the condition \(A=R\), and hence one recovers the action (154) with \(L_{\rm matter}=0\). Moreover, varying the action (201) with respect to \(A\), one can eliminate the auxiliary field \(B\), obtaining \(B=F^{\prime}(A)\). Hence, the action (201) takes the equivalent form \[S=\frac{1}{2\kappa^{2}}\int d^{4}x\,\sqrt{-g}\Bigg{[}F^{\prime}(A)(R-A)+F(A) \Bigg{]}\,. \tag{202}\] Taking a conformal transformation of the metric tensor, one may obtain a minimally coupled scalar-tensor theory, called the Einstein frame scalar-tensor theory. We use a particular conformal factor taking the following expression [594] \[\hat{g}_{\mu\nu}=\frac{1}{F^{\prime}(A)}g_{\mu\nu}\,, \tag{203}\] which modifies the Ricci scalar as \(R\rightarrow\hat{R}\). With the conformal transformation (203), and then by defining a new scalar field \(\sigma\) in terms of the auxiliary scalar field \(A\), \[\sigma=-\ln F^{\prime}(A)\,, \tag{204}\] the action (202) takes the form \[S=\frac{1}{2\kappa^{2}}\int d^{4}x\sqrt{-\hat{g}}\left\{\hat{R}-\frac{3}{2} \hat{g}^{\mu\nu}\partial_{\mu}\sigma\partial_{\nu}\sigma-V(\sigma)\right\}\,, \tag{205}\] where the potential \(V(\sigma)\) takes the form \[V(\sigma)=\frac{A}{F^{\prime}(A)}-\frac{F(A)}{F^{\prime}(A)^{2}}\,. \tag{206}\] Notice that with the use of Eq. (204), the potential of Eq. (206) can be expressed in terms of the scalar field \(\sigma\). Thus, corresponding to the Jordan frame \(F(R)\) gravity characterized by the action of Eq. (154) with \(L_{\rm matter}=0\), the Einstein frame scalar-tensor theory is found to be given by the action of Eq. (205). Finally, we note that with the use of the transformation \[\varphi=\sqrt{\frac{3}{2\kappa^{2}}}\sigma\,, \tag{207}\] the action (205) can be expressed in the canonical form [594] \[S=\int d^{4}x\sqrt{-g}\left\{\frac{R}{2\kappa^{2}}-\frac{1}{2}\partial_{\mu} \varphi\partial^{\mu}\varphi-V(\varphi)\right\}\,. \tag{208}\] Alternatively, starting with the canonical scalar field action given by [594] \[S=\int d^{4}x\sqrt{-\hat{g}}\left\{\frac{\hat{R}}{2\kappa^{2}}-\frac{1}{2} \partial_{\mu}\varphi\partial^{\mu}\varphi-V(\varphi)\right\}\,, \tag{209}\] one can find its equivalent Jordan frame \(F(R)\) gravity action.
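To make the map (204)-(206) concrete, consider the toy choice \(F(R)=R+\alpha R^{2}\) (our example, not taken from the text); a short sympy sketch recovers a familiar plateau-type Einstein frame potential:

```python
# Sketch: Einstein frame potential (206) for the toy model F(R) = R + alpha R^2.
import sympy as sp

A, alpha = sp.symbols('A alpha', positive=True)
sigma = sp.symbols('sigma', real=True)

F = A + alpha*A**2
V = A/F.diff(A) - F/F.diff(A)**2                        # Eq. (206)
A_sigma = sp.solve(sp.exp(-sigma) - F.diff(A), A)[0]    # invert sigma = -ln F'(A), Eq. (204)
V_sigma = sp.simplify(V.subs(A, A_sigma))
print(sp.simplify(V_sigma - (1 - sp.exp(sigma))**2/(4*alpha)))   # -> 0
# V(sigma) = (1 - e^sigma)^2 / (4 alpha): a plateau as sigma -> -infinity (large R).
```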
Assuming a spatially flat FLRW line element (3), the FLRW equations corresponding to the action (209) can be written as \[3\widetilde{H}^{2}=\frac{1}{2}\dot{\varphi}^{2}+V\,,\quad 3\widetilde{H}^{2}+2 \dot{\widetilde{H}}=-\frac{1}{2}\dot{\varphi}^{2}+V\,, \tag{210}\] where we have used a '_tilde_' on the physical quantities in this frame (i.e., the Einstein frame) in order to distinguish them from the corresponding quantities in the Jordan frame. Now one can map the above action (209) to a modified \(F(R)\) gravity theory. In order to do this, we need to perform the conformal transformation given by \(g_{\mu\nu}\to{\rm e}^{\pm\sqrt{\frac{2}{3}}\kappa\varphi}\hat{g}_{\mu\nu}\), and as a result, the FLRW metric becomes \[ds_{F(R)}^{2}={\rm e}^{\pm\sqrt{\frac{2}{3}}\kappa\varphi}\left(-d\widetilde{ t}^{2}+\widetilde{a}^{2}\left(\,\,\widetilde{t}\,\,\right)\sum_{i=1,2,3} \left(dx^{i}\right)^{2}\right), \tag{211}\] where we have introduced a new time coordinate \(\widetilde{t}\), given by \(dt={\rm e}^{\pm\frac{1}{2}\sqrt{\frac{2}{3}}\kappa\varphi}d\widetilde{t}\), the solution of which, denoted by \(t=f\left(\,\,\widetilde{t}\,\,\right)\), is an increasing function. Let us note that if the function \(f\left(\,\,\widetilde{t}\,\,\right)\) develops singularities, then complications may arise. As we are considering two different frames, i.e., the Jordan frame and the Einstein frame, the relation between \(t\) and \(\widetilde{t}\) maps the range of the values of \(\widetilde{t}\) to a possibly different region in the \(t\) coordinate. Let us consider an interval \([\widetilde{t}_{1},\widetilde{t}_{2}]\) in the Einstein frame, with the scale factor at \(\widetilde{t}=\widetilde{t}_{1}\) equal to \(\widetilde{a}\left(\,\,\widetilde{t}_{1}\,\,\right)=0\), and at \(\widetilde{t}=\widetilde{t}_{2}\) equal to \(\widetilde{a}\left(\,\,\widetilde{t}_{2}\,\,\right)=0\) or \(\widetilde{a}\left(\,\,\widetilde{t}_{2}\,\,\right)=\infty\), so that \(\widetilde{t}=\widetilde{t}_{2}\) corresponds to a Big Crunch or to a Big Rip singularity, respectively. It may also happen that \(\widetilde{t}_{1}=-\infty\) and/or \(\widetilde{t}_{2}=\infty\), so that these singularities effectively do not occur; this depends crucially on the particular form of the scale factor at hand. The new range of the \(t\) coordinate will be \([f\left(\widetilde{t}_{1}\,\,\right)\,,f\left(\widetilde{t}_{2}\right)]\), assuming \(\phi\left(\,\,\widetilde{t}\,\,\right)\) is regular everywhere in the interval \([\widetilde{t}_{1},\widetilde{t}_{2}]\). In the following we shall discuss the correspondence of the finite-time singularities in the Jordan and Einstein frames using some simple examples. We note that while discussing the singularities below, we follow the same unit system as in Ref. [594], in which the gravitational coupling constant is unity, i.e., \(\kappa=1\).

#### (i) Power law cosmology

We consider the power law cosmology which is described by the following scale factor, \[\widetilde{a}\left(\widetilde{t}\right)=\widetilde{a}_{c}\left(\frac{ \widetilde{t}}{\widetilde{t}_{c}}\right)^{p}\,, \tag{212}\] where \(\widetilde{t}_{c}\) is a fiducial value of the cosmic time, \(p\) is a positive real number, and from the above relation we identify that \(\widetilde{a}(\,\widetilde{t}_{c}\,)=\widetilde{a}_{c}\). Note that such a power law cosmology described in Eq.
(212) is a solution of the Friedmann equation in the Einstein frame scalar-tensor theory when the potential has the exponential form. In this case the scalar field evolves as \[\varphi=\pm\sqrt{2p}\,\ln\left(\frac{\widetilde{t}}{\widetilde{t}_{c}}\right)\,. \tag{213}\] In this case \(\widetilde{t}_{1}=0\) and \(\widetilde{t}_{2}=\infty\), and in this model the Hubble rate \(\widetilde{H}\) diverges at \(\widetilde{t}=0\), so we have a Big Bang singularity in the Einstein frame. Now, in order to understand the behavior of the singularities in the Jordan frame, we can follow the procedure as described in Section IV.5.1. The new time variable \(t\) in the Jordan frame can be found from the differential equation \[\frac{dt}{d\widetilde{t}}=\left(\frac{\widetilde{t}}{\widetilde{t}_{c}}\right) ^{\pm\sqrt{\frac{p}{3}}}\,, \tag{214}\] having the following solution, \[t=\frac{3}{3\pm\sqrt{3p}}\widetilde{t}\left(\frac{\widetilde{t}}{\widetilde{t} _{c}}\right)^{\pm\sqrt{\frac{p}{3}}}\,. \tag{215}\] The corresponding scale factor as a function of the cosmic time \(t\) takes the form \[a(t)\sim t^{w}\quad\text{where}\quad w=\frac{\sqrt{3p}\pm 3p}{\sqrt{3p}\pm 3}\,. \tag{216}\] Now we have the following observations. When the minus sign is chosen in the conformal factor, the cosmological evolution (216) has a Big Bang singularity at \(t=0\) if the power law parameter \(p\) lies in the interval \(1/3\leq p<3/4\). If \(p=1/3\), the Jordan frame metric becomes static and we do not have any finite-time singularity. When \(0<p<1/3\), the Big Bang singularity at \(\widetilde{t}=0\) in the Einstein frame becomes the beginning of a contracting Universe in the Jordan frame. In the case \(3/4<p<3\), from (215) we can see that the direction of time is reversed, meaning that this case must be disregarded. Finally, for \(p>3\) the time \(\widetilde{t}=0\) corresponds to \(t=-\infty\), so the Big Bang singularity disappears in the Jordan frame.

_(ii) Cosmology generated by \(R^{-n}\) gravity in the Jordan Frame_

We now consider the cosmology driven by \(R^{-n}\) gravity in the Jordan frame (see [593] for details). So, for \(F(R)\sim R^{-n}\), from Eqs. (199) and (200) one can see that the corresponding scale factor takes the power law form \[a\sim(t_{s}-t)^{\frac{(n+1)(2n+1)}{n+2}}\,\,. \tag{217}\] Therefore, if either \(n<-2\) or \(-1<n<-1/2\), a Big Rip Type I singularity appears at \(t=t_{s}\) in the Jordan frame, while in the remaining cases a Type III Big Crunch singularity is present at this point. In this case, the corresponding Einstein frame canonical scalar field takes the form \[\sigma\sim(n+1)\ln R\sim-2(n+1)\ln(t_{s}-t)\,, \tag{218}\] and the Ricci scalar takes the following expression \[R\sim\frac{6(n+1)(2n+1)(4n+5)n}{(n+2)^{2}(t_{s}-t)^{2}}\,. \tag{219}\] Now the time coordinate \(\widetilde{t}\) in the corresponding Einstein frame scalar-tensor theory is given by \[d\widetilde{t}=\pm\mathrm{e}^{\frac{1}{2}\sigma}dt\,\,\,\,\sim\pm(t_{s}-t)^{-( n+1)}dt\,, \tag{220}\] which gives \(\widetilde{t}\sim\pm(t_{s}-t)^{-n}\). Therefore, for \(n>0\), the limit \(t\to t_{s}\) in the Jordan frame corresponds to \(\widetilde{t}\rightarrow\pm\infty\) in the Einstein frame. As a consequence, we observe that the singularity changes its structure abruptly; that is, it does not appear at finite time in the Einstein frame scalar-tensor theory.
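Returning to the power law example, the exponent \(w\) in Eq. (216) follows from \(a\sim\widetilde{t}^{\,p\pm\sqrt{p/3}}\) and \(t\sim\widetilde{t}^{\,1\pm\sqrt{p/3}}\), cf. Eqs. (211)-(215); a quick symbolic check (ours):

```python
# Sketch: check Eq. (216), a ~ t^w with w = (sqrt(3p) +/- 3p)/(sqrt(3p) +/- 3).
import sympy as sp

p = sp.symbols('p', positive=True)
for s in (+1, -1):
    expo_a = p + s*sp.sqrt(p/3)          # a(ttilde) ~ ttilde^expo_a
    expo_t = 1 + s*sp.sqrt(p/3)          # t(ttilde) ~ ttilde^expo_t, cf. (215)
    w = expo_a/expo_t
    target = (sp.sqrt(3*p) + s*3*p)/(sp.sqrt(3*p) + s*3)
    print(s, sp.simplify(w - target))    # -> 0 for both signs
```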
Nevertheless, a new additional singularity may be present: since the limit of infinite Jordan frame time \(t\) corresponds to the new time coordinate \(\widetilde{t}\to 0\), any singularities at infinite time can be brought back to a finite time. On the other hand, when \(n<0\), the limit \(t\to t_{s}\) in the Jordan frame corresponds to \(\widetilde{t}\to 0\) in the Einstein frame. Finally, we note that the metric in the Einstein frame scalar-tensor theory behaves as \[ds^{2}=\mathrm{e}^{\sigma}\left(-dt^{2}+a^{2}(t)\sum_{i=1,2,3}(dx^{i})^{2} \right)\sim-d\widetilde{t}^{2}+\widetilde{a}^{2}\left(\ \widetilde{t}\ \right)\sum_{i=1,2,3}(dx^{i})^{2}\,,\quad \widetilde{a}^{2}\left(\ \widetilde{t}\ \right)\sim a_{s}^{2}\ \widetilde{t}^{\ \frac{2n(n^{2}-1)}{n+2}}\,, \tag{221}\] where the constant \(a_{s}\) is a real number. In this case, the power of the scale factor is negative only when \(-2<n<-1\) or \(0<n<1\), and thus a Big Rip singularity is present. Thus, for the Big Rip singularity in the Jordan frame, the scale factor in the Einstein frame behaves as \(\widetilde{a}^{2}\left(\ \widetilde{t}\ \right)\to 0\) when \(\widetilde{t}\to 0\); thus, it becomes a Type III Big Crunch singularity in the Einstein frame.

_(iii) A singular cosmological evolution_

In this section, we examine how the simplest singular cosmology behaves in different frames. We consider the following Hubble rate, which describes the simplest singular cosmology as in (99), \[H\left(\ \widetilde{t}\ \right)=h_{s}\left(\ \widetilde{t}-\widetilde{t}_{s}\ \right)^{-\beta}\,, \tag{222}\] where \(h_{s}\) is a positive real number and \(\beta\) is a real number. From the value of \(\beta\) one can determine the singularity type. In particular, we realize the following types of singularities based on the values of \(\beta\):

* For \(\beta>1\), we realize a Type I singularity.
* For \(0<\beta<1\), a Type III singularity is found.
* When \(-1<\beta<0\), a Type II singularity occurs.
* When \(\beta<-1\), a Type IV singularity is realized.

Let us assume that the Hubble rate of Eq. (222) is given in the Einstein frame, and we aim to investigate the singular behavior captured in the Hubble rate of Eq. (222) in the context of the Jordan frame \(F(R)\) theory. We use the conformal factor given by \(\mathrm{e}^{\sqrt{\frac{2}{3}}\varphi}\), which results in the transformation of the metric as \[ds_{F(R)}^{2}=\mathrm{e}^{\sqrt{\frac{2}{3}}\varphi}\left(-d\widetilde{t}^{2}+ \widetilde{a}^{2}(\ \widetilde{t}\ )\sum_{i=1,2,3}\left(dx^{i}\right)^{2}\right)\,, \tag{223}\] and the scale factor now becomes \[a(t)=\mathrm{e}^{\frac{1}{2}\sqrt{\frac{2}{3}}\varphi}\ \widetilde{a}\left(\ \widetilde{t}\ \right)\,, \tag{224}\] where the Jordan frame time parameter \(t\) is defined as \[dt=\mathrm{e}^{\frac{1}{2}\sqrt{\frac{2}{3}}\varphi}d\widetilde{t}, \tag{225}\] the solution of which is an incomplete gamma function. However, for the Hubble rate as given in Eq. (222), using the corresponding equations of motion, the Einstein frame scalar field can be found to be \[\varphi=\frac{2\sqrt{2h_{s}\beta}\left(\ \widetilde{t}-\widetilde{t}_{s}\ \right)^{\frac{1-\beta}{2}}}{1-\beta}\,, \tag{226}\] where \(h_{s}\beta>0\) is an essential criterion in order for the scalar field to be canonical. We further notice that the transformation blows up at \(\widetilde{t}_{s}\) if \(\beta>1\); hence, extra caution is required in this case.
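The expression (226) follows from the Einstein frame relation \(\dot{\varphi}^{2}=-2\,d\widetilde{H}/d\widetilde{t}\) (with \(\kappa=1\)), which also makes the criterion \(h_{s}\beta>0\) transparent; a minimal check (ours):

```python
# Sketch: verify Eq. (226) via phi_dot^2 = -2 dH/dttilde for the Hubble rate (222).
import sympy as sp

x, hs, beta = sp.symbols('x h_s beta', positive=True)    # x = ttilde - ttilde_s > 0
H = hs*x**(-beta)                                        # Eq. (222)
phi = 2*sp.sqrt(2*hs*beta)*x**((1 - beta)/2)/(1 - beta)  # Eq. (226)
print(sp.simplify(sp.diff(phi, x)**2 + 2*sp.diff(H, x)))  # -> 0
# phi_dot^2 = 2 h_s beta x^(-beta-1) >= 0 requires h_s beta > 0 (canonical scalar).
```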
In effect, the new Hubble rate in terms of our initial time coordinate \(\widetilde{t}\), i.e., \(H(\widetilde{t})\equiv\frac{1}{a}\frac{da}{dt}\), is equal to \[H(\widetilde{t})=\frac{\sqrt{h_{s}\beta}\ (\ \widetilde{t}-\widetilde{t}_{s}\ )^{- \frac{\beta+1}{2}}}{\sqrt{3}}+h_{s}(\ \widetilde{t}-\widetilde{t}_{s}\ )^{-\beta}\,, \tag{227}\] and the question is what is the effect on the time coordinate \(t\). We suppose \(t=f(\ \widetilde{t}\ )\). Then we have \[f^{\prime}(\ \widetilde{t}\ )=\mathrm{e}^{\frac{1}{2}\sqrt{\frac{ 2}{3}}\varphi}\,, \tag{228}\] and consequently the relation \[\frac{dH}{dt}=\frac{d\widetilde{t}}{dt}\frac{d\widetilde{H}}{d \widetilde{t}}=\mathrm{e}^{-\frac{1}{2}\sqrt{\frac{2}{3}}\varphi}\frac{d \widetilde{H}}{d\widetilde{t}} \tag{229}\] holds; that is, the expression \(dH/dt\) diverges if and only if \(d\widetilde{H}/d\widetilde{t}\) diverges, for \(\beta<1\), since the conformal factor is finite at \(\widetilde{t}_{s}\). Now, the Big Rip (Type I) singularity occurs when the Hubble rate \(H(t)\) diverges at a finite time; thus, we need to examine whether \(f(\ \widetilde{t}_{s}\ )\) is finite or not. One can easily derive \(t_{s}\), given by [594] \[t_{s}=f(\ \widetilde{t}_{s}\ )=c_{1}-c_{2}\Gamma\left(\frac{2}{1-\beta} \right)\,, \tag{230}\] which is finite, provided \(2/(1-\beta)\) is not a negative integer. This readily implies that the singularity appears in the Hubble rate \(H\) at a finite time, as long as \(\beta\neq-(2/n)+1\) where \(n\geq 2\) is an integer. Accordingly, a Type II singularity is found if \(dH/dt\) diverges, but \(H\) does not diverge. Combining these conditions, they imply that \(-3<\beta<-1\). On the other hand, for \(\beta<-3\), a Type IV singularity occurs. Now we investigate the evolution of the scale factor under the conformal transformation. The evolution of the scale factor in the Einstein frame reads, as in (59), \[\widetilde{a}\left(\ \widetilde{t}\ \right)=\widetilde{a}_{s}\exp\left( \frac{h_{s}\left(\ \widetilde{t}-\widetilde{t}_{s}\ \right)^{1-\beta}}{1-\beta}\right)\,, \tag{231}\] which in the Jordan frame reads \[a\left(\ \widetilde{t}\ \right)=a_{s}\exp\left(\frac{3h_{s}\left(\ \widetilde{t}-\widetilde{t}_{s}\ \right)^{1-\beta}\pm 2\sqrt{3\beta h_{s}}\big{(}\ \widetilde{t}-\widetilde{t}_{s}\ \big{)}^{\frac{1-\beta}{2}}}{3(1-\beta)}\right)\,. \tag{232}\] Now, depending on the value of \(\beta\), from the scale factor (232) one can witness the following types of singularities:

1. For \(\beta>1\), a Type I or no singularity occurs.
2. For \(-1<\beta<1\), a Type III singularity occurs.
3. For \(-3<\beta<-1\), a Type II singularity occurs.
4. For \(\beta<-3\), a Type IV singularity occurs.

In Table 1, we show the correspondence of the finite-time singularities between the Jordan frame and the Einstein frame. Table 1 clearly shows that the singularity in one frame may not be the same in the other frame. For example, as displayed in Table 1, the Type I (Big Rip) singularity in the Einstein frame may correspond to a non-singular evolution in the Jordan frame. Further, the Type II singularity in the Einstein frame could be modified to a more severe Type III singularity in the Jordan frame, and the Type IV singularity in the Einstein frame may correspond to a Type II singularity in the Jordan frame. We close this section with a special case of the singular evolution (222): the singular bounce cosmology, a deformation of the symmetric bounce [597, 598, 599].
One can rewrite the scale factor of the cosmological evolution (222) as follows, \[a(t)=a_{s}\ \exp\left(h_{s}(t-t_{s})^{2(1+\epsilon)}\right)\,, \tag{233}\] for which the Hubble rate becomes \[H(t)=2(1+\epsilon)h_{s}(t-t_{s})^{2\epsilon+1}, \tag{234}\] where \(\epsilon>0\) has been chosen in such a way that all the quantities remain real. Now, in order to realize a bouncing scenario, for \(t<t_{s}\) the Hubble rate must become negative (i.e., \(H<0\)), and additionally, in order for the bounce (233) to be a deformation of the symmetric bounce described by \(a(t)\sim\exp\bigl{(}ct^{2}\bigr{)}\), with \(c\) a constant, the parameter \(\epsilon\) must lie in the interval \(0<\epsilon<1\) and have the following form \[\epsilon=\frac{2n}{2m+1}\,, \tag{235}\] where \(m\) and \(n\) are integers chosen in such a way that \(\epsilon<1\) is satisfied. Now, for this choice of \(\epsilon\), the cosmology described by the scale factor (233) and the Hubble rate (234) clearly depicts a Type IV singular cosmology, in which case the Hubble rate and its first derivative with respect to the cosmic time, i.e., \(\dot{H}\), remain finite, but its second derivative with respect to the cosmic time, i.e., \(\ddot{H}\), diverges. As demonstrated in Refs. [600, 601] with the use of reconstruction techniques, the pure \(F(R)\) gravity realizing the cosmological evolution (234) can approximately be given, near the bouncing point \(t\simeq t_{s}\), by \[F(R)=R+\frac{R^{2}}{4C}+\Lambda\,, \tag{236}\] where \(C\) is a positive real number. Let us define a new parameter \(x=t-t_{s}\) for simplicity. Thus, the limit near the bouncing point, i.e., \(t\simeq t_{s}\), corresponds to the limit \(x\to 0\). Now, in order to transform the theory to the Einstein frame, we consider the following conformal transformation \[g_{\mu\nu}=\mathrm{e}^{-\sigma}\hat{g}_{\mu\nu}\,, \tag{237}\] where the scalar field \(\sigma\) is equal to \(\sigma=-\ln F^{\prime}(A)\). In terms of the new parameter \(x\), the Ricci scalar reads \[R=12h_{s}(\epsilon+1)x^{2\epsilon}\Biggl{[}4h_{s}(\epsilon+1)x^{2\epsilon+2}+ 2\epsilon+1\Biggr{]}, \tag{238}\] and if we are close to the singularity, where \(x\) is small, the Ricci scalar becomes \[R\approx 12h_{s}(\epsilon+1)(2\epsilon+1)x^{2\epsilon}\,. \tag{239}\] Consequently, by combining Eqs. (236) and (239), one obtains \[F^{\prime}(R)\approx 1+\frac{6h_{s}(\epsilon+1)(2\epsilon+1)x^{2\epsilon}}{C }\equiv 1+Dx^{2\epsilon}\,, \tag{240}\] where \(D\) is a positive real number. This means that the new time coordinate of the Einstein frame FLRW metric will be given in terms of \(x\) as follows \[d\widetilde{t}=(1+Dx^{2\epsilon})^{\frac{1}{2}}dx\,, \tag{241}\] the solution of which is a hypergeometric function. The new scale factor in terms of \(x\) reads \[\widetilde{a}(\ \widetilde{t}\ )=(1+Dx^{2\epsilon})^{\frac{1}{2}}\,a(x)\,, \tag{242}\] and subsequently, the derivative of the scale factor is given by \[\frac{d\widetilde{a}}{d\widetilde{t}}=\frac{dx}{d\widetilde{t}}\frac{d\widetilde{a}}{dx}=\frac{dx}{d \widetilde{t}}\Biggl{[}(1+Dx^{2\epsilon})^{\frac{1}{2}} \frac{da}{dx}+\epsilon Dx^{2\epsilon-1}(1+Dx^{2\epsilon})^{-\frac{ 1}{2}}a(x)\Biggr{]}\,. \tag{243}\] As \(dt=dx\) at \(x\simeq 0\), looking at the second derivative of the scale factor, one may conclude that it diverges at \(x=0\) provided \(\epsilon<1/2\).
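The intermediate steps (234) and (238) can be verified directly; the following sketch (ours) differentiates the bounce scale factor (233) symbolically:

```python
# Sketch: check Eqs. (234) and (238) for the bounce (233), with x = t - t_s.
import sympy as sp

x, hs, eps = sp.symbols('x h_s epsilon', positive=True)
a = sp.exp(hs*x**(2*(1 + eps)))                     # scale factor (233)
H = sp.simplify(sp.diff(a, x)/a)                    # Hubble rate
R = 12*H**2 + 6*sp.diff(H, x)                       # flat FLRW Ricci scalar
target = 12*hs*(eps + 1)*x**(2*eps)*(4*hs*(eps + 1)*x**(2*eps + 2) + 2*eps + 1)
print(sp.simplify(H - 2*(1 + eps)*hs*x**(2*eps + 1)))   # -> 0, Eq. (234)
print(sp.simplify(R - target))                          # -> 0, Eq. (238)
```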
Hence, the Type IV singularity in the Jordan frame becomes a Type II singularity in the Einstein frame, as also reflected in Table 1.

\begin{table} \begin{tabular}{|c|c|} \hline **Singularity in the Einstein Frame** & **Singularity in the Jordan Frame** \\ \hline \hline Type I & Type I or No Singularity \\ Type II & Type III \\ Type III & Type III \\ Type IV & Type IV or Type II \\ \hline \end{tabular} \end{table} Table 1: The table shows the correspondence of finite-time singularities in the Einstein and Jordan frames for the cosmological evolution given in terms of the Hubble rate (222) in the Einstein frame.

#### Occurrence of Singularities in Different Frames: Unimodular \(F(R)\) Gravity

In this section, we discuss the correspondence between the frames in the context of unimodular \(F(R)\) gravity [597, 598, 602, 599]. The Jordan frame unimodular \(F(R)\) gravity is described by the action [597] \[S=\int d^{4}x\bigg{[}\sqrt{-g}\,\left(F(R)-\lambda\right)+\lambda\bigg{]}+S_{ \text{matter}}\,, \tag{244}\] where we assume that \(F(R)\) is a smooth function of the Ricci scalar \(R\), \(\lambda\) denotes the Lagrange multiplier function, and \(S_{\text{matter}}\) stands for the action of the matter fluids present in the Universe. Notice that the variation of the action (244) with respect to \(\lambda\) leads to the unimodular constraint \[\sqrt{-g}=1\,. \tag{245}\] The unimodular constraint given in Eq. (245) is the key point of unimodular \(F(R)\) gravity. Now, if we vary the unimodular \(F(R)\) gravity action (244) with respect to the metric tensor \(g_{\mu\nu}\), we obtain the unimodular \(F(R)\) gravity field equations as follows \[0=\frac{1}{2}g_{\mu\nu}\left(F(R)-\lambda\right)-R_{\mu\nu}F^{\prime}(R)+ \nabla_{\mu}\nabla_{\nu}F^{\prime}(R)-g_{\mu\nu}\nabla^{2}F^{\prime}(R)+\frac {1}{2}T_{\mu\nu}\,. \tag{246}\] Now, in order to proceed with the cosmological evolution, one needs to be careful because the flat standard FLRW metric (3) does not satisfy the unimodular constraint given in Eq. (245). Nevertheless, taking the following coordinate transformation \[d\tau=a^{3}(t)dt\,, \tag{247}\] one can verify that the resulting metric satisfies the unimodular condition (245). Now, with the use of the transformation (247), the FLRW metric in (3) can be written as \[ds^{2}=-a^{-6}\left(t\left(\tau\right)\right)d\tau^{2}+a^{2}\left(t\left(\tau \right)\right)\left(dx^{2}+dy^{2}+dz^{2}\right), \tag{248}\] which for the sake of brevity we call the unimodular FLRW metric. Using this unimodular FLRW metric, the vacuum field equations become [597, 598, 602, 599] \[0= -\frac{a^{-6}}{2}\left(F(R)-\lambda\right)+\left(3\dot{K}+12K^{2} \right)F^{\prime}(R)-3K\frac{dF^{\prime}(R)}{d\tau}\,, \tag{249}\] \[0= \frac{a^{-6}}{2}\left(F(R)-\lambda\right)-\left(\dot{K}+6K^{2} \right)F^{\prime}(R)+5K\frac{dF^{\prime}(R)}{d\tau}+\frac{d^{2}F^{\prime}(R)} {d\tau^{2}}\,, \tag{250}\] where the function \(K(\tau)\) is defined as the corresponding Hubble rate in the "\(\tau\)" coordinate, that is, \[K=\frac{1}{a(\tau)}\frac{da(\tau)}{d\tau}\,. \tag{251}\] By using the unimodular FLRW metric of Eq. (248), the corresponding Ricci scalar now becomes \[R=a^{6}\left(6\dot{K}+30K^{2}\right)\,. \tag{252}\] Now, in the following sections we describe the correspondence between the Jordan frame unimodular \(F(R)\) gravity and its Einstein frame counterpart. We begin with the unimodular \(F(R)\) gravity action in the Jordan frame, i.e., Eq.
(244) without the matter sector, and following the same approach as in the ordinary \(F(R)\) gravity, that means, introducing the auxiliary field \(A\), we rewrite the action (244) as follows \[S=\int d^{4}x\bigg{[}\sqrt{-g}\bigg{(}F^{\prime}(A)(R-A)+F(A)-\lambda\bigg{)} +\lambda\bigg{]}\,. \tag{253}\] We note that the last term of the action (253) will remain unaffected by the conformal transformations. Now to obtain a minimally coupled scalar-tensor theory, we perform the conformal transformation as in the standard \(F(R)\) gravity case, i.e., \(\hat{g}_{\mu\nu}=\mathrm{e}^{\sigma}g_{\mu\nu}\), where \(\hat{g}\) denotes the metric in the Einstein frame, and \(g\) is the metric in the Jordan frame. Further, the scalar field \(\sigma\) in terms of the auxiliary field \(A\) is given by, \(\sigma=-\ln F^{\prime}(A)\), hence, the action (253) becomes, \[S=\int d^{4}x\left\{\sqrt{-\hat{g}}\left(\hat{R}-\frac{3}{2}\hat{g}^{\mu\nu} \partial_{\mu}\sigma\partial_{\nu}\sigma-V(\sigma)-\lambda\mathrm{e}^{-2\sigma }\right)+\lambda\right\}\,, \tag{254}\] which describes a canonical scalar field action, in the absence of any matter sector. We note that the unimodular constraint is not unaffected by the conformal transformation, so in the case it becomes \[\sqrt{-\hat{g}}=\mathrm{e}^{2\sigma}\,. \tag{255}\] In effect, the FLRW metric does not satisfy this constraint identically, so in order to overcome this issue, similar to the earlier approach, we introduce a new time coordinate \(\tilde{\tau}\), which is related to the cosmic time \(\tilde{t}\) as follows \[d\widetilde{\tau}=\widetilde{a}^{3}\left(\ \widetilde{t}\ \right)\mathrm{e}^{2 \sigma\left(\ \widetilde{t}\ \right)}d\widetilde{t}\,, \tag{256}\] so that the conformally transformed unimodular constraint of Eq. (255) is satisfied and the corresponding Einstein frame unimodular FLRW metric takes the form \[ds^{2}=-\frac{\mathrm{e}^{4\sigma\left(\widetilde{\tau}\right)}}{\widetilde{a }^{6}\left(\ \widetilde{\tau}\ \right)}d\widetilde{\tau}^{2}+\widetilde{a}^{2}\left( \widetilde{\tau}\right)\sum_{i=1}^{3}dx_{i}^{2}\,. \tag{257}\] In summary, in order to transform from one frame to other at the level of metric, we have the following transformations. The scale factor transforms as \[\widetilde{a}\left(\ \widetilde{\tau}\ \right)=\mathrm{e}^{\sigma/2}a(\tau)\,, \tag{258}\] where the parameter \(\widetilde{\tau}\) is related to \(\tau\) as follows, \[\left(\frac{\mathrm{e}^{4\sigma\left(\widetilde{\tau}\right)}}{\widetilde{a} ^{6}\left(\ \widetilde{\tau}\ \right)}\right)d\widetilde{\tau}^{2}=\left(\frac{\mathrm{e}^{\sigma}}{a^{6}( \tau)}\right)d\tau^{2}\,. \tag{259}\] Combining Eqs. (258) and (259), one can easily observe that the new time coordinate \(\widetilde{\tau}\) is the same as the coordinate \(\tau\), i.e., \(\widetilde{\tau}=\tau\). So far we discuss the unimodular \(F(R)\) gravity scenario, however, we also need to discuss the scalar-tensor unimodular gravity. To proceed, we begin with the following minimally coupled scalar-tensor action \[S=\int d^{4}x\left\{\sqrt{-\hat{g}}\left(\frac{\hat{R}}{2\kappa^{2}}-\frac{1} {2}\hat{g}^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi-V(\varphi)- \lambda h(\varphi)\right)+\lambda\right\}\,, \tag{260}\] where \(\lambda\) is a constant and in addition, for the time being we have assumed that the determinant of the metric is given by an arbitrary function of the scalar field but this will be determined later by the requirement that the action has a Jordan frame. 
Now, considering the flat unimodular FLRW metric, for the above action (260), one can write down the gravitational equations which are nothing but the standard scalar field cosmological equations with a modified scalar potential having the form \(V(\varphi)+\lambda h(\varphi)\). In particular, the gravitational equations read \[3\widetilde{K}^{2}= \frac{1}{2}\dot{\varphi}^{2}+\Big{(}V(\varphi)+\lambda h(\varphi )\Big{)}\widetilde{a}^{-6}(\ \widetilde{\tau}\ )\,, \tag{261}\] \[9\widetilde{K}^{2}+2\dot{\widetilde{K}}= -\frac{1}{2}\dot{\varphi}^{2}+\Big{(}V(\varphi)+\lambda h(\varphi )\Big{)}\widetilde{a}^{-6}(\widetilde{\tau})\,, \tag{262}\] where \(\widetilde{K}(\widetilde{\tau})\) denotes the unimodular Hubble parameter in the Einstein frame, explicitly given by \[\widetilde{K}\left(\widetilde{\tau}\right)=\widetilde{K}(\tau)=\frac{1}{ \tilde{a}(\tau)}\frac{d\widetilde{a}(\tau)}{d\tau}\,. \tag{263}\] Now in order to conformally transform this action (260) to a Jordan frame, we apply the following conformal transformation \[g_{\mu\nu}=\mathrm{e}^{\pm\kappa\sqrt{\frac{2}{3}\widetilde{\tau}}}\hat{g}_{ \mu\nu}\,. \tag{264}\] This rescaling eliminates the kinetic term from the action (260) and recast into the following form \[S=\int d^{4}x\Bigg{[}\sqrt{-g}\bigg{\{}\mathrm{e}^{\pm\kappa\sqrt{\frac{2}{3}} \varphi}2\kappa^{2}R-\mathrm{e}^{\pm 2\kappa\sqrt{\frac{2}{3}}\varphi}\bigg{(}V( \varphi)+\lambda h(\varphi)\bigg{)}\bigg{\}}+\lambda\Bigg{]}\,. \tag{265}\] Now, varying the action (265) with respect to the scalar field \(\phi\), which is now just an auxiliary field, we get \[R=\mathrm{e}^{\pm\kappa\sqrt{\frac{2}{3}}\varphi}\Bigg{[}4\kappa^{2}\bigg{(}V( \varphi)+\lambda h(\varphi)\bigg{)}\pm 2\kappa\sqrt{\frac{2}{3}}\bigg{(}V^{ \prime}(\varphi)+\lambda h^{\prime}(\varphi)\bigg{)}\Bigg{]}\,. \tag{266}\] In order to find the Jordan frame unimodular \(F(R)\) gravity, one needs to invert Eq. (266) to get the scalar field \(\phi(R)\), as a function of \(R\) only, and so independent of \(\lambda\). This directs us to choose \[h(\varphi)=\mathrm{e}^{-2\kappa\sqrt{\frac{2}{3}}\varphi}\,. \tag{267}\] Hence, in this case we can invert to find \(\varphi=\varphi(R)\), and consequently, the resulting unimodular \(F(R)\) gravity theory takes the form \[S=\int d^{4}x\Bigg{[}\sqrt{-g}\bigg{(}F(R)-\lambda\bigg{)}+\lambda\Bigg{]}, \tag{268}\] where the expression of \(F(R)\) is given by \[F(R)=\mathrm{e}^{\pm\kappa\sqrt{\frac{2}{3}}\varphi(R)}2\kappa^{2}R-\mathrm{e }^{\pm 2\kappa\sqrt{\frac{2}{3}}\varphi(R)}V(\varphi(R))\,. \tag{269}\] Now, having the correspondence between the Jordan and Einstein frames, in the following we discuss the correspondence of the finite-time singularities in the two frames. We again note that for all the physical quantities in the Einstein frame we use tilde to differ it from the Jordan frame, and additionally, we set the gravitational coupling constant to be unity throughout the discussions, i.e., \(\kappa=1\). _(i) Power law evolution_ We consider the power law cosmology characterized by the following scale factor expressed in terms of \(\widetilde{\tau}\) (the time coordinate of the unimodular Einstein frame FLRW metric (257)) as follows \[\widetilde{a}\left(\widetilde{\tau}\right)=\widetilde{a}_{c}\left(\frac{ \widetilde{\tau}}{\widetilde{\tau}_{c}}\right)^{p}\,, \tag{270}\] where \(\widetilde{\tau}_{c}\) is some fiducial time and \(p\) is a positive real number. For this scale factor, the unimodular Hubble parameter becomes \(\widetilde{K}(\tau)=p\widetilde{\tau}^{-1}\). 
Thus, one can see that at \(\widetilde{\tau}=0\), the unimodular Hubble rate diverges, and hence, for the power law cosmology in the Einstein frame, a Big Bang singularity is realized. The above scale factor is a solution of the Einstein frame Friedmann and Raychaudhuri equations (261)-(262) when the potential has an exponential form. In this case the scalar field takes the form \[\varphi\left(\widetilde{\tau}\right)=\pm\sqrt{2p(1-3p)}\,\log\left(\widetilde {\tau}/\widetilde{\tau}_{c}\right)\,. \tag{271}\] Now, using this solution (271), it is possible to change all the variables from the Einstein frame to the Jordan frame with the use of the following conformal transformation \[g_{\mu\nu}=\mathrm{e}^{\pm\sqrt{\frac{2}{3}}\varphi}\hat{g}_{\mu\nu}\,. \tag{272}\] As discussed above, the time coordinate \(\tau\) in the Jordan frame is equivalent to the original time coordinate in the Einstein frame. Thus, using the transformation (272), one can express scale factor in the Jordan frame as follows \[a(\tau)=\mathrm{e}^{\pm\frac{1}{2}\sqrt{\frac{2}{3}}\varphi}\widetilde{a}( \widetilde{\tau})\sim\tau^{p\pm\sqrt{\frac{p(1-3p)}{3}}}\,. \tag{273}\] Now, when the minus sign is taken in the conformal transformation, we see that for \(0<p<1/6\) the quantity \(p-\sqrt{\frac{p(1-3p)}{3}}\) is negative, so at \(\tau=0\) the scale factor diverges, this means that in the Jordan frame we have a contracting Universe. For \(p=1/6\) the Universe becomes static. When \(1/6<p\leq 1/3\), in the Jordan frame, the Universe conserves the Big Bang singularity at \(\tau=0\), and finally, for \(p>1/3\) the exponent in (273) becomes complex, meaning that this case does not make physical sense. _(ii) The case \(F(R)=R^{-n}\) in the Jordan Frame_ Here we investigate a model starting in the Jordan frame and conformally transform it to the Einstein frame. Let us consider the vacuum unimodular \(F(R)\) gravity having a typical form \(F(R)\sim R^{-n}\). In this case, the scale factor evolves as \[a(\tau)\sim(\tau_{s}-\tau)^{\frac{1+3n+2n^{2}}{5+10n+6n^{2}}}\,, \tag{274}\] where we have used the \(\tau\) coordinate of the unimodular FLRW metric. One can see that a Big Rip singularity occurs in the \(\tau\) coordinate, if the parameter \(n\) lies in the range \(-1<n<-1/2\). Now, in terms of the original FLRW cosmic time \(t\), the scale factor takes the form \[a(t)\sim t^{\frac{(2n+1)(n+1)}{2+n}}\, \tag{275}\] which shows that a Big Rip singularity in the \(\tau\) coordinate, appears in the \(t\) coordinate if \(n<-2\), or, \(-1<n<-1/2\). We shall now investigate what happens in the corresponding Einstein frame scalar-tensor theory. The scalar field \(\sigma\) of the conformal factor is given by \[\sigma\sim(n+1)\ln R\sim-\left(\frac{2(n+1)(2+n)}{5+10n+6n^{2}}\right)\ln(\tau _{s}-\tau)\,, \tag{276}\] where the Ricci scalar in the unimodular time parameter is given by \[R\sim(\tau_{s}-\tau)^{-2\left(\frac{2+n}{5+10n+6n^{2}}\right)}\,. \tag{277}\] This means that under a conformal transformation, the scale factor transforms as \[\widetilde{a}\left(\widetilde{\tau}\right)=\mathrm{e}^{\sigma/2}a(\tau), \tag{278}\] where the parameter \(\widetilde{\tau}\) is the same as the original time coordinate \(\tau\), that means, \(\widetilde{\tau}=\tau\). Therefore, we get \[\widetilde{a}\sim(\tau_{s}-\tau)^{\frac{n^{2}-1}{6n^{2}+10n+5}}\,. \tag{279}\] These conditions show that we could have more values of \(n\) for which the power of the scale factor is negative. 
As one can see that for \(-1<n<1\), the power of the scale factor becomes negative. Thus, we conclude that a Type I singularity will appear for more values of \(n\). _(iii) A singular cosmological model_ Here we consider a toy model where the unimodular Hubble parameter in the Einstein frame is given by \[\widetilde{K}(\ \widetilde{\tau}\ )=f_{s}(\widetilde{\tau}-\widetilde{\tau}_{s})^{ \alpha}\,, \tag{280}\] where \(k_{s}\), \(\alpha\) are real numbers and \(\widetilde{\tau}_{s}\) refers to a specific time instant of \(\widetilde{\tau}\). For this Hubble parameter, the scale factor can be given by \[\widetilde{a}\left(\widetilde{\tau}\right)=a_{s}\exp\left[\frac{f_{s}\left( \widetilde{\tau}-\widetilde{\tau}_{s}\right)^{\alpha+1}}{\alpha+1}\right]\,. \tag{281}\] where \(a_{s}\) is a constant. Now, to convert this Einstein frame solution to the Jordan frame, one needs to find the scalar field \(\varphi\) giving rise to such a solution (280). To do this it is essential to solve the differential equation obtained by subtracting the unimodular Friedmann equations (261) and (262) as follows \[0=\dot{\varphi}^{2}+6f_{s}^{2}\left(\widetilde{\tau}-\widetilde{\tau}_{s} \right)^{2\alpha}+2f_{s}\alpha\left(\widetilde{\tau}-\widetilde{\tau}_{s} \right)^{\alpha-1}\,. \tag{282}\] However, this equation cannot be solved in general for an arbitrary power of \(\alpha\), hence, one needs to find an approximate solution to the above differential equation. Thus, in order to proceed, one needs to approximate the solution around the singularity \(\widetilde{\tau}=\widetilde{\tau}_{s}\). This can be done for two cases as follows: when \(\alpha<-1\) and the Hubble rate is that of a Type I singularity, and when \(\alpha>-1\) and the other types of singularities are present. For \(\alpha<-1\), and around the singularity, the second term of (282) dominates over the third term and the differential equation (282) is approximated to be \[\dot{\varphi}^{2}+6f_{s}^{2}\left(\widetilde{\tau}-\widetilde{\tau}_{s}\right)^{ 2\alpha}\sim 0\,. \tag{283}\] In this case, the solution for \(\varphi\) becomes imaginary, and hence, such a Hubble rate could only be described by a phantom scalar field. For such a phantom field, the corresponding Jordan frame \(F(R)\) becomes complex and this is unphysical. However, for \(\alpha>-1\), and around the singularity, the third term of (282) dominates over the second term and therefore the scalar field behaves as \[\varphi\left(\widetilde{\tau}\right)\sim\pm\frac{2\sqrt{-2f_{s}\alpha}}{ \alpha+1}\left(\widetilde{\tau}-\widetilde{\tau}_{s}\right)^{\frac{\alpha+1}{ 2}}\,, \tag{284}\] which is real if \(-2f_{s}\alpha\geq 0\) and nontrivial if \(-2f_{s}\alpha\neq 0\). Thus, in this case one can continue further and conformally transform to the Jordan frame. Applying the conformal transformation (264) and using the fact that the time coordinate is unchanged, i.e., \(\tau=\widetilde{\tau}\), the scale factor in the Jordan frame reads \[a(\tau)\sim a_{s}\exp\left(\frac{3f_{s}(\tau-\tau_{s})^{\alpha+1}\pm 2\sqrt{-3 \alpha f_{s}}\left(\tau-\tau_{s}\right)^{\frac{\alpha+1}{2}}}{3(\alpha+1)} \right)\,. \tag{285}\] Now from this scale factor (285), one can describe a variety of finite-time singularities as follows: * For \(-1<\alpha<1\), a Type III singularity occurs. * For \(1<\alpha<3\), a Type II singularity occurs. * For \(3<\alpha\), a Type IV singularity occurs. In TABLE 2 we show how the singularities change their types from one frame to another4. 
One can observe that the unimodular \(F(R)\) case behaves similarly to the standard \(F(R)\) case. From TABLE 2, we observe that a Type II singularity in the Einstein frame is modified to the more severe Type III singularity in the Jordan frame. The Type IV singularity in the Einstein frame may become a more severe Type II singularity in the Jordan frame if the parameter \(\alpha\) lies in the range \(1<\alpha<3\). Footnote 4: We note that in TABLE 2, we do not keep Type I singularity since it only appears when \(\alpha<-1\), and hence, we have a phantom scalar field and consequently for such a phantom scalar field, the corresponding Jordan frame \(F(R)\) becomes complex and this is unphysical. Qualitative Analysis of the Phase structure of Unimodular \(F(r)\) gravity near Finite-Time Singularities Here we discuss the qualitative behavior of the dynamical system corresponding to the vacuum unimodular \(F(R)\) gravity near the finite-time singularities. To begin with, we introduce the following variables [594] \[x_{1}=-\frac{1}{KF^{\prime}(R)}\frac{dF^{\prime}(R)}{d\tau}\,,\quad x_{2}=- \frac{F(R)}{6K^{2}F^{\prime}(R)a^{6}}\,,\quad x_{3}=\frac{R}{6K^{2}a^{6}}\,, \quad x_{4}=\frac{\lambda}{6a^{6}K^{2}F^{\prime}(R)}\,, \tag{286}\] which allows one to write the Friedmann equation as \[x_{1}+x_{2}+x_{3}+x_{4}=1\,. \tag{287}\] \begin{table} \begin{tabular}{|c|c|} \hline **Singularity in the Einstein Frame** & **Singularity in the Jordan Frame** \\ \hline \hline Type II & Type III \\ Type III & Type III \\ Type IV & Type IV or Type II \\ \hline \end{tabular} \end{table} Table 2: We show the correspondence of finite-time singularities in the Einstein and Jordan frames, for the cosmological evolution (280) in the Einstein frame for the case \(\alpha>-1\). This constraint tells us that in order to describe the dynamics one needs only three variables as the fourth variable can be obtained using the remaining three variables. Using the above variables, the Raychaudhuri equation becomes \[\frac{1}{F^{\prime}(R)K^{2}}\frac{d^{2}F^{\prime}(R)}{d\tau^{2}}=1+5x_{1}+3x_{2} +x_{3}+3x_{4}\,. \tag{288}\] Now, differentiating the variables as defined in Eq. (286) with respect to a cosmological time \(dN=K(\tau)d\tau\), one gets the following dynamical system [594] \[x_{1}^{\prime}= -1+x_{1}^{2}-x_{1}x_{3}-3x_{2}-x_{3}-3x_{4}\,,\] \[x_{2}^{\prime}= -m+4x_{2}+x_{1}x_{2}-2x_{2}x_{3}+50-16x_{3}\,,\] \[x_{3}^{\prime}= m+20x_{3}-2x_{3}^{2}-50\,,\] \[x_{4}^{\prime}= x_{4}(x_{1}-2x_{3}+4)\,, \tag{289}\] where \(m=\tilde{K}/K^{3}\). Let us note that from the constraint Eq. (287), in reality, one only has three dynamical equations in (289). In fact, one can disregard the fourth equation which is a combination of the other three. The dynamical system, as one can notice, is non-autonomous due to the existence of the term \(m\). The critical points of the system (289) have been shown in TABLE 3. For an autonomous system \(m\) should vanish. Now if we consider that \(K(\tau)=f_{s}(\tau-\tau_{s})^{\alpha}\) with \(\alpha<-1\) which corresponds to the Big Rip singularity as \(\tau\to\tau_{s}\), then the parameter \(m\) is equal to, \(m=\frac{(-1+\alpha)\alpha(\tau-\tau_{s})^{-2-2\alpha}}{f_{s}^{2}}\sim 0\), and consequently, the dynamical system of Eq. (289) becomes autonomous near the Big Rip singularity. 
Thus, one may expect that finding the fixed points of the dynamical system near the Big Rip singularity could offer some insights on the non-autonomous system, however, from TABLE 3, one can clearly see that if one sets \(m=0\), then \(x_{1}\) and \(x_{2}\) become complex, that means the critical points are not real. In fact, if we assume that \(m\) is a constant, then in order to have real critical points one needs \(m\gtrsim 17\). This indicates that the dynamics near the Big Rip singularity behaves very strangely. ### \(F(g)\) Gravity Apart from the popular modified gravity models, namely, \(F(R)\) and \(F(T)\), another very interesting class of modified gravity models which can explain the late-time cosmic acceleration is the string-inspired modified Gauss-Bonnet gravity \(-\) the so called \(F(G)\)-gravity [142; 146; 603; 604; 605; 606; 607; 608; 609; 610; 611; 612; 613; 614; 615; 616; 616; 617; 618; 619; 620; 621; 622; 623; 624] where \(F(G)\) is any arbitrary function of the Gauss-Bonnet invariant defined in (141). Even though this class of modified gravity models can explain the late-time accelerating expansion of the Universe, however, it has been found that they can lead to finite-time future singularities [589; 625]. In this section, we shall describe the appearance of finite-time singularities in this class of modified gravity models. Unfortunately this model includes ghosts [626] although the ghost-free modifications have been proposed [627; 628]. In this section, however, we show the structure of the singularities based on the original models for the purpose of the illustration. We start with the action of \(F(G)\)-gravity which is given by [142] \[S=\int d^{4}x\sqrt{-g}\left[\frac{1}{2\kappa^{2}}\bigg{(}R+F(G)\bigg{)}+L_{ \rm matter}\right]\,. \tag{290}\] In the background of a spatially flat FLRW Universe in (3), one can write down the equation of motion for this gravity as follows \[24H^{3}\dot{F}^{\prime}(G)+6H^{2}+F(G)-GF^{\prime}(G)= \,2\kappa^{2}\rho\,, \tag{291}\] \[8H^{2}\ddot{F}^{\prime}(G)+16H\dot{F}^{\prime}(G)\left(\dot{H}+H^ {2}\right)+\left(4\dot{H}+6H^{2}\right)+F(G)-GF^{\prime}(G)= \,-\,2\kappa^{2}p\,, \tag{292}\] where the overhead dot stands for the derivative with respect to the cosmic time and the prime denotes the differentiation with respect to \(G\). The above two equations (291) and (292) can also be alternatively represented as \[\rho_{\rm eff}=\frac{3}{\kappa^{2}}H^{2}\,,\quad p_{\rm eff}=-\frac{1}{\kappa ^{2}}\left(2\dot{H}+3H^{2}\right)\,, \tag{293}\] where \(\rho_{\rm eff}\) and \(p_{\rm eff}\) are the effective energy density and pressure of the Universe, respectively. We note that the energy density and pressure of the matter sector described by \(L_{\rm matter}\) in Eq. (290) are contained in \(\rho_{\rm eff}\) and \(p_{\rm eff}\). Now, using the expressions of \(R\) and \(G\) in the FLRW Universe, given by \(R=6\left(2H^{2}+\dot{H}\right)\) and \(G=24H^{2}\left(H^{2}+\dot{H}\right)\), one can visualize \(\rho_{\rm eff}\) and \(p_{\rm eff}\) of Eq. 
(293) as follows: \[\rho_{\rm eff}= \,\frac{1}{2\kappa^{2}}\left[-F(G)+24H^{2}\left(H^{2}+\dot{H} \right)F^{\prime}(G)-24^{2}H^{4}\left(2\dot{H}^{2}+H\ddot{H}+4H^{2}\dot{H} \right)F^{\prime\prime}(G)\right]+\rho\,, \tag{294}\] \[p_{\rm eff}= \,\frac{1}{2\kappa^{2}}\Bigg{[}F(G)-24H^{2}\left(H^{2}+\dot{H} \right)F^{\prime}(G)+(8\times 24)\;H^{2}\Big{\{}6\dot{H}^{3}+8H\dot{H}\ddot{H}+24 \dot{H}^{2}H^{2}+6H^{3}\ddot{H}\] \[\,+8H^{4}\ddot{H}+H^{2}\ddot{H}\Big{\}}F^{\prime\prime}(G)+(8 \times 24^{2})\;H^{4}\left(2\dot{H}^{2}+H\ddot{H}+4H^{2}\dot{H}\right)^{2}F^{ \prime\prime\prime}(G)\Bigg{]}+p\,, \tag{295}\] Assuming the matter sector with a constant EoS parameter \(w\equiv p/\rho\), and then combining the Friedmann and Raychaudhuri equations in Eq. (293), one leads to the following equation: \[G\left(H,\dot{H},\ddot{H},\overline{H},...\right)=-\frac{1}{\kappa^{2}}\left[ 2\dot{H}+3(1+w)H^{2}\right], \tag{296}\] where \[G\left(H,\dot{H},\ddot{H},\overline{H},...\right)=p_{\rm eff}-w\rho_{\rm eff}. \tag{297}\] The Eq. (296) brings in a very crucial physical insight in terms of the new function \(G\left(H,\dot{H},\ddot{H},\overline{H},...\right)\) which involves \(H\), \(\dot{H}\) and the higher order derivatives of \(H\). When a cosmological model is prescribed in terms of the Hubble rate \(H=H(t)\), the right-hand side of Eq. (296) then reduces to a function, \(f(t)\) of the cosmic time \(t\). Now, if the function \(G\left(H,\dot{H},\ddot{H},\overline{H},...\right)\) in Eq. (297) is chosen in such a way so that \(G\left(H,\dot{H},\ddot{H},\widetilde{H},...\right)\) reproduces the above function \(f(t)\), then the aforementioned cosmology given by \(H=H(t)\) can be realized. Therefore, the function \(G\left(H,\dot{H},\ddot{H},\overline{H},...\right)\) plays a very crucial role to judge the viability of a given cosmological scenario characterized by the Hubble rate [589]. The mathematical form of \(G\left(H,\dot{H},\ddot{H},\overline{H},...\right)\) can be determined from the prescribed gravitational theory. In the context of \(F(G)\)-gravity, inserting Eqs. (294) and (295) into Eq. (297), one obtains, \[G\left(H,\dot{H},\ddot{H},\overline{H},...\right)=\frac{1}{2\kappa ^{2}} \,\left[(1+w)F(G)-24(1+w)H^{2}\left(H^{2}+\dot{H}\right)F^{\prime}(G)+(8 \times 24)H^{2}\left\{6\dot{H}^{3}+8H\dot{H}\ddot{H}\right.\right. \tag{298}\] \[\left.\left.+6(4+w)\dot{H}^{2}H^{2}+3(2+w)H^{3}\ddot{H}+4(2+3w)H^ {4}\dot{H}+H^{2}\ddot{H}\right\}F^{\prime\prime}(G)\right.\] \[\left.+(8\times 24^{2})H^{4}\left(2\dot{H}^{2}+H\ddot{H}+4H^{2} \dot{H}\right)^{2}F^{\prime\prime\prime}(G)\right]\,.\] #### vi.2.1 Finite-time Singularities In this section, we investigate the possibility of finite-time future singularities in \(F(G)\) gravity models. To start with we consider the following expression of the Hubble parameter in (198). From Eq. (198) one can clearly notice that if \(\beta>0\), the \(H(t)\) becomes singular in the limit \(t\to t_{s}\). Therefore, \(t_{s}\) determines the time when a singularity appears in the above model. While on the other hand, for \(\beta<0\), \(H(t)\) does not exhibit any singular behavior, but for non-integer negative values of \(\beta\), some derivative of \(H(t)\) may be singular and consequently the curvature could exhibit singular behavior [589]. Therefore, we see that for both positive and negative values of \(\beta\), singularities in \(H(t)\) or in the curvature may appear. 
Therefore, we assume \(\beta\neq 0\) because the case \(\beta=0\) corresponds to the de Sitter space, which has no singularity. Thus, for the above cosmology characterized by the Hubble rate in Eq. (198), one can try to find the equivalent \(F(G)\) models following the reconstruction technique described in Refs. [154, 589, 625, 629]. Now, with the choice of suitable functions \(P(t)\) and \(Q(t)\) of a scalar field \(t\) which in this case is identified with the cosmic time, the action integral in Eq. (290) can be expressed as \[S=\int d^{4}x\sqrt{-g}\left[\frac{1}{2\kappa^{2}}\bigg{(}R+P(t)G+Q(t)\bigg{)}+ L_{\rm matter}\right]\,. \tag{299}\] The variation of the action in Eq. (299) with respect to \(t\) gives, \[\frac{dP(t)}{dt}G+\frac{dQ(t)}{dt}=0\,, \tag{300}\] from which one can in principle express \(t=t(G)\). Now, substituting \(t=t(G)\) into Eq. (299), one can express the action in terms of \(F(G)\) where \[F(G)=P\left(t(G)\right)G+Q\left(t(G)\right)\,. \tag{301}\] Now, subtracting the equations of motion, namely, Eqs. (291) and (292), we obtain a second order differential equation \[8\frac{d}{dt}\left(H^{2}\frac{dP}{dt}\right)-8H^{3}\frac{dP}{dt}+4\dot{H}+2 \kappa^{2}\rho_{0}(1+w)a^{-3(1+w)}=0\,, \tag{302}\] where \(\rho_{0}\) refers to the present energy density of \(\rho\). We note that instead of considering only one fluid in Eqs. (294) and (295), if one would consider several matter components with \(w_{i}=p_{i}/\rho_{i}\) being the EoS parameter of the \(i\)-th fluid, then in this case, the last term of the equation (302) will be replaced by \(2\kappa^{2}\sum_{i}(1+w_{i})\rho_{i0}a^{-3(1+w_{i})}\). If one can solve for \(P(t)\) from Eq. (302), then from Eq. (291), one can find \(Q(t)\) as follows \[Q(t)=-24H^{3}\frac{dP}{dt}-6H^{2}+2\kappa^{2}\rho_{0}a^{-3(1+w)}\,. \tag{303}\] Thus, one can see that any cosmology which is prescribed in terms of any Hubble rate, then this cosmology can be realized by some specific \(F(G)\) gravity model. Just for simplicity, we neglect the matter term from the above equations and focus on the specific \(F(G)\) models described by the Hubble rate given in Eq. (198). Therefore, under the absence of matter term, Eqs. (302) and (303) become, \[8\frac{d}{dt}\left(H\frac{dP}{dt}\right)-8H^{3}\frac{dP}{dt}+4\dot{H}=0\,, \tag{304}\] and \[Q(t)=-24H^{3}\frac{dP}{dt}-6H^{2}\,. \tag{305}\] In the following we show a correspondence between the finite-time singularities and the associated \(F(G)\) model. (i) _Big Rip singularity:_ Let us focus on the Big Rip singularity which is realized by some of the \(F(G)\) gravity models. To begin with we consider \(\beta=1\) and \(H_{s}=0\) in Eq. (198), for which the Hubble rate \(H(t)\) takes the form of (59) [625] and consequently one can derive \(G\) as follows, \[G=\frac{24h_{s}^{3}}{(t_{s}-t)^{4}}(1+h_{s})\,. \tag{306}\] Thus, with the above choice of \(H\), the most general solution of the second order differential equation (304) is [625] \[P(t)=\frac{1}{4h_{s}(h_{s}-1)}(2t_{s}-t)t+c_{1}\frac{(t_{s}-t)^{3-h_{s}}}{3-h_{s} }+c_{2}\,, \tag{307}\] where \(c_{1}\) and \(c_{2}\) are arbitrary constants. Having the solution for \(P(t)\) as in Eq. (307), from Eq. (305), one can derive \(Q(t)\) as [625] \[Q(t)=-\frac{6h_{s}^{2}}{(t_{s}-t)^{2}}-\frac{24h_{s}^{3}}{(t_{s}-t)^{3}}\left[ \frac{(t_{s}-t)}{2h_{s}(h_{s}-1)}-c_{1}(t_{s}-t)^{2-h_{s}}\right]\,. \tag{308}\] Further, from Eq. (300) we obtain \[t-t_{s}=\left[\frac{24(h_{s}^{3}+h_{s}^{4})}{G}\right]^{1/4}\,, \tag{309}\] which, as one can see, is consistent with Eq. 
(306). Thus, having all the above information, using Eq. (301), one can write down the most general form of \(F(G)\) realizing the Big Rip singularity as follows [625]: \[F(G)=\left(\frac{\sqrt{6h_{s}^{3}(1+h_{s})}}{h_{s}(1-h_{s})}\right)\sqrt{G}+c _{1}G^{\frac{1+h_{s}}{4}}+c_{2}G\,. \tag{310}\] For \(h_{s}=1\), that means when the Hubble rate becomes \(H(t)=(t_{s}-t)^{-1}\), one can also find another exact solution for \(P(t)\) as follows, \[P(t)=\alpha(t_{s}-t)^{q}\ln\left[\gamma(t_{s}-t)^{z}\right]\,, \tag{311}\] where \(\gamma\) is a positive real number, From Eq. (305), one can derive, \[Q(t)=-\frac{12}{(t_{s}-t)^{2}}\ln\left[\gamma(t_{s}-t)\right]\,, \tag{312}\] Thus, using the expressions for \(P(t)\) and \(Q(t)\), the form of \(F(G)\) is given by [625] \[F(G)=\frac{\sqrt{3}}{2}\sqrt{G}\ln(\gamma G)\,. \tag{313}\] This form of \(F(G)\) corresponds to the Hubble rate (198) with \(h_{s}=1\) and \(H_{s}=0\) and it realizes the Big Rip singularity. In general, for large values of \(G\), \(F(G)\sim\alpha\sqrt{G}\ln(\gamma G)\) with \(\alpha>0\), \(\gamma>0\), and the Big Rip singularity may appear. Moreover, the Big Rip singularity may also appear for \(F(G)\sim\alpha\sqrt{G}\ln(\gamma G^{z}+G_{s})\) with \(\alpha>0\), \(\gamma>0\), \(z>0\) where \(G_{s}\) is a constant such that \(\gamma G^{z}+G_{s}>0\). (ii) _Other types of singularities:_ Let us now discuss the appearance of other types of singularities in \(F(G)\)-gravity models. We consider the Hubble rate of Eq. (198) with \(H_{s}=0\) but \(\beta\neq 1\). In this case, the evolution of the scale factor \(a(t)\) turns out to be \[a(t)=a_{0}\exp\left[\frac{h_{s}}{\beta-1}\left((t_{s}-t)^{1-\beta}-(t_{s}-t_{0 })^{1-\beta}\right)\right]\,, \tag{314}\] and we observe the emergence of following singularities: * When \(\beta>1\), then \(H\) and \(G\) are given by \[H=\frac{h_{s}}{(t_{s}-t)^{\beta}}\,,\quad G\sim\frac{24h_{s}^{4}}{(t_{s}-t)^{ 4\beta}}\,.\] (315) A solution of the Eq. (304) in the limit \(t\to t_{s}\) can be found as \[P(t)\simeq\frac{\alpha}{(t_{s}-t)^{z}}\,,\] (316) where \(z=-2\beta\) and \(\alpha=-1/4h_{s}^{2}\). The expression of \(F(G)\) now follows \[F(G)=-12\sqrt{\frac{G}{24}}\,.\] (317) Hence, if for large values of \(G\), \(F(G)\sim-\alpha\sqrt{G}\) with \(\alpha>0\), then a Type I singularity could appear. * When \(0<\beta<1\) for which the forms of \(H\) and \(G\) are given by \[H=\frac{h_{s}}{(t_{s}-t)^{\beta}}\,,\quad G\sim\frac{24h_{s}^{3}\beta}{(t_{s}-t)^ {3\beta+1}}\,.\] (318) In this case, an asymptotic solution of Eq. (304) in the limit \(t\to t_{s}\) can be found as \[P(t)\simeq\frac{\alpha}{(t_{s}-t)^{z}}\;,\] (319) where \(z=-(1+\beta)\) and \(\alpha=1/2h_{s}(1+\beta)\). The form of \(F(G)\) now becomes \[F(G)=\frac{6h_{s}^{2}}{(\beta+1)}(3\beta+1)\left(\frac{|G|}{24h_{s}^{3}|\beta| }\right)^{2\beta/(3\beta+1)}\,.\] (320) Hence, if for large values of \(G\), \(F(G)\) has the following form \[F(G)\sim\alpha|G|^{\gamma}\,,\quad\gamma=\frac{2\beta}{3\beta+1}\,,\] (321) where \(\alpha>0\) and \(0<\gamma<1/2\), then because we are assuming \(0<\beta<1\) and in that case, a Type III singularity could emerge. Finally, for a general value of \(\beta\) we have the following: * If for \(G\to-\infty\), \(F(G)\) has the form as in Eq. (321) with \(\alpha>0\) and \(-\infty<\gamma<0\), then we find \(-1/3<\beta<0\) and a Type II (sudden) singularity could appear. * If for \(G\to 0^{-}\), \(F(G)\) takes the form of Eq. 
(321) with \(\alpha<0\) and \(1<\gamma<\infty\), then we obtain \(-1<\beta<-1/3\) and a Type II singularity could occur. * If for \(G\to 0^{-}\), \(F(G)\) assumes the form as in Eq. (321) with \(\alpha>0\) and \(2/3<\gamma<1\), then we obtain \(-\infty<\beta<-1\) and a Type IV singularity could appear. We also require that \(\gamma\neq 2n/(3n-1)\), i.e., \(\beta\neq n\), where \(n\) is a natural number. Let us note that we can generate all the possible Type II singularities as shown above except for \(\beta=-1/3\), i.e., for \(H=h_{s}(t_{s}-t)^{1/3}\) because \(\gamma=0\). In this case, \(G\) takes the form: \[G=24h_{s}^{3}\beta+24h_{s}^{4}(t_{s}-t)^{4/3}<0\,.\] (322) Thus, in order to express \(t\) in terms of \(G\), it is essential to consider the whole expression of \(G\), and by considering the leading term involving \((t_{s}-t)\) in (302) and (303) with \(\rho_{0}=0\), we obtain \[F(G)\simeq\frac{1}{4\sqrt{6}h_{s}^{3}}G(G+8h_{s}^{3})^{1/2}+\frac{2}{\sqrt{6} }(G+8h_{s}^{3})^{1/2}\,, \tag{323}\] which satisfies Eq. (296) in the limit \(t\to t_{s}\). Consequently, the specific model \(F(G)=\sigma_{1}G(G+c_{3})^{1/2}+\sigma_{2}(G+c_{3})^{1/2}\), where \(\sigma_{1}\), \(\sigma_{2}\) and \(c_{3}\) are positive constants, can generate a Type II singularity. ### \(F(r,g)\) Gravity The \(F(R,G)\)-gravity is a very generalized gravitational theory where \(F\) is a generalized function of the Ricci scalar \(R\) and the Gauss-Bonnet invariant is defined in Eq. (141). One can clearly see that the \(F(R,G)\) gravity theory can recover \(F(R)\)-gravity and \(F(G)\)-gravity as special cases. Moreover, with the suitable choice for \(F\) leading to \(F(R,G)=R\), one recovers Einstein's GR as well. Thus, one can see that this modified gravity theory being the generalized version of both \(F(R)\) and \(F(G)\) gravity theories can offer some appealing consequences and very soon of its introduction, \(F(R,G)\) gravity theory received significant attention in the community [630, 623, 625, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645]. In the present section we shall investigate the finite-time future singularities appearing in this generalized modified gravitational theory. This model includes ghosts [626] and the ghost-free modifications have been proposed in Ref. [645]. In this section, however, again for the illustration, we show the structure of the singularities based on the original models. 
The action of the \(F(R,G)\)-gravity is given by [625; 632; 634]: \[S=\int d^{4}x\sqrt{-g}\left[\frac{F(R,G)}{2\kappa^{2}}+L_{\rm matter}\right]\,, \tag{324}\] For the action (324), one can derive the gravitational equations as follows \[F^{\prime}_{R}\left(R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\right)= \,\kappa^{2}T^{(\rm matter)}_{\mu\nu}+\frac{1}{2}g_{\mu\nu}\left( F-F^{\prime}_{R}R\right)+\nabla_{\mu}\nabla_{\nu}F^{\prime}_{R}-g_{\mu\nu}\Box F ^{\prime}_{R}\] \[+\left(-2RR_{\mu\nu}+4R_{\mu\rho}{R_{\nu}}^{\rho}-2{R_{\mu}}^{ \rho\sigma\tau}R_{\nu\rho\sigma\tau}+4g^{\alpha\rho}g^{\beta\sigma}R_{\mu \alpha\nu\beta}R_{\rho\sigma}\right)F^{\prime}_{G}\] \[+2\left(\nabla_{\mu}\nabla_{\nu}F^{\prime}_{G}\right)R-2g_{\mu \nu}\left(\Box F^{\prime}_{G}\right)R+4\left(\Box F^{\prime}_{G}\right)R_{\mu \nu}-4\left(\nabla_{\rho}\nabla_{\mu}F^{\prime}_{G}\right)R_{\nu}{}^{\rho}\] \[-4\left(\nabla_{\rho}\nabla_{\nu}F^{\prime}_{G}\right)R_{\mu}{}^ {\rho}+4g_{\mu\nu}\left(\nabla_{\rho}\nabla_{\sigma}F^{\prime}_{G}\right)R^{ \rho\sigma}-4\left(\nabla_{\rho}\nabla_{\sigma}F^{\prime}_{G}\right)g^{\alpha \rho}g^{\beta\sigma}R_{\mu\alpha\nu\beta}\,, \tag{325}\] where \(\nabla_{\mu}\) is the covariant derivative operator associated with the metric tensor \(g_{\mu\nu}\); \(\Box\equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\) denotes the covariant d'Alembertian for a scalar field; \(T^{(\rm matter)}_{\mu\nu}={\rm diag}\left(\rho,p,p,p\right)\) is the energy-momentum tensor of the matter sector that includes all the ordinary matter fluids with \(\rho\) (\(=\sum_{i}\rho_{i}\)) and \(p\) (\(=\sum_{i}p_{i}\)) are the total energy density and pressure of all the fluids, respectively (\(\rho_{i}\), \(p_{i}\) being the representatives of the \(i\)-th fluid), and \[F^{\prime}_{R}=\frac{\partial F(R,G)}{\partial R}\,,\quad F^{\prime}_{G}= \frac{\partial F(R,G)}{\partial G}\,. \tag{326}\] In the background of a spatially flat FLRW Universe (3), the gravitational field equations in (325) can be expressed as \[\rho_{\rm eff}=\frac{3}{\kappa^{2}}H^{2}\,,\quad p_{\rm eff}=-\frac{1}{ \kappa^{2}}\left(2\dot{H}+3H^{2}\right)\,, \tag{327}\] where \(\rho_{\rm eff}\) and \(p_{\rm eff}\), termed as the effective energy density and pressure of the Universe, respectively, are given by \[\rho_{\rm eff}=\frac{1}{F^{\prime}_{R}}\left[\rho+\frac{1}{2\kappa ^{2}}\left\{(F^{\prime}_{R}R-F)-6H\dot{F}^{\prime}_{R}+GF^{\prime}_{G}-24H^{3} \dot{F}^{\prime}_{G}\right\}\right]\,, \tag{328}\] \[p_{\rm eff}=\frac{1}{F^{\prime}_{R}}\left[p+\frac{1}{2\kappa^{2} }\left\{-\left(F^{\prime}_{R}R-F\right)+4H\dot{F}^{\prime}_{R}+2\ddot{F}^{ \prime}_{R}-GF^{\prime}_{G}+16H\left(\dot{H}+H^{2}\right)\dot{F}^{\prime}_{G}+ 8H^{2}\ddot{F}^{\prime}_{G}\right\}\right]\,. \tag{329}\] Let us now proceed towards the investigations of the finite-time singularities in the \(F(R,G)\)-gravity. Similar to the earlier section IV.6 on \(F(G)\), we reconstruct the \(F(R,G)\)-gravity models that can produce finite-time singularities. In order to do so, we consider the pure gravitation action of \(F(R,G)\)-gravity, that means action (324) without matter sector exactly what we have considered in the earlier section IV.6. In this case, from Eqs. 
(328) and (329), one can write down the gravitational equations as \[0= \,24H^{3}\dot{F}^{\prime}_{G}+6H^{2}F^{\prime}_{R}+6H\dot{F}^{ \prime}_{R}+\left(F-RF^{\prime}_{R}-GF^{\prime}_{G}\right), \tag{330}\] \[0= \,8H^{2}\ddot{F}^{\prime}_{G}+2\ddot{F}^{\prime}_{R}+4H\dot{F}^{ \prime}_{R}+16H\dot{F}^{\prime}_{G}(\dot{H}+H^{2})+F^{\prime}_{R}(4\dot{H}+6H ^{2})+F-RF^{\prime}_{R}-GF^{\prime}_{G}\,. \tag{331}\] We note that in the case of pure gravity (i.e., when no matter sector is present in the gravitational action), the above equations (330) and (331) are linearly dependent. Now, following the similar fashion as adopted in the earlier section (IV.6), with the choice of the proper functions \(P(t)\), \(Z(t)\) and \(Q(t)\) of a scalar field (where we identify the scalar field with time \(t\)), one can rewrite Eq. (324) without \(L_{\rm matter}\) as \[S=\frac{1}{2\kappa^{2}}\int d^{4}x\sqrt{-g}\Bigg{(}P(t)R+Z(t)G+Q(t)\Bigg{)}\,. \tag{332}\] By varying the action in Eq. (299) with respect to \(t\), we obtain \[P^{\prime}(t)R+Z^{\prime}(t)G+Q^{\prime}(t)=0\,, \tag{333}\] where the prime denotes differentiation with respect to \(t\). From Eq. (333), one can in principle express \(t\) as a function of \(R,G\), i.e., \(t=t(R,G)\). Now, with the substitution of \(t=t(R,G)\) in Eq. (299), we can express \(F(R,G)\) as \[F(R,G)=P\left(t(R,G)\right)R+Z\left(t(R,G)\right)G+Q\left(t(R,G)\right)\,. \tag{334}\] Finite-time Singularities We consider, once again, the following expression of the Hubble parameter in (198) [625]. From Eq. (198), one can express the scale factor as \[a(t)=\bar{a}\exp\left(g(t)\right)\,, \tag{335}\] where \(\bar{a}\) is a constant and \(g(t)\) is some differentiable function of \(t\) satisfying \(\dot{g}(t)=H(t)\). Using the conservation law and Eq. (330), we arrive at the following second order differential equation \[P^{\prime\prime}(t)+4\dot{g}^{2}(t)Z^{\prime\prime}(t)-\dot{g}(t)P^{\prime}(t) +(8\dot{g}\ddot{\bar{g}}-4\dot{g}^{3}(t))Z^{\prime}(t)+2\ddot{g}(t)P(t)=0\,, \tag{336}\] where we have implemented the expression of the scale factor as given in Eq. (335). Now, from Eq. (330), one can derive \(Q(t)\) as \[Q(t)=-24\dot{g}^{3}(t)Z^{\prime}(t)-6\dot{g}^{2}(t)P(t)-6\dot{g}(t)P^{\prime}( t)\,. \tag{337}\] If \(P(t)\neq 0\), then \(F(R,G)\) can be written as \[F(R,G)=R\;\psi(R,G)+f(R,G)\,, \tag{338}\] where \(\psi(R,G)\neq 0\) and \(f(R,G)\) is any generic function of \(R\) and \(G\). Now, from Eqs. 
(328) and (329), we obtain, \[\rho_{\rm eff}=-\frac{1}{2\kappa^{2}g(R,G)}\bigg{[}24H^{3}\dot{F}^{\prime}_{G }+6H^{2}\left(R\frac{d\psi(R,G)}{dR}+\frac{df(R,G)}{dR}\right)+6H\dot{F}^{ \prime}_{R}+(F-RF^{\prime}_{R}-GF^{\prime}_{G})\bigg{]}, \tag{339}\] and \[p_{\rm eff}= \,\frac{1}{2\kappa^{2}g(R,G)}\bigg{[}8H^{2}\ddot{F}^{\prime}_{G} +2\ddot{F}^{\prime}_{R}+4H\dot{F}^{\prime}_{R}+16H\dot{F}^{\prime}_{G}(\dot{H} +H^{2})\] \[+\left(R\frac{d\psi(R,G)}{dR}+\frac{df(R,G)}{dR}\right)(4\dot{H} +6H^{2})+F-RF^{\prime}_{R}-GF^{\prime}_{G}\bigg{]}\,, \tag{340}\] and similar to the earlier Section IV.6, the function \(G\left(H,\dot{H},\ddot{H},...\right)\) can be written as \[G\left(H,\dot{H},\ddot{H},\ddot{H},...\right)=-\frac{1}{\kappa^{2}}\left[2 \dot{H}+3(1+w)H^{2}\right], \tag{341}\] where more explicitly, \[G(H,\dot{H},\ddot{H},...)= p_{\rm eff}-w\rho_{\rm eff}\] \[= \,\frac{1}{2\kappa^{2}\psi(R,G)}\bigg{[}(1+w)(F-RF^{\prime}_{R}- GF^{\prime}_{G})+\left(R\frac{d\psi(R,G)}{dR}+\frac{df(R,G)}{dR}\right)\left(6H^{2 }(1+w)+4\dot{H}\right)\] \[+H\dot{F}^{\prime}_{R}(4+6w)+8H\dot{F}^{\prime}_{G}\left(2\dot{H }+H^{2}(2+3w)\right)+2\ddot{F}^{\prime}_{R}+8H^{2}\ddot{F}^{\prime}_{G}\bigg{]}\,, \tag{342}\] where \(w\) is the EoS parameter of the matter. It is important to note that the use of the above equation (342) demands \(\psi(R,G)\neq 0\) on the solution. Let us now explicitly describe the finite-time singularities allowed in various \(F(R,G)\)-gravity models. #### iv.2.1 (i) Big Rip singularity Let us first discuss the appearance of the Big Rip singularity in this modified gravity model. We consider the Hubble rate of Eq. (198) with \(\beta=1\) and \(H_{s}=0\) which gives the Hubble rate in (59) and consequently, one can derive \(R\) and \(G\) for Eq. (59) as \[R=\frac{6h_{s}}{(t_{s}-t)^{2}}(2h_{s}+1)\,,\quad G=\frac{24h_{s}^{3}}{(t_{s}- t)^{4}}(1+h_{s})\,. \tag{343}\] A very trivial solution of Eq. (336) can be given by \[P(t)=\alpha(t_{s}-t)^{z}\,,\quad Z(t)=\delta(t_{s}-t)^{x}\,, \tag{344}\] where \(\alpha\) and \(\delta\) be any real numbers; \(x=3-h_{s}\), and \(z\) can be found from \[z_{\pm}=\frac{1-h_{s}\pm\sqrt{h_{s}^{2}-10h_{s}+1}}{2}\,. \tag{345}\] Therefore, the most general solution of \(P(t)\) can be given as \[P(t)=\alpha_{1}(t_{s}-t)^{z_{+}}+\alpha_{2}(t_{s}-t)^{z_{-}}\,, \tag{346}\] where \(\alpha_{1}\) and \(\alpha_{2}\) are any real numbers. Now, from Eq. (337), one can derive \[Q(t)=\frac{24h_{s}^{3}\delta(3-h_{s})}{(t_{s}-t)^{h_{s}+1}}+\frac{6h_{s}\alpha _{1}(z_{+}-h_{s})}{(t_{s}-t)^{2-z_{+}}}+\frac{6h_{s}\alpha_{2}(z_{-}-h_{s})}{ (t_{s}-t)^{2-z_{-}}}\,. \tag{347}\] Now for the condition \(0<h_{s}<5-2\sqrt{6}\) or \(h_{s}>2+\sqrt{6}\), the solution of \(F(R,G)\) is given by \[F(R,G)=\bar{\alpha}_{1}R^{1-\frac{z_{+}}{2}}+\bar{\alpha}_{2}R^{1-\frac{z_{-} }{2}}+\bar{\delta}G^{\frac{h_{s}+1}{4}}\,, \tag{348}\] where some factors have been absorbed into the constants. Note also that \(z_{\pm}\neq 0\), otherwise \(h_{s}\) vanishes which is a contradiction. Another solution of Eq. (336) can be given by \[P(t)=\frac{\alpha}{(t_{s}-t)^{z}}\,,\quad Z(t)=\frac{\delta}{(t_{s}-t)^{x}}\,, \tag{349}\] where \(\delta\), \(x\) are real numbers; \(z=x+2\) and \(\alpha\) is given by \[\alpha=\frac{4h_{s}^{2}\delta x(h_{s}-x-3)}{x^{2}+(5-h_{s})x+6}\,. \tag{350}\] From Eq. (337), one finds \[Q(t)=-\frac{6h_{s}}{(t_{s}-t)^{x+4}}\left[4h_{s}^{2}\delta x+\alpha(x+2+h_{s} )\right]\,. \tag{351}\] The solution of the Eq. 
(333) is given by \[t=t(R,G)=t_{s}-\left[\begin{array}{c}\frac{-\alpha(x+2)R\pm \sqrt{\alpha^{2}(x+2)^{2}R^{2}+24h_{s}\big{(}4h_{s}^{2}\delta x+\alpha(x+2+h_{ s})\big{)}(x+4)\delta xG}}{2\delta xG}\end{array}\right]^{1/2}\,, \tag{352}\] where \(x\neq 0\) and \(\delta\neq 0\). In order to get real solutions, one has to ensure that first of all, we should have \(\Delta=\alpha^{2}(x+2)^{2}R^{2}+24h_{s}\big{(}4h_{s}^{2}\delta x+\alpha(x+2+h_ {s})\big{)}(x+4)\geq 0\) in Eq. (352) and secondly, the entire term inside the third trace of Eq. (352) has to be positive. For \(h_{s}>0\), the principal cases with the real solutions of (352) are the following: * Case I: For \(x>0\), \(\delta>0\), \(1+x\leq h_{s}<x+5+\frac{6}{x}\), we need to use the sign \(+\) in the r.h.s. of Eq. (352). * Case II: For \(-\frac{3}{2}\leq x<0\), \(\delta<0\), \(h_{s}\geq x+1\), we need to use the sign \(+\) in the r.h.s. of Eq. (352). * Case III: For \(-4<x<-\frac{3}{2}\), \(\delta<0\), \(h_{s}>x+5+\frac{6}{x}\), we must use the sign \(+\) in the r.h.s. of Eq. (352). * Case IV: For \(x>0\), \(\delta<0\), \(x+5+\frac{6}{x}>h_{s}\geq 1+x\), we need to use the sign \(-\) in the r.h.s. of Eq. (352). * Case V: For \(-\frac{3}{2}\leq x<0\), \(\delta>0\), \(h_{s}\geq x+1\), we must use the sign \(-\) in the r.h.s. of Eq. (352). * Case VI: For \(-4<x<-\frac{3}{2}\), \(\delta>0\), \(h_{s}>x+5+\frac{6}{x}\), we need to use the sign \(-\) in the right hand side of Eq. (352). * Case VII: For \(x=-4\), \(\delta>0\), we must use the sign \(-\) in the r.h.s. of Eq. (352). * Case VIII: For \(x=-4\), \(\delta<0\), we must use the sign \(+\) in the r.h.s. of Eq. (352). The solution of \(F(R,G)\) is then given by \[F(R,G)=\left(\frac{\alpha}{(t_{s}-t(R,G))^{x+2}}\right)R+\left(\frac{\delta}{(t _{s}-t(R,G))^{x}}\right)G-\left(\frac{6h_{s}}{(t_{s}-t(R,G))^{x+4}}\right) \left[4h^{2}\delta x+\alpha(x+2+h_{s})\right], \tag{353}\] where \(t(R,G)\) is given by Eq. (352). The expression for \(F(R,G)\) in Eq. (353) represents an exact solution of the equation of motion in Eqs. (328) and (329) allowing the Big Rip singularity. Let us show some specific examples of \(F(R,G)\)-gravity models allowing the Big Rip singularity obtained for some specific values of \(\alpha\) and \(x\). * For \(\alpha=1\) and \(x=-2\), we find \[F(R,G)=R+\left(\frac{\sqrt{6}\sqrt{h_{s}(1+h_{s})}}{(1-h_{s})}\right)\sqrt{G} \,,\quad h\neq 1\,.\] (354) * If \(\alpha=0\) and \(x=h_{s}-3\) (this case corresponds to Case (I) \(-\) Case (VI), presented above), then one finds \[F(R,G)=\delta\;G^{\frac{h_{s}+1}{4}}\,,\quad\delta\neq 0\,,\] (355) which is actually equivalent to Eq. (348) for \(\alpha_{1}=\alpha_{2}=0\). * If \(x=-4\), then the model reduces to \[F(R,G)=\frac{16h_{s}^{4}\delta}{(1+2h_{s}^{2})^{2}}\left[(9+21h_{s}+6h_{s}^{2} )-(1+h_{s})^{2}\frac{R^{2}}{G}\right]\,,\quad\delta\neq 0\,.\] (356) Thus, we see that for large values of \(R\) and \(G\), \(F(R,G)\sim\pm\alpha\mp\delta(R^{2}/G)\) with \(\alpha>0\) and \(\delta>0\), then the Big Rip singularity could appear. * For \(x=h_{s}-1\), the model becomes (by absorbing some constants) \[F(R,G)=\delta\left(\frac{R}{G}\right)^{\frac{1-h_{s}}{2}}G\,,\quad\delta\neq 0 \,,\quad h_{s}\neq 1\,.\] (357) Thus, we see that for large values of \(R\) and \(G\), \(F(R,G)\sim\delta G^{\gamma}/R^{\gamma-1}\) with \(\delta\neq 0\) and \(\frac{1}{2}<\gamma<1\) or \(1<\gamma<+\infty\), and the Big Rip singularity could appear. We close the discussion with a general model that allows Big Rip singularity. 
One can verify that the model \[F(R,G)=\gamma\frac{G^{m}}{R^{n}}\,, \tag{358}\] with \(\gamma\) being any real number, is a solution of Eqs. (328) and (329) in the case of the Big Rip singularity for some value of \(h_{s}\). In general, it is possible to obtain solutions for \(h_{s}>0\) if \(m>0\), \(n>0\) and \(m>n\). For example, the case \(n=2\) and \(m=3\) realizes the Big Rip singularity in \(h_{s}=5\); the case \(n=1\) and \(m=3\) realizes the big rip singularity in \(h_{s}=4+\sqrt{19}\) and so on. This is a generalization of Eq. (357). It is important here to mention that for \(m=-1\) and \(n=-2\), we do not recover a physical scenario because in this case we obtain \(h_{s}=-3\). However, this case (i.e., \(m=-1\) and \(n=-2\)), has a similarity with Eq. (356) allowing the Big Rip singularity. Finally, for the case with \(m=0\) or \(n=0\), we recover the model in Eq. (348). _(ii) Other types of singularities_ Here we discuss other types of singularities that may appear in various \(F(R,G)\)-gravity models. To begin with we consider the following expansion of the Hubble rate in (99). For the above choice of the Hubble rate, an exact solution of the differential Eq. (336) can be found as \[P(t)=-\lambda(4h_{s}^{2})(t_{s}-t)\,,\quad Z(t)=\lambda(t_{s}-t)^{2\beta+1}\,, \tag{359}\] where \(\lambda\) is a constant. The form of \(Q(t)\) from Eq. (337) can now be derived as \[Q(t)=\frac{24h_{s}^{4}\lambda}{(t_{s}-t)^{2\beta-1}}+\frac{48h_{s}^{3}\beta}{( t_{s}-t)^{\beta}}\,. \tag{360}\] Now we examine the cases when \(\beta>1\) or \(\beta<1\). * For \(\beta>1\), one may obtain the asymptotic real solution of Eq. (333): \[t=t(R,G)=t_{s}-2^{1/2\beta}\left[\frac{h_{s}^{2}R+\sqrt{h_{s}^{4}R^{2}+6h_{s}^{4}( 4\beta^{2}-1)G}}{(1+2\beta)G}\right]^{1/2\beta}\,.\] (361) Thus, the mathematical form of \(F(R,G)\) can now be expressed as \[F(R,G)=-4h_{s}^{2}\lambda(t_{s}-t(R,G))R+\lambda(t_{s}-t(R,G))^{1+2\beta}G+24h_ {s}^{4}\lambda(t_{s}-t(R,G))^{1-2\beta}\,,\quad\beta>1\,.\] (362) This is an asymptotic solution of Eq. (341) for \(\beta>1\), when \[-\frac{1}{\kappa^{2}}\left[2\dot{H}+3(1+w)H^{2}\right]\sim-\frac{3(1+w)h_{s}^ {2}}{\kappa^{2}}(t_{s}-t)^{-2\beta}\,.\] (363) Interestingly, for \(\beta\gg 1\), the mathematical form of \(F(R,G)\) is given by \[F(R,G)\simeq\lambda\left(\frac{\alpha G}{R+\sqrt{R^{2}+\gamma G}}-R\right)\,, \quad\alpha>0\,,\quad\gamma>0\,,\quad\lambda\neq 0\,,\] (364) and this is the asymptotic behavior of a \(F(R,G)\) model where a "strong" Type I singularity (\(\beta\gg 1\)) may appear. We consider some other explicit cases where Type I singularity appears as follows. Taking \[F(R,G)=\gamma\frac{G^{m}}{R^{n}}\,,\] (365) and using Eqs. (328) and (329), one can verify that the function \(G\left(H,\dot{H},\ddot{H},...\right)\) in Eq. (342) becomes, \[G(H,\ddot{H},..)\simeq-\frac{3h_{s}^{2}(2m-n-1)(1+w)}{\kappa^{2}(t_{s}-t)^{2 \beta}}\,,\] (366) which under the restriction \(2m-n-1>0\), represents an asymptotic solution of Eq. (341) for \(\beta>1\). Thus, we see that for the model \(F(R,G)\simeq\gamma G^{m}/R^{n}\) with \(m>(n+1)/2\), the Type I singularity may appear. This observation has some important consequences in the context of other modified gravity models because one can see that the modified gravity models where either \(F(R)=R^{n}\) with \(n>1\) or \(F(G)=G^{m}\) with \(m>1/2\) can allow singularities. One can also find some other models allowing the Type I singularities and we can construct these models following section IV.6 as follows. 
The Type I singularities actually correspond to the asymptotic limits for \(R\) and \(G\) \[R\sim 12H^{2}\,,\quad G\sim 24H^{4}\,.\] (367) Since \(R\) and \(H\) in the asymptotic limits are nothing but the functions of the Hubble parameter only, therefore, one can write that, \[\lim_{t\to t_{s}}24\left(\frac{R}{12}\right)^{2}=\lim_{t\to t_{s}}G\,.\] (368) Now, if we substitute \(G\) for \(R\) in Eq. (317) considering Eq. (368), then we obtain a zero function5. Thus, if we instead substitute \(G\) for \(G/R\), we therefore obtain the following model \[F(R,G)=R-\frac{6G}{R}\,,\] (369) which is an asymptotic solution of Eq. (341) such as Eq. (363). Thus, we see that Type I singularity appears in the model \(F(R,G)\sim R-\alpha(G/R)\) with \(\alpha>0\). Footnote 5: The reason for obtaining the zero function is that, in this case Eq. (317) becomes zero on the singularity solution. * For the Hubble rate in Eq. (99) with \(\beta<1\), it is not possible to express \(G\) and \(R\) as the functions of the same variable (i.e., \(H\) or the same combination of \(H\) and \(\dot{H}\)). However, if we examine the asymptotic behavior of \(G\) and \(R\), we have \[R\simeq\frac{6h_{s}\beta}{(t_{s}-t)^{\beta+1}}\,,\quad G\simeq\frac{24h_{s}^{3 }\beta}{(t_{s}-t)^{3\beta+1}}\,,\] (370) and \[\frac{G}{R}\sim(t_{s}-t)^{-2\beta}\sim G^{\frac{2\beta}{3\beta+1}}\] (371) If we replace \(G\) by \(G/R\) (as given in Eq. (371)) in Eq. (320), then we find that the asymptotic time dependence in Eq. (341) for \(\beta<1\) is exactly same as follows: \[-\frac{1}{\kappa^{2}}\left[2\dot{H}+3(1+w)H^{2}\right]\sim\frac{\alpha}{(t_{s} -t)^{\beta+1}}+\frac{\gamma}{(t_{s}-t)^{2\beta}}\,.\] (372) With this consideration, one can derive a specific \(F(R,G)\)-gravity model (by setting some parameters) from Eq. (320) as \[F(R,G)=R+\frac{3(3\beta+1)}{2(\beta+1)}\frac{G}{R}\,,\] (373) which may encounter with other types of singularities. So, in the model \(F(R,G)\sim R+\bar{\alpha}(G/R)\) with \(\bar{\alpha}>0\), Type II, Type III and Type IV singularities may appear. Then, by substituting \(G\) for \(R\) we get \[F(R,G)\simeq R+\bar{\delta}|R|^{\frac{2\beta}{1+\beta}}\,,\] (374) which is a well-know result in cosmology. Now we have the following observations: * In the model \(F(R\rightarrow\infty)\sim R+\bar{\delta}R^{\frac{2\beta}{1+\beta}}\), for \(0<\beta<1\) and \(\bar{\delta}>0\), a Type III singularity may appear. * In the model \(F(R\rightarrow-\infty)\sim R+\bar{\delta}|R|^{\frac{2\beta}{1+\beta}}\), for \(-1<\beta<0\) and \(\bar{\delta}>0\), a Type II singularity may appear. * In the model \(F(R\to 0^{-})\sim R+\bar{\delta}|R|^{\frac{2\beta}{1+\beta}}\), for \(\beta<-1\) and \(\beta\neq-n\) (being \(n\) a natural number) and \(\bar{\delta}<0\), a Type IV singularity may appear. ### \(F(t)\) Gravity One of the alternative gravitational theories beyond Einstein's GR is the Teleparallel Equivalent of GR (TEGR) where instead of the curvature scalar \(R\) defined by the Levi-Civita connection one uses the concept of torsion scalar \(T\) defined by the Weitzenbock connection [646]. The idea of this gravitational theory was originally introduced by Einstein in 1928 under the name "Fern-Parallelism" or "distant parallelism" or "teleparallelism" [647; 648]. It was found that the modified teleparallel gravity theory \(F(T)\) in which \(F\) is an arbitrary function of the torsion scalar \(T\), can explain the late-time accelerating expansion of the Universe for some suitable choices of \(F(T)\)[168; 184]. 
Moreover, it was also recognized that the \(F(T)\) models can also take an active role to describe the early inflationary phase [649; 650; 651; 239]. With such appealing consequences, the modified teleparallel gravitation models got massive attention in the astrophysical and cosmological community [652; 653; 243; 240; 241; 242; 243; 244; 245; 246; 247; 248; 249; 250; 251; 252; 253; 254; 255; 256]. We refer to two recent reviews on \(F(T)\) teleparallel gravity models [304; 305]. In Ref. [657; 658], however, it has been proved that superluminal modes appear in the \(F(T)\) gravity and therefore the \(F(T)\) gravity model is physically inconsistent. In this section, however, just for the illustration, we show the structure of the singularities in the framework of the \(F(T)\) gravity. In order to discuss the teleparallelism, we use the orthonormal tetrad components \(e_{A}(x^{\mu})\). We assume that the index \(A\) runs over 0, 1, 2, 3 for the tangent space at each point \(x^{\mu}\) of the manifold and \(e_{A}^{A}\) forms the tangent vector of the manifold. The relation between the metric \(g^{\mu\nu}\) and the tetrad components is given by \(g_{\mu\nu}=\eta_{AB}e_{\mu}^{A}e_{\nu}^{B}\) where \(\eta_{AB}=\text{diag}(-1,1,1,1)\). Compared to the General theory of Relativity where the torsionless Levi-Civita connection is used, in Teleparallelism the curvatureless Weitzenbock connection is used [659], hence, the gravitational field is described by the non-null torsion tensor \[T^{\rho}_{\ \mu\nu}\equiv e_{A}^{\rho}\left(\partial_{\mu}e_{\nu}^{A}- \partial_{\nu}e_{\mu}^{A}\right)\,, \tag{375}\] The torsion scalar \(T\), one of the main ingredients of the teleparallel gravity, is defined by [660; 203]: \[T\equiv S_{\rho}^{\;\;\mu\nu}T^{\rho}_{\;\;\mu\nu}\,, \tag{376}\] where \[S_{\rho}^{\;\;\mu\nu}\equiv\frac{1}{2}\left(K^{\mu\nu}_{\;\;\rho}+\delta^{\mu}_ {\rho}T^{\alpha\nu}_{\;\;\alpha}-\delta^{\nu}_{\rho}T^{\alpha\mu}_{\;\;\alpha} \right)\,, \tag{377}\] and \(K^{\mu\nu}_{\rho}\), the contorsion tensor is given by \[K^{\mu\nu}_{\;\;\;\rho}\equiv-\frac{1}{2}\left(T^{\mu\nu}_{\;\;\rho}-T^{\nu \mu}_{\;\;\;\rho}-T_{\rho}^{\;\;\;\mu\nu}\right)\,. \tag{378}\] The action of the modified teleparallel gravity as [184] \[S=\int d^{4}x|e|\left[\frac{F(T)}{2\kappa^{2}}+L_{\rm matter}\right]\,, \tag{379}\] \(|e|=\det\left(e^{A}_{\mu}\right)=\sqrt{-g}\). Now, varying the action in Eq. (379) with respect to the vierbein vector field \(e^{\mu}_{A}\) one can obtain the gravitational field equations [168] \[\frac{1}{e}\partial_{\mu}\left(eS_{A}^{\;\;\mu\nu}\right)F^{\prime}-e^{\lambda }_{A}T^{\rho}_{\;\;\mu\lambda}S_{\rho}^{\;\;\nu\mu}F^{\prime}+S_{A}^{\;\;\mu \nu}\partial_{\mu}TF^{\prime\prime}+\frac{1}{4}e^{\nu}_{A}F=\frac{\kappa^{2}} {2}e^{\rho}_{A}T^{(\rm M)\;\;\nu}_{\;\;\;\rho}\,, \tag{380}\] where prime denotes the differentiation with respect to the torsion scalar \(T\) and \(T^{(\rm M)\;\;\nu}_{\;\;\;\rho}\) is the energy-momentum tensor describing the entire matter sector. Now, let us assume that the background manifold is described well by the spatially flat FLRW Universe (3). Then, vierbein takes the form\(e^{A}_{\mu}=(1,a(t),a(t))\) where \(a(t)\) is the scale factor of the FLRW Universe. One can quickly derive the line element of the FLRW Universe (3). For this tetrad, using Eqs. (375), (377), and (378), one can derive the torsion scalar \(T=-6H^{2}\). 
In this flat FLRW background, one can write down the gravitational equations \[H^{2} = \frac{\kappa^{2}}{3}(\rho+\rho_{\rm GDE})=\frac{\kappa^{2}}{3} \rho_{\rm eff}, \tag{381}\] \[\dot{H} = -\frac{\kappa^{2}}{2}(\rho+p+\rho_{\rm GDE}+p_{\rm GDE})=-\frac{ \kappa^{2}}{2}(\rho_{\rm eff}+p_{\rm eff})\,, \tag{382}\] where \(\rho_{\rm eff}\) and \(p_{\rm eff}\) are the total energy density and total pressure of the Universe, respectively, \(\rho\) and \(p\) denote the energy density and pressure of all the perfect fluids comprising the entire matter sector where they satisfy the conservation law \(\dot{\rho}+3H(p+\rho)=0\). Further, \(\rho_{\rm GDE}\) and \(p_{\rm GDE}\) are given by \[\rho_{\rm GDE}=\frac{1}{2\kappa^{2}}\left(-T-F+2TF^{\prime}\right)\,,\quad p _{\rm GDE}=-\frac{1}{2\kappa^{2}}\left[\left(4-4F^{\prime}-8TF^{\prime\prime} \right)\dot{H}-T-F+2TF^{\prime}\right]\,. \tag{383}\] Notice that the effective fluid characterized by \(\rho_{\rm GDE}\) and \(p_{\rm GDE}\) satisfies the continuity equation \(\dot{\rho}_{\rm GDE}+3H(p_{\rm GDE}+\rho_{\rm GDE})=0\). As one can clearly notice that the choice of \(F(T)\) highly influences the gravitational equations. In the following we shall focus on the finite-time future singularities that may appear in the context of modified teleparallel gravity. We consider, as in the previous sections, the following Hubble rate in (59) and (198) [409] \[H \sim \frac{h_{\rm s}}{\left(t_{\rm s}-t\right)^{\beta}}\,,\quad{\rm for }\;\;\beta>0\,, \tag{384}\] \[H \sim H_{\rm s}+\frac{h_{\rm s}}{\left(t_{\rm s}-t\right)^{\beta}}\,, \quad{\rm for}\;\beta<-1\,,\;{\rm and}\;-1<\beta<0\,. \tag{385}\] One can clearly notice that as \(t\to t_{\rm s}\), finite-time singularities may appear depending on the nature of \(\beta\). This can be illustrated as follows: when \(t\to t_{\rm s}\), for \(\beta>0\), both \(H\sim h_{\rm s}\left(t_{\rm s}-t\right)^{-\beta}\) and \(\dot{H}\sim\beta h_{\rm s}\left(t_{\rm s}-t\right)^{-(\beta+1)}\) diverge to infinity; for \(-1<\beta<0\), \(H\) is finite, but \(\dot{H}\) becomes infinity; for \(\beta<-1\), but \(\beta\) is not any integer, both \(H\) and \(\dot{H}\) are finite, but the higher derivatives of \(H\) can diverge to infinity. From Eq. (384) one can derive the evolution of the scale factor as \[a \sim a_{\rm s}\exp\left[\frac{h_{\rm s}}{\beta-1}\left(t_{\rm s}-t \right)^{-\left(\beta-1\right)}\right]\,,\quad\text{for}\;\;0<\beta<1,\;\text{ and}\;\;\beta>1\,, \tag{386}\] \[a \sim a_{\rm c}\left(\frac{t_{\rm s}-t_{\rm c}}{t_{\rm s}-t}\right)^{h_{ \rm s}},\quad\text{for}\;\;\beta=1\,, \tag{387}\] where \(a_{\rm s}\) and \(a_{\rm c}\) are some positive constants. Now, from Eq. (386) one can see that, when \(t\to t_{\rm s}\), for \(\beta\geq 1\), \(a\to\infty\), whilst for \(\beta<0\) and \(0<\beta<1\), \(a\to a_{\rm s}\). Additionally, one can further notice that from \(\rho_{\rm eff}=3H^{2}/\kappa^{2}\) (Eq. (381)) and (384) that, for \(\beta>0\), \(H\to\infty\) and as a result, \(\rho_{\rm eff}=3H^{2}/\kappa^{2}\to\infty\), while on the other hand, for \(\beta<0\), \(H\) asymptotically becomes finite and \(\rho_{\rm eff}\) asymptotically approaches to a finite constant value \(\rho_{\rm s}\). Furthermore, from \(\dot{H}\sim\beta h_{\rm s}\left(t_{\rm s}-t\right)^{-\left(\beta+1\right)}\) and Eq. (385) one can see that for \(\beta>-1\), \(\dot{H}\to\infty\) and as a consequence, \(P_{\rm eff}=-\left(2\dot{H}+3H^{2}\right)/\kappa^{2}\to\infty\). 
Thus, we see that for the Hubble parameter described in Eqs. (384) and (385), the \(F(T)\) gravity models can encounter finite-time singularities for the following values of \(\beta\):

* For \(\beta\geq 1\), Type I ("Big Rip") singularity appears.
* For \(-1<\beta<0\), Type II ("sudden") singularity appears.
* For \(0<\beta<1\), Type III singularity appears.
* For \(\beta<-1\), with \(\beta\) not an integer, Type IV singularity appears.

In TABLE 4, we summarize the appearance of finite-time singularities out of the expressions of the Hubble parameter as given in Eqs. (384) and (385). Thus, one can see that the above expressions of the Hubble parameter can be used to reconstruct the \(F(T)\) gravity models which may allow the finite-time singularities. We focus on the finite-time singularities when the geometrical sector dominates over the matter sector, i.e., when \(\rho_{\rm eff}\simeq\rho_{\rm GDE}\) and \(p_{\rm eff}\simeq p_{\rm GDE}\), so that \(w_{\rm eff}=p_{\rm eff}/\rho_{\rm eff}\simeq p_{\rm GDE}/\rho_{\rm GDE}\). Following this, one can have \[w_{\rm eff}\simeq w_{\rm GDE}=-\frac{\left(4-4F^{\prime}-8TF^{\prime\prime} \right)\dot{H}-T-F+2TF^{\prime}}{-T-F+2TF^{\prime}}\,. \tag{388}\] On the other hand, from Eq. (383) one can write \[p_{\rm GDE}=-\rho_{\rm GDE}+I(H,\dot{H})\,, \tag{389}\] where \(I(H,\dot{H})=-\frac{1}{\kappa^{2}}\left[2-2F^{\prime}-4TF^{\prime\prime} \right]\dot{H}\). From Eq. (382) one can quickly write \(p_{\rm GDE}=-\rho_{\rm GDE}-2\dot{H}/\kappa^{2}\), where we have used the fact that the geometrical sector dominates over the matter one. Now, comparing this with (389), we obtain the following differential equation \begin{table} \begin{tabular}{c c c c} Region of \(\beta\) (\(\neq\) 0, \(-1\)) & Type of the Singularity & \(H\) (\(t\to t_{\rm s}\)) & \(\dot{H}\) (\(t\to t_{\rm s}\)) \\ \hline \(\beta\geq 1\) & Type I & \(H\to\infty\) & \(\dot{H}\to\infty\) \\ \(-1<\beta<0\) & Type II & \(H\to H_{\rm s}\) & \(\dot{H}\to\infty\) \\ \(0<\beta<1\) & Type III & \(H\to\infty\) & \(\dot{H}\to\infty\) \\ \(\beta<-1\), with \(\beta\) not an integer & Type IV & \(H\to H_{\rm s}\) & \(\dot{H}\to 0\) \\ & & & (and higher order \\ & & & derivatives of \(H\) diverge) \\ \hline \hline \end{tabular} \end{table} Table 4: The table summarizes the emergence of the finite-time future singularities for different values of \(\beta\) in the expressions of the Hubble parameter given in Eqs. (384) and (385), along with the behavior of \(H\) and \(\dot{H}\) in the limit \(t\to t_{\rm s}\). Table courtesy Ref. [409]. \[\dot{H}+\frac{\kappa^{2}}{2}I(H,\dot{H})=0\quad\Longrightarrow\quad\dot{H}\left[F^{ \prime}+2TF^{\prime\prime}\right]=0\,. \tag{390}\] In the following we shall illustrate the appearance of the finite-time singularities in a specific \(F(T)\) gravity model having the power-law form: \[F(T)=AT^{\alpha}\,, \tag{391}\] where \(A\) (\(\neq 0\)) and \(\alpha\) (\(\neq 0\)) are constants. With this choice of \(F(T)\), Eq. (390) becomes \[\left(2\alpha-1\right)H^{2(\alpha-1)}=0\,. \tag{392}\] Notice that for both Eqs. (384) and (385), \(\dot{H}\neq 0\). In the limit \(t\to t_{\rm s}\), Eq. (392) needs to be satisfied.
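For the power-law ansatz (391), the bracket appearing in Eq. (390) can be evaluated explicitly; the following sketch (our own check) shows that \(F^{\prime}+2TF^{\prime\prime}=A\alpha(2\alpha-1)T^{\alpha-1}\), which, together with \(T=-6H^{2}\), is precisely the origin of Eq. (392).

```python
import sympy as sp

T, A, alpha = sp.symbols('T A alpha', nonzero=True)
F = A*T**alpha                                    # Eq. (391)
bracket = sp.diff(F, T) + 2*T*sp.diff(F, T, 2)    # the factor multiplying Hdot in Eq. (390)
print(sp.simplify(bracket))    # -> A*alpha*(2*alpha - 1)*T**(alpha - 1)
```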
Now, from Eqs. (384) and (385), one can see that for \(\beta>0\) [here two different cases may arise: (i) \(\beta\geq 1\) (i.e., the Type I singularity) and (ii) \(0<\beta<1\) (the Type III singularity)], \(\alpha<1\), so that Eq. (392) is asymptotically satisfied. While on the other hand, for \(\beta<0\) [here one can similarly explore two different cases: \(-1<\beta<0\) (Type II singularity) and \(\beta<-1\) (Type IV singularity)], \(\alpha=1/2\), in which case Eq. (392) is always satisfied. At this point it is very crucial to mention that the conditions (i) for \(\beta>0\) (Type I singularity corresponds to \(\beta\geq 1\) and Type III singularity corresponds to \(0<\beta<1\)), \(\alpha<1\), and (ii) for \(\beta<0\) (Type II singularity corresponds to \(-1<\beta<0\) and Type IV singularity corresponds to \(\beta<-1\)), \(\alpha=1/2\), are "necessary conditions" for the appearance of the finite-time future singularities, but they are not sufficient conditions. Indeed, if \(\alpha<1\), then the Type I singularity with \(\beta\geq 1\) appears rather than the Type III singularity with \(0<\beta<1\), since in the limit \(t\to t_{\rm s}\), both \(H\) and \(\dot{H}\) with \(\beta\geq 1\) diverge more rapidly than those with \(0<\beta<1\). This is because of the absolute value of the power \(\beta\): the absolute value of \(\beta\) for the Type I singularity (\(\beta\geq 1\)) is larger than that for the Type III singularity (\(0<\beta<1\)). As a consequence, the Type I singularity is realized faster than the Type III singularity, and this results in the appearance of the Type I singularity. In a similar fashion, if \(\alpha=1/2\), then the Type IV singularity with \(\beta<-1\) appears rather than the Type II singularity with \(-1<\beta<0\), because again in the limit \(t\to t_{\rm s}\), \(H\to H_{\rm s}\) and \(\dot{H}\to 0\) with \(\beta<-1\) are realized more quickly than \(H\to H_{\rm s}\) and \(\dot{H}\to\infty\) with \(-1<\beta<0\). This is again because of the absolute value of the power \(\beta\): the absolute value of \(\beta\) for the Type IV singularity (\(\beta<-1\)) is larger than that for the Type II singularity (\(-1<\beta<0\)). Hence, the Type IV singularity is realized faster than the Type II singularity, and the Type IV singularity appears as a result. Finally, we note that the Type V ("\(w\)") singularity can also appear in this model. In the Type V singularity, a specific choice of the scale factor can be considered [372] \[a(t)= a_{\rm s}\left(1-\frac{3\sigma}{2}\left\{\frac{n-1}{n-\left[2/\left(3 \sigma\right)\right]}\right\}^{n-1}\right)^{-1}+\frac{1-2/\left(3\sigma\right) }{n-2/\left(3\sigma\right)}\times na_{\rm s}\left(1-\frac{2}{3\sigma}\left\{ \frac{n-\left[2/\left(3\sigma\right)\right]}{n-1}\right\}^{n-1}\right)^{-1} \left(\frac{t}{t_{\rm s}}\right)^{2/\left(3\sigma\right)}\] \[+a_{\rm s}\left(\frac{3\sigma}{2}\left\{\frac{n-1}{n-\left[2/ \left(3\sigma\right)\right]}\right\}^{n-1}-1\right)^{-1}\left[1-\frac{1-2/ \left(3\sigma\right)}{n-2/\left(3\sigma\right)}\frac{t}{t_{\rm s}}\right]^{n }\,, \tag{393}\] where \(n\) and \(\sigma\) are arbitrary real numbers. In the limit \(t\to t_{\rm s}\), we have \(H(t\to t_{\rm s})\to 0\) and \(\dot{H}(t\to t_{\rm s})\to 0\), while the effective EoS \(w_{\rm eff}=\left(1/3\right)\left(2q-1\right)\to\infty\), where \(q\equiv-\ddot{a}a/\dot{a}^{2}\) is the deceleration parameter of the Universe. Thus, for the power-law model (391) with \(\alpha>1\), Eq. (392) can be satisfied asymptotically. Hence, the Type V ("\(w\)") singularity can appear in this model.
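The classification above is easy to reproduce; the short script below (ours, with arbitrary sample values of \(\beta\)) evaluates the limits of \(H\) and \(\dot{H}\) from Eqs. (384)-(385) and recovers the pattern of TABLE 4.

```python
import sympy as sp

t, ts, hs, Hs = sp.symbols('t t_s h_s H_s', positive=True)
samples = [(sp.Integer(2), 'Type I'), (sp.Rational(-1, 2), 'Type II'),
           (sp.Rational(1, 2), 'Type III'), (sp.Rational(-5, 2), 'Type IV')]
for beta, label in samples:
    H = (Hs if beta < 0 else 0) + hs*(ts - t)**(-beta)   # Eqs. (384)/(385)
    print(label, '| H ->', sp.limit(H, t, ts, '-'),
          '| Hdot ->', sp.limit(sp.diff(H, t), t, ts, '-'))
# Type IV: H and Hdot stay finite, but higher derivatives blow up, e.g.
print(sp.limit(sp.diff(Hs + hs*(ts - t)**sp.Rational(5, 2), t, 3), t, ts, '-'))  # -> -oo
```

Note that sympy reports signed infinities, whereas TABLE 4 only records the divergence \(|\dot{H}|\to\infty\).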
### Non-local Gravity

Along with the known modified gravitational theories presented in the earlier sections, an addition to the class of modified gravity theories is the non-local gravitational theory, i.e., non-local additions to GR [661]. These non-local corrections naturally arise due to quantum effects, and with such new corrections it is also possible to explain the accelerating expansion of the Universe [661]. In this section we shall describe how the finite-time future singularities may appear in this gravitational theory. The action of the non-local gravity is given by [662] \[S=\int d^{4}x\sqrt{-g}\Bigg{[}\frac{1}{2\kappa^{2}}\left\{R\left(1+f(\Box^{- 1}R)\right)-2\Lambda\right\}+L_{\rm matter}\left(Q;g\right)\Bigg{]}\,, \tag{394}\] where \(R\) is the Ricci scalar, \(g\) is the determinant of the metric tensor \(g_{\mu\nu}\), \(\Box\equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\), with \(\nabla_{\mu}\) the covariant derivative, is the covariant d'Alembertian for a scalar field, \(f\) is an arbitrary function of \(\Box^{-1}R\), \(\Lambda\) is a cosmological constant, and \(L_{\rm matter}\left(Q;g\right)\) denotes the matter Lagrangian, in which \(Q\) stands for the matter fields. Introducing two scalar fields \(\eta\) and \(\xi\), we can rewrite the above action (394) as \[S =\int d^{4}x\sqrt{-g}\Bigg{[}\frac{1}{2\kappa^{2}}\Bigg{\{}R\left( 1+f(\eta)\right)+\xi\left(\Box\eta-R\right)-2\Lambda\Bigg{\}}+L_{\rm matter} \Bigg{]}\] \[=\int d^{4}x\sqrt{-g}\Bigg{[}\frac{1}{2\kappa^{2}}\Bigg{\{}R\left( 1+f(\eta)\right)-\partial_{\mu}\xi\partial^{\mu}\eta-\xi R-2\Lambda\Bigg{\}} +L_{\rm matter}\Bigg{]}\,. \tag{395}\] In the context of a spatially flat FLRW metric (3), one can write down the gravitational field equations as \[0 = -3H^{2}\left(1+f(\eta)-\xi\right)+\frac{1}{2}\dot{\xi}\dot{\eta}- 3H\left(f^{\prime}(\eta)\dot{\eta}-\dot{\xi}\right)+\Lambda+\kappa^{2}\rho\,, \tag{396}\] \[0 = \left(2\dot{H}+3H^{2}\right)\left(1+f(\eta)-\xi\right)+\frac{1} {2}\dot{\xi}\dot{\eta}+\left(\frac{d^{2}}{dt^{2}}+2H\frac{d}{dt}\right)\left(f (\eta)-\xi\right)-\Lambda+\kappa^{2}p\,, \tag{397}\] where we have considered that the scalar fields \(\eta\) and \(\xi\) depend only on time [662]. Note that here \(\rho\) and \(p\) are, respectively, the energy density and pressure of the matter sector. One can further write down the equations of motion for the scalar fields \(\eta\) and \(\xi\) as \[0 = \ddot{\eta}+3H\dot{\eta}+6\dot{H}+12H^{2}\,, \tag{398}\] \[0 = \ddot{\xi}+3H\dot{\xi}-\left(6\dot{H}+12H^{2}\right)f^{\prime}( \eta)\,, \tag{399}\] where we have used \(R=6\dot{H}+12H^{2}\). Let us now focus on the emergence of the finite-time singularities in this gravitational theory.

#### vi.2.1 Finite-time Singularities

We start with the expression of the Hubble parameter in (59) or (99) (see [662]). The scale factor is also given in (59). Now, with the use of \(\ddot{\eta}+3H\dot{\eta}=a^{-3}d\left(a^{3}\dot{\eta}\right)/dt\) and Eq. (398), one can express \(\eta\) as \[\eta=-\int^{t}\frac{1}{a^{3}}\left(\int^{\bar{t}}Ra^{3}d\bar{t}\right)dt\,. \tag{400}\] We note that in the limit \(t\to t_{\rm s}\): for \(\beta>1\), \(\dot{H}\ll H^{2}\), and hence, \(R\sim 12H^{2}\), whilst for \(-1<\beta<0\) and \(0<\beta<1\), \(\dot{H}\gg H^{2}\), and hence, \(R\sim 6\dot{H}\). By applying these relations to Eq.
(400) and then taking the leading term in terms of \((t_{\rm s}-t)\), one can express \(\eta\) for different regions of \(\beta\) as follows [662]: \[\eta \sim -\frac{4h_{\rm s}}{\beta-1}\left(t_{\rm s}-t\right)^{-(\beta-1)} +\eta_{\rm c}\quad(\beta>1)\,, \tag{401}\] \[\eta \sim -\frac{6h_{\rm s}}{\beta-1}\left(t_{\rm s}-t\right)^{-(\beta-1)} +\eta_{\rm c}\quad(-1<\beta<0\,,\;0<\beta<1)\,, \tag{402}\] where \(\eta_{\rm c}\) is an integration constant. As discussed in [662], for \(\eta_{\rm c}=0\), the finite-time future singularities depicted by the Hubble parameter in Eq. (59) or (99) do not appear; however, for \(\eta_{\rm c}\neq 0\), finite-time future singularities can occur. For \(\eta_{\rm c}\neq 0\), note that, if the power in terms of \((t_{\rm s}-t)\) is negative, i.e. \(-\left(\beta-1\right)<0\), then the first term proportional to \((t_{\rm s}-t)^{-(\beta-1)}\) is the leading one. On the other hand, if the power in terms of \((t_{\rm s}-t)\) is positive, i.e. \(-\left(\beta-1\right)>0\), then the second, constant term becomes the leading one. Thus, for \(\beta>1\), the first term is the leading one, i.e., \(\eta\propto(t_{\rm s}-t)^{-(\beta-1)}\), while for \(-1<\beta<0\) and \(0<\beta<1\), the second term is the leading one, i.e., \(\eta\sim\eta_{\rm c}\). For \(\beta=1\) in Eq. (59) or (99), it follows from Eq. (398) that \[\eta\sim 6h_{\rm s}\left[(1+2h_{\rm s})\,/\left(1+3h_{\rm s}\right)\right]\ln \left(t_{\rm s}-t\right)+\eta_{\rm c}\,. \tag{403}\] We continue by taking a particular form of \(f(\eta)\) as [662] \[f(\eta)=f_{\rm s}\eta^{\sigma}\;, \tag{404}\] where \(f_{\rm s}\) and \(\sigma\) are non-zero constants. Now, using \(\ddot{\xi}+3H\dot{\xi}=a^{-3}d\left(a^{3}\dot{\xi}\right)/dt\) and Eq. (399), \(\xi\) can be expressed as \[\xi=\int^{t}\frac{1}{a^{3}}\left(\int^{t}\frac{df(\eta)}{d\eta}Ra^{3}dt\right) dt\,. \tag{405}\] Now we apply the conditions \(R\sim 12H^{2}\) (for \(\beta>1\)) and \(R\sim 6\dot{H}\) (for \(\beta<1\)) to Eq. (405) and, taking into account the leading term in terms of \(\left(t_{\rm s}-t\right)\), we obtain [662] \[\xi\sim-f_{\rm s}\left(-\frac{4h_{\rm s}}{\beta-1}\right)^{\sigma }\left(t_{\rm s}-t\right)^{-(\beta-1)\sigma}+\xi_{\rm c}\;,\quad\left(\beta>1 \right), \tag{406}\] \[\xi\sim\frac{6f_{\rm s}h_{\rm s}\sigma\eta_{\rm c}^{\sigma-1}}{ \beta-1}\left(t_{\rm s}-t\right)^{-(\beta-1)}+\xi_{\rm c}\;,\quad\left(-1< \beta<0\,,\;0<\beta<1\right), \tag{407}\] where \(\xi_{\rm c}\) is the constant of integration. Now, if the power in terms of \(\left(t_{\rm s}-t\right)\) is negative, i.e. \(-\left(\beta-1\right)\sigma<0\), the first term proportional to \(\left(t_{\rm s}-t\right)^{-(\beta-1)\sigma}\) is the leading one. On the other hand, if the power in terms of \(\left(t_{\rm s}-t\right)\) is positive, i.e. \(-\left(\beta-1\right)\sigma>0\), then the second, constant term becomes the leading one. Therefore, we see that for \(\beta>1\), \(\sigma>0\), \(\xi\propto\left(t_{\rm s}-t\right)^{-(\beta-1)\sigma}\), while for (\(\beta>1\), \(\sigma<0\)) and \(\left(-1<\beta<0\,,\,0<\beta<1\right)\), \(\xi\sim\xi_{\rm c}\). Therefore one arrives at the following three cases [662]:

* For \(\beta>1\), \(\sigma>0\): \(\eta\propto\left(t_{\rm s}-t\right)^{-(\beta-1)}\) and \(\xi\propto\left(t_{\rm s}-t\right)^{-(\beta-1)\sigma}\).
* For \(\beta>1\), \(\sigma<0\): \(\eta\propto\left(t_{\rm s}-t\right)^{-(\beta-1)}\) and \(\xi\sim\xi_{\rm c}\).
* For \(-1<\beta<0\,,\,0<\beta<1\): \(\eta\sim\eta_{\rm c}\) and \(\xi\sim\xi_{\rm c}\).
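The leading-order behavior in Eq. (401) can be confirmed directly: in the sketch below (our own check), the \((t_{\rm s}-t)^{-2\beta}\) pieces of Eq. (398) with \(R\sim 12H^{2}\) cancel exactly, leaving only the subleading \(\ddot{\eta}\) term.

```python
import sympy as sp

t, ts, hs, beta = sp.symbols('t t_s h_s beta', positive=True)
tau = ts - t
H = hs*tau**(-beta)                            # beta > 1 regime
eta = -4*hs/(beta - 1)*tau**(-(beta - 1))      # Eq. (401) with eta_c = 0

# Eq. (398) with R ~ 12 H^2; the tau**(-2*beta) terms cancel identically:
residual = sp.simplify(sp.diff(eta, t, 2) + 3*H*sp.diff(eta, t) + 12*H**2)
print(residual)   # -> -4*beta*h_s*(t_s - t)**(-beta - 1), subleading for beta > 1
```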
Now, having all of these, following [662] one can further investigate the possibility of finite-time future singularities in this gravitational theory when the Hubble rate is given in Eq. (59) or (99), and the results are as follows:

* For \(\sigma<0\), with \(\eta_{\rm c}\neq 0\) and \(\xi_{\rm c}=1\), Type I (Big Rip) singularity can occur for \(\beta>1\).
* If \(\eta_{\rm c}\neq 0\) and \(f_{\rm s}\eta_{\rm c}^{\sigma-1}\left(6\sigma-\eta_{\rm c}\right)+\xi_{\rm c} -1=0\), then Type II (sudden) singularity can occur for \(-1<\beta<0\).
* If \(\eta_{\rm c}\neq 0\) and \(f_{\rm s}\eta_{\rm c}^{\sigma-1}\left(6\sigma-\eta_{\rm c}\right)+\xi_{\rm c} -1=0\), then Type III singularity can occur for \(0<\beta<1\).

### Non-minimal Maxwell-Einstein Gravity

In this section, we discuss the finite-time future singularities appearing in the non-minimal Maxwell-Einstein gravity with general gravitational coupling. To begin with, we consider the following action [178; 589]: \[S_{\rm GR}=\int d^{4}x\sqrt{-g}\left[\frac{1}{2\kappa^{2}}R-\frac{1}{4}I(R)F_ {\mu\nu}F^{\mu\nu}\right]\,, \tag{408}\] where \(I(R)\) is any function of the Ricci scalar, and \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the electromagnetic field-strength tensor, in which \(A_{\mu}\) denotes the \(U(1)\) gauge field. It is well known that the coupling between the scalar curvature and the Lagrangian of the electromagnetic field, as shown in the action (408), arises in curved spacetime due to one-loop vacuum-polarization effects in Quantum Electrodynamics [663]. Now, taking the variations of the action in Eq. (408) with respect to the metric \(g_{\mu\nu}\) and the \(U(1)\) gauge field \(A_{\mu}\), one can find the gravitational field equations and the equation of motion of \(A_{\mu}\), respectively, as [178; 589] \[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\kappa^{2}T^{\rm(EM)}_{\mu\nu}\,, \tag{409}\] \[-\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}I(R)F^{\mu\nu}\right)=0\,, \tag{410}\] where \(T_{\mu\nu}^{(\rm EM)}\) in Eq. (409) denotes the contribution to the energy-momentum tensor from the electromagnetic field: \[T_{\mu\nu}^{(\rm EM)}= I(R)\left(g^{\alpha\beta}F_{\mu\beta}F_{\nu\alpha}-\frac{1}{4}g_{ \mu\nu}F_{\alpha\beta}F^{\alpha\beta}\right)\] \[+\frac{1}{2}\left[I^{\prime}(R)F_{\alpha\beta}F^{\alpha\beta}R_{ \mu\nu}+g_{\mu\nu}\Box\left(I^{\prime}(R)F_{\alpha\beta}F^{\alpha\beta}\right) -\nabla_{\mu}\nabla_{\nu}\left(I^{\prime}(R)F_{\alpha\beta}F^{\alpha\beta} \right)\right]\,, \tag{411}\] in which the prime refers to the derivative with respect to \(R\), \(\nabla_{\mu}\) denotes the covariant derivative operator associated with \(g_{\mu\nu}\), and \(\Box\equiv g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\) is the covariant d'Alembertian for a scalar field. To understand the dynamics of the Universe within this gravitational theory, we consider the flat FLRW metric and, following Ref. [589], we consider the case where only the effects of magnetic fields are present, i.e., the effects of electric fields are negligible. Additionally, we further consider that only one component of the magnetic field \(\vec{B}\) is non-zero, while the other two components vanish. Thus, it follows from \(\mathrm{div}\vec{B}=0\) that the off-diagonal components of the last term on the r.h.s. of Eq. (411) for \(T_{\mu\nu}^{(\rm EM)}\), i.e., \(\nabla_{\mu}\nabla_{\nu}\left[I^{\prime}(R)F_{\alpha\beta}F^{\alpha\beta}\right]\), are zero. Hence, all of the off-diagonal components of \(T_{\mu\nu}^{(\rm EM)}\) are zero.
Furthermore, since we have assumed that we only have the magnetic fields as background quantities at the \(0^{\rm th}\) order, the magnetic fields remain independent of the space components \(\vec{x}\). In the FLRW spacetime, the equation of motion for the \(U(1)\) gauge field in the Coulomb gauge, \(\partial^{j}A_{j}(t,\vec{x})=0\), and in the case \(A_{0}(t,\vec{x})=0\), becomes \[\ddot{A}_{i}(t,\vec{x})+\left(H+\frac{\dot{I}}{I}\right)\dot{A}_{i}(t,\vec{x})- \frac{1}{a^{2}}\stackrel{(3)}{\Delta}A_{i}(t,\vec{x})=0\,, \tag{412}\] where \(\stackrel{(3)}{\Delta}=\partial^{i}\partial_{i}\) is the flat 3-dimensional Laplacian. From Eq. (412) it follows that the Fourier mode \(A_{i}(k,t)\) satisfies the equation \[\ddot{A}_{i}(k,t)+\left(H+\frac{\dot{I}}{I}\right)\dot{A}_{i}(k,t)+\frac{k^{2 }}{a^{2}}A_{i}(k,t)=0\,, \tag{413}\] which can alternatively be expressed (by replacing the cosmic time \(t\) with the conformal time \(\eta\)) as \[\frac{\partial^{2}A_{i}(k,\eta)}{\partial\eta^{2}}+\frac{1}{I(\eta)}\frac{dI( \eta)}{d\eta}\frac{\partial A_{i}(k,\eta)}{\partial\eta}+k^{2}A_{i}(k,\eta)=0\,. \tag{414}\] We note that finding the exact solution of Eq. (414) for an arbitrary coupling function \(I(\eta)\) is not possible, while with the use of the Wentzel-Kramers-Brillouin (WKB) approximation on subhorizon scales and the long-wavelength approximation on superhorizon scales, and finally matching these solutions at the horizon crossing [664, 665], an approximate solution can be found as \[\left|A_{i}(k,\eta)\right|^{2}=\left|\bar{C}(k)\right|^{2}=\frac{1}{2kI(\eta _{k})}\left|1-\left[\frac{1}{2}\frac{1}{kI(\eta_{k})}\frac{dI(\eta_{k})}{d \eta}+i\right]k\int_{\eta_{k}}^{\eta_{\rm f}}\frac{I(\eta_{k})}{I\left(\tilde{ \eta}\right)}d\tilde{\eta}\right|^{2}\,, \tag{415}\] where \(\eta_{k}\) and \(\eta_{\rm f}\) denote the conformal time at the horizon crossing and the conformal time at the end of inflation, respectively. From Eq. (415), one may obtain the amplitude of the proper magnetic fields in the position space \[\left|B_{i}{}^{(\rm proper)}(t)\right|^{2}=\frac{k|\bar{C}(k)|^{2}}{\pi^{2}} \frac{k^{4}}{a^{4}}\,, \tag{416}\] on a comoving scale \(L=2\pi/k\). From Eq. (416) one can see that the proper magnetic fields evolve as \(|B_{i}{}^{(\rm proper)}(t)|^{2}=|\bar{B}|^{2}/a^{4}\), where \(|\bar{B}|\) is a constant [589]. This means that the impact of the coupling function \(I\) on the value of the proper magnetic fields exists only during the inflationary stage. On the other hand, the conductivity of the Universe, \(\sigma_{\rm c}\), is extremely small during the period of inflation, since at that time only a few charged particles exist. After the reheating stage, a number of charged particles are produced, so that the conductivity immediately jumps to a large value \(\sigma_{\rm c}\gg H\). Consequently, for a large enough conductivity at the reheating stage, the proper magnetic fields evolve in proportion to \(a^{-2}(t)\) in the radiation-dominated stage and in the subsequent matter-dominated stage [666].
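For a concrete feel of Eq. (414), the toy script below (entirely our own construction) integrates a single Fourier mode with a hypothetical power-law coupling \(I(\eta)\propto\eta^{n}\); the values of \(n\) and \(k\) are arbitrary and serve only as an illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy version of Eq. (414): A'' + (I'/I) A' + k^2 A = 0 in conformal time,
# with the assumed (hypothetical) coupling I(eta) ~ eta**n, so I'/I = n/eta.
n, k = 2.0, 1.0

def rhs(eta, y):
    A, dA = y
    return [dA, -(n/eta)*dA - k**2*A]

eta_i = -200.0                                   # conformal time is negative during inflation
y0 = [np.cos(k*eta_i), -k*np.sin(k*eta_i)]       # oscillatory data deep inside the horizon
sol = solve_ivp(rhs, (eta_i, -0.1), y0, rtol=1e-8, atol=1e-10)
print("|A(k)| at the end of the run:", abs(sol.y[0, -1]))
```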
Now, from (411), one can express the effective energy density and effective pressure of the Universe as follows \[\rho_{\rm eff} =\left\{\frac{I(R)}{2}+3\left[-\left(5H^{2}+\dot{H}\right)I^{ \prime}(R)+6H\left(4H\dot{H}+\ddot{H}\right)I^{\prime\prime}(R)\right]\right\} \frac{|\bar{B}|^{2}}{a^{4}}\,, \tag{417}\] \[p_{\rm eff} =\left[-\frac{I(R)}{6}+\left(-H^{2}+5\dot{H}\right)I^{\prime}(R)- 6\left(-20H^{2}\dot{H}+4\dot{H}^{2}-H\ddot{H}+\dddot{H}\right)I^{\prime\prime}(R)\right.\] \[\left.-36\left(4H\dot{H}+\ddot{H}\right)^{2}I^{\prime\prime\prime }(R)\right]\!\frac{|\bar{B}|^{2}}{a^{4}}\,, \tag{418}\] where the following relations, valid under the assumption of negligible electric fields, have been used: \(g^{\alpha\beta}F_{0\beta}F_{0\alpha}-\left(1/4\right)g_{00}F_{\alpha\beta}F^{ \alpha\beta}=|B_{i}{}^{\rm(proper)}(t)|^{2}/2\), and \(F_{\alpha\beta}F^{\alpha\beta}=2|B_{i}{}^{\rm(proper)}(t)|^{2}\). We recall that, in terms of the effective energy density and pressure, the Friedmann and Raychaudhuri equations for this gravitational theory can be written as \[\frac{3}{\kappa^{2}}H^{2} = \rho_{\rm eff}\,, \tag{419}\] \[-\frac{1}{\kappa^{2}}\left(2\dot{H}+3H^{2}\right) = p_{\rm eff}\,, \tag{420}\] where \(\rho_{\rm eff}\) and \(p_{\rm eff}\) are given in Eqs. (417) and (418), respectively. Since the Friedmann and Raychaudhuri equations (419) and (420) include \(I(R)\), depending on the functional form of \(I(R)\) the Maxwell-Einstein gravity may allow finite-time future singularities. In the following we discuss the possible finite-time future singularities. (i) _Big Rip singularity_ We consider the Hubble rate in (165). For this Hubble rate, one can derive the scale factor and the Ricci scalar as in (60) and (343). We assume that for large curvature, \(I(R)\) behaves as \[I(R)\sim I_{s}R^{\alpha}, \tag{421}\] where \(I_{s}\) and \(\alpha\) are constants. Thus, with the choice of \(I(R)\) in Eq. (421), \(\rho_{\rm eff}\) in Eq. (417) behaves as \(\left(t_{s}-t\right)^{-2\alpha+4h_{s}}\), while from the l.h.s. of the Friedmann equation, i.e., Eq. (419), \(\rho_{\rm eff}\) behaves as \(\left(t_{s}-t\right)^{-2}\). From the consistency in \(\rho_{\rm eff}\), we have \[-2=-2\alpha+4h_{s},\quad\text{i.e.,}\quad\alpha=1+2h_{s}\,. \tag{422}\] Now, from Eq. (419) one can write \[\frac{3h_{s}^{2}}{\kappa^{2}}= I_{s}\left(12h_{s}^{2}+6h_{s}\right)^{\alpha-2}\] \[\times\left\{\frac{\left(12h_{s}^{2}+6h_{s}\right)^{2}}{2}+3 \left[-\alpha\left(12h_{s}^{2}+6h_{s}\right)\left(h_{s}+5h_{s}^{2}\right)+6 \alpha\left(\alpha-1\right)h_{s}\left(2h_{s}+4h_{s}^{2}\right)\right]\right\} \frac{|\bar{B}|^{2}}{a_{s}^{4}}\,, \tag{423}\] which can be simplified with the use of \(\alpha\) from (422) as \[\frac{3h_{s}^{2}}{\kappa^{2}}=-\frac{I_{s}h_{s}\left(12h_{s}^{2}+6h_{s}\right) ^{\alpha}|\bar{B}|^{2}}{2a_{s}^{4}}\,, \tag{424}\] which clearly implies that \(I_{s}\) must be negative. This tells us that \(I(R)\) is also negative, and therefore the gauge field becomes a ghost and the model becomes physically inconsistent. Thus, it follows from Eqs. (421) and (422) that the Big Rip singularity described through the Hubble rate in Eq. (165) may emerge when, for large curvature, \(I(R)\) behaves as \(R^{1+2h_{s}}\). However, for other choices of \(I(R)\), we do not get the Big Rip singularity. We note, in fact, that for \(I(R)=I_{s}R^{\alpha}\), \(H=h_{s}/\left(t_{s}-t\right)\) is an exact solution.
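The passage from Eq. (423) to Eq. (424) under the consistency condition (422) is purely algebraic; the one-line check below (ours) confirms it.

```python
import sympy as sp

hs = sp.symbols('h_s', positive=True)
alpha = 1 + 2*hs                        # Eq. (422)
S = 12*hs**2 + 6*hs                     # the combination 12 h_s^2 + 6 h_s of Eq. (423)
inner = S**2/2 + 3*(-alpha*S*(hs + 5*hs**2) + 6*alpha*(alpha - 1)*hs*(2*hs + 4*hs**2))
# Eq. (423) coincides with Eq. (424) iff inner equals -h_s * S**2 / 2:
print(sp.expand(inner + hs*S**2/2))     # -> 0
```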
(ii) _Other types of singularities_ We now focus on other types of singularities that may arise for a more general choice of \(I(R)\). Thus, we consider the general case where the Hubble parameter takes the form of (165). For this choice of the Hubble rate, one can find the scale factor and the scalar curvature as follows \[a=a_{s}\exp\left[\frac{h_{s}}{\beta-1}\left(t_{s}-t\right)^{-\left(\beta-1 \right)}\right]\,,\quad R=6h_{s}\left[\beta+2h_{s}\left(t_{s}-t\right)^{-\left( \beta-1\right)}\right]\left(t_{s}-t\right)^{-\left(\beta+1\right)}\,, \tag{425}\] where \(a_{s}\) is a positive constant. Similar to the previous case, we assume that in the large-curvature regime, \(I(R)\) behaves as in Eq. (421). We note that the case with \(\beta>1\) is unphysical because, for \(\beta>1\), \(a\to\infty\) in the limit \(t\to t_{s}\), and as a consequence, \(\rho_{\rm eff}\to 0\) and \(p_{\rm eff}\to 0\), since \(\rho_{\rm eff}\propto a^{-4}\) and \(p_{\rm eff}\propto a^{-4}\). On the other hand, \(H\to\infty\) as \(t\to t_{s}\). Thus, we see that for \(\beta>1\), Eqs. (419) and (420) are not satisfied, leading to an unphysical case. Therefore, we restrict ourselves to \(\beta<1\) and explore the possible singularities as follows.

* If \(0<\beta<1\) and \(\alpha>0\), then \(\rho_{\rm eff}\) in Eq. (417) evolves as \(\rho_{\rm eff}\propto\left(t_{s}-t\right)^{-\alpha\left(\beta+1\right)}\); however, from the l.h.s. of Eq. (419), \(\rho_{\rm eff}\) must evolve as \(\rho_{\rm eff}\propto\left(t_{s}-t\right)^{-2\beta}\). Therefore, from the consistency, we must have \[-2\beta=-\alpha\left(\beta+1\right)\implies\beta=\frac{\alpha}{2-\alpha}, \quad\text{or}\quad\alpha=\frac{2\beta}{\beta+1}\,.\] (426) Now, from Eq. (419), one can find \[\frac{3h_{s}^{2}}{\kappa^{2}}=-\frac{I_{s}\left(6h_{s}\beta\right)^{\alpha} \left(1-\beta\right)|\bar{B}|^{2}}{2a_{s}^{4}\left(\beta+1\right)}\,,\] (427) where we have used Eq. (426). Notice that Eq. (427) demands \(I_{s}\) to be negative. Thus, in this case (i.e., \(\alpha>0\) and \(0<\beta<1\)), we see that, in the limit \(t\to t_{s}\), \(a\to a_{s}\), \(R\to\infty\), \(\rho_{\rm eff}\to\infty\), and \(|p_{\rm eff}|\to\infty\). Hence, the Type III singularity appears.
* If \(-1<\beta<0\) and \(\left(\beta-1\right)/\left(\beta+1\right)<\alpha<0\), then in the limit \(t\to t_{s}\), \(a\to a_{s}\), \(R\to\infty\), \(\rho_{\rm eff}\to 0\), and \(|p_{\rm eff}|\to\infty\). Although the final value of \(\rho_{\rm eff}\) is not a finite non-zero value but instead vanishes (\(\rho_{\rm eff}\to 0\) as \(t\to t_{s}\)), this singularity can still be considered to be of Type II, because, when \(I\) and \(H\) are given by \(I=1+I_{s}R^{\alpha}\) and \(H=H_{s}+h_{s}\left(t_{s}-t\right)^{-\beta}\) (where \(H_{s}\) is a constant), respectively, then in the limit \(t\to t_{s}\), \(\rho_{\rm eff}\to\rho_{s}\), a constant, where \(\rho_{s}\) can be found from Eqs. (417) and (419) as \(\rho_{s}=3H_{s}^{2}/\kappa^{2}=|\bar{B}|^{2}/\left(2a_{s}^{4}\right)\).
* If \(\beta<-1\), then in the limit \(t\to t_{s}\), \(a\to a_{s}\) and \(R\to 0\). Now, for \(\alpha\geq\left(\beta-1\right)/\left(\beta+1\right)\), in the limit \(t\to t_{s}\), \(\rho_{\rm eff}\to 0\), \(|p_{\rm eff}|\to 0\), and higher derivatives of \(H\) diverge. That means a Type IV singularity appears.

We further note here that for \(-1<\beta<0\) and \(\alpha>0\), \(\rho_{\rm eff}\to\infty\), but \(H\to 0\) in the limit \(t\to t_{s}\), which shows that Eq. (419) is not satisfied. That means the case with \(\alpha>0\) and \(-1<\beta<0\) is not physical, in the sense that the Friedmann equation is inconsistent. We also find that if \(\alpha<0\) and \(0<\beta<1\), then \(\rho_{\rm eff}\to 0\), but \(H\to\infty\); hence, Eq. (419) is again not satisfied. Further, if \(-1<\beta<0\) and \(\alpha\leq\left(\beta-1\right)/\left(\beta+1\right)\), then we have an unphysical scenario because in this case, in the limit \(t\to t_{s}\), \(a\to a_{s}\), \(R\to\infty\), \(\rho_{\rm eff}\to 0\), and \(|p_{\rm eff}|\to 0\), but \(\dot{H}\to\infty\). Thus, we see that Eq. (420) is not satisfied.
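The exponent-matching arguments used here reduce to elementary algebra; for completeness, a minimal check (ours) reproduces Eqs. (422) and (426).

```python
import sympy as sp

alpha, beta, hs = sp.symbols('alpha beta h_s')
# Big Rip matching, Eq. (422): -2 = -2*alpha + 4*h_s
print(sp.solve(sp.Eq(-2, -2*alpha + 4*hs), alpha))          # -> [2*h_s + 1]
# General matching, Eq. (426): -2*beta = -alpha*(beta + 1)
print(sp.solve(sp.Eq(-2*beta, -alpha*(beta + 1)), alpha))   # -> [2*beta/(beta + 1)]
```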
We mention that if \(I(R)\) is a constant (note that the case \(I(R)=1\) corresponds to the ordinary Maxwell theory), we do not find any finite-time future singularity. We close this section with two examples of \(I(R)\). Suppose \(I(R)\) takes the Hu-Sawicki form given by [158] \[I(R)=I_{\rm HS}(R)\equiv\frac{c_{1}\left(R/m^{2}\right)^{n}}{c_{2}\left(R/m^{2} \right)^{n}+1}\,, \tag{428}\] where \(c_{1}\) and \(c_{2}\) are dimensionless constants, \(n\) is a positive constant, and \(m\) denotes a mass scale. We see that the model in Eq. (428) satisfies the following conditions: (i) \(\lim_{R\to\infty}I_{\rm HS}(R)=c_{1}/c_{2}=\text{const}\) and (ii) \(\lim_{R\to 0}I_{\rm HS}(R)=0\). The second example of \(I(R)\), having the same features as the Hu-Sawicki form in Eq. (428), takes the form [307] \[I(R)=I_{\rm NO}(R)\equiv\frac{\left[\left(R/M^{2}\right)-\left(R_{\rm c}/M^{2} \right)\right]^{2q+1}+\left(R_{\rm c}/M^{2}\right)^{2q+1}}{c_{3}+c_{4}\left\{ \left[\left(R/M^{2}\right)-\left(R_{\rm c}/M^{2}\right)\right]^{2q+1}+\left(R_{ \rm c}/M^{2}\right)^{2q+1}\right\}}\,, \tag{429}\] where \(c_{3}\) and \(c_{4}\) are dimensionless constants, \(q\) is a positive integer, \(M\) denotes a mass scale, and \(R_{c}\) is the current curvature. One can check that \(I(R)\) given in Eq. (429) satisfies the conditions: (i) \(\lim_{R\to\infty}I_{\rm NO}(R)=1/c_{4}=\text{const}\) and (ii) \(\lim_{R\to 0}I_{\rm NO}(R)=0\). If \(\beta<-1\) and \(I(R)\) is given either by \(I_{\rm HS}(R)\) in Eq. (428) or \(I_{\rm NO}(R)\) in Eq. (429), then in the limit \(t\to t_{s}\), \(a\to a_{s}\), \(R\to 0\), \(\rho_{\rm eff}\to 0\), and \(|p_{\rm eff}|\to 0\). In addition, higher derivatives of \(H\) diverge. Thus, the Type IV singularity emerges. We therefore see that the Maxwell theory non-minimally coupled to Einstein gravity may produce finite-time future singularities depending on the form of the non-minimal gravitational coupling. We conclude that the general conditions on \(I(R)\) for which the finite-time future singularities characterized by Eq. (165) cannot emerge are that, in the limit \(t\to t_{s}\), \(I(R)\to\check{I}\) (where \(\check{I}(\neq 0)\) is a finite constant), \(I^{\prime}(R)\to 0\), \(I^{\prime\prime}(R)\to 0\), and \(I^{\prime\prime\prime}(R)\to 0\). **J.1:** Influence of non-minimal gravitational coupling on the finite-time future singularities in modified gravity In this section, we discuss the influence of the non-minimal gravitational coupling of the electromagnetic field on the \(F(R)\) gravity theory [589]. This is a generalization of the previous section IV.4. In this case, the total energy density and pressure of the Universe are given by \(\rho_{\rm tot}=\rho_{\rm eff}+\rho_{\rm MG}\) and \(p_{\rm tot}=p_{\rm eff}+p_{\rm MG}\), respectively, where \(\rho_{\rm eff}\) and \(p_{\rm eff}\) are given by Eqs. (417) and (418), respectively.
The quantities \(\rho_{\rm MG}\) and \(p_{\rm MG}\) can be found from Eqs. (156) and (157) as follows \[\rho_{\rm MG} =\frac{1}{\kappa^{2}}\left[-\frac{1}{2}f(R)+3\left(H^{2}+\dot{H }\right)f^{\prime}(R)-18\left(4H^{2}\dot{H}+H\ddot{H}\right)f^{\prime\prime }(R)\right]\,, \tag{430}\] \[p_{\rm MG} =\frac{1}{\kappa^{2}}\left[\frac{1}{2}f(R)-\left(3H^{2}+\dot{H }\right)f^{\prime}(R)+6\left(8H^{2}\dot{H}+4\dot{H}^{2}+6H\ddot{H}+\dddot{H} \right)f^{\prime\prime}(R)+36\left(4H\dot{H}+\ddot{H}\right)^{2}f^{\prime \prime\prime}(R)\right]\,. \tag{431}\] In this case, it follows from Eqs. (96), (417), (418), (430) and (431) that the flat Friedmann and Raychaudhuri equations can be given by [589] \[\frac{3}{\kappa^{2}}H^{2}=\rho_{\rm tot} =\left\{\frac{I(R)}{2}+3\left[-\left(5H^{2}+\dot{H}\right)I^{ \prime}(R)+6H\left(4H\dot{H}+\ddot{H}\right)I^{\prime\prime}(R)\right]\right\} \frac{|\bar{B}|^{2}}{a^{4}}\] \[\quad+\frac{1}{\kappa^{2}}\left[-\frac{1}{2}\left(F(R)-R\right)+3 \left(H^{2}+\dot{H}\right)\left(F^{\prime}(R)-1\right)-18\left(4H^{2}\dot{ H}+H\ddot{H}\right)F^{\prime\prime}(R)\right]\,, \tag{432}\] \[-\frac{1}{\kappa^{2}}\left(2\dot{H}+3H^{2}\right)=p_{\rm tot} =\left[-\frac{I(R)}{6}+\left(-H^{2}+5\dot{H}\right)I^{\prime}(R)-6 \left(-20H^{2}\dot{H}+4\dot{H}^{2}-H\ddot{H}+\dddot{H}\right)I^{\prime\prime}(R)\right.\] \[\left.-36\left(4H\dot{H}+\ddot{H}\right)^{2}I^{\prime\prime\prime}(R) \right]\frac{|\bar{B}|^{2}}{a^{4}}+\frac{1}{\kappa^{2}}\bigg{[}\frac{1}{2} \left(F(R)-R\right)-\left(3H^{2}+\dot{H}\right)\left(F^{\prime}(R)-1\right)\] \[+6\left(8H^{2}\dot{H}+4\dot{H}^{2}+6H\ddot{H}+\dddot{H}\right)F^{ \prime\prime}(R)+36\left(4H\dot{H}+\ddot{H}\right)^{2}F^{\prime\prime\prime}( R)\bigg{]}\,. \tag{433}\] Now, using Eqs. (432) and (433), we find \[\left[\frac{I(R)}{3}+2\left(-8H^{2}+\dot{H}\right)I^{\prime}(R) +6\left(32H^{2}\dot{H}-4\dot{H}^{2}+4H\ddot{H}-\dddot{H}\right)I^{\prime\prime}( R)-36\left(4H\dot{H}+\ddot{H}\right)^{2}I^{\prime\prime\prime}(R)\right] \frac{|\bar{B}|^{2}}{a^{4}}\] \[\quad\quad\quad\quad+\frac{1}{\kappa^{2}}\left[2\dot{H}F^{\prime} (R)+6\left(-4H^{2}\dot{H}+4\dot{H}^{2}+3H\ddot{H}+\dddot{H}\right)F^{\prime\prime} (R)+36\left(4H\dot{H}+\ddot{H}\right)^{2}F^{\prime\prime\prime}(R)\right]=0\,. \tag{434}\] Recall that in modified gravity with the ordinary Maxwell theory, if \(F(R)\) behaves as in Eq. (196), i.e., \[F(R)\sim F_{0}R+F_{1}R^{q}\,,\quad\text{where $q$ is a constant}\,, \tag{435}\] a finite-time future singularity appears. Thus, one may investigate the possibility of finite-time singularities when the non-minimal gravitational electromagnetic theory is taken into account in this \(F(R)\)-gravity model. Now, putting \(\beta=-u\) in (99), \(H\) behaves as [589] \[H\sim h_{s}\left(t_{s}-t\right)^{u}\,, \tag{436}\] where \(u\) is a positive integer; in this case there exists no finite-time future singularity. In what follows, we consider the case \(u\geq 2\). From Eq. (436), one finds that \[R\sim 6\dot{H}\sim-6uh_{s}\left(t_{s}-t\right)^{u-1}\,,\quad a\sim a_{s}\exp \left[-\frac{h_{s}}{u+1}\left(t_{s}-t\right)^{u+1}\right]\,, \tag{437}\] where in the expression of \(R\) we have kept only the leading term. Now, we examine the non-minimal gravitational coupling of the electromagnetic field characterized by \(I(R)\) which supports the solution in Eq. (437). We again consider that \(I(R)\) behaves as in Eq. (421). Concerning Eq. (434), the first and second terms on its left-hand side come from the non-minimal gravitational electromagnetic coupling and the modified-gravity sectors, respectively.
When \(t\) is close to \(t_{s}\), the leading term of the non-minimal gravitational electromagnetic coupling sector evolves as \(\left(t_{s}-t\right)^{\left(u-1\right)\left(\alpha-1\right)-2}\). On the other hand, if \(q\leq 1\), or \(q>1\) and \(u<q/\left(q-2\right)\), then the modified-gravity part, i.e., the second term on the l.h.s. of Eq. (434), evolves as \(\left(t_{s}-t\right)^{\left(u-1\right)\left(q-1\right)-2}\). Consistency thus gives \(\alpha=q\). Additionally, if we require that the leading term of the non-minimal gravitational electromagnetic coupling sector should not diverge in the limit \(t\to t_{s}\), then \(\alpha\) must satisfy the relation \(\alpha\geq\left(u+1\right)/\left(u-1\right)\). Taking only the leading terms in Eq. (434) and using \(\alpha=q\), one gets [589] \[I_{s}=\frac{a_{s}^{4}F_{1}}{|\bar{B}|^{2}\kappa^{2}}\,. \tag{438}\] When \(q>1\) and \(u\geq q/\left(q-2\right)\), the leading term of the modified-gravity sector evolves as \(\left(t_{s}-t\right)^{u-1}\). Thus, consistency here gives \(\alpha=2u/\left(u-1\right)\). In this case, considering only the leading terms in Eq. (434) and using \(\alpha=2u/\left(u-1\right)\), one gets [589] \[I_{s}=\frac{a_{s}^{4}F_{0}}{|\bar{B}|^{2}\kappa^{2}}\frac{u-1}{6u^{2}\left(u+1 \right)}\left(-6h_{s}u\right)^{-2/\left(u-1\right)}\,. \tag{439}\] Consequently, one may observe that the non-minimal gravitational coupling of the electromagnetic field given by \(I(R)\sim I_{s}R^{\alpha}\), with the specific values of \(I_{s}\) and \(\alpha\) stated above, can avoid the finite-time future singularities arising in pure modified gravity. However, on the contrary, it may also happen that the non-minimal gravitational coupling of the electromagnetic field is unable to remove the finite-time singularity, while such a coupling could make the singularity stronger (or weaker). We further assume that for large curvature, \(I(R)\) behaves exactly as in Eq. (421). Now, using the result of Eq. (178), we consider that for large curvature, \(F(R)\) behaves as \(F(R)\propto R^{\bar{q}}\), where \(\bar{q}=1-\alpha_{-}/2<1\). In this case, the Big Rip singularity characterized as in Eq. (165) appears. It follows from Eq. (422) that if \(\alpha=1+2h_{s}\), then in the limit \(t\to t_{s}\), \(\rho_{\rm eff}\to\infty\) and \(|p_{\rm eff}|\to\infty\), which means that the singularity becomes stronger when the non-minimal gravitational coupling of the electromagnetic field is taken into account. Let us now consider a more general case. If \(H\) evolves as in Eq. (99) with \(0<\beta<1\), which corresponds to a Type III singularity appearing when \(F(R)\) takes the form of Eq. (185), and \(\alpha>0\), then in the limit \(t\to t_{s}\), \(\rho_{\rm eff}\to\infty\) and \(|p_{\rm eff}|\to\infty\). Thus, we see that the non-minimal gravitational coupling of the electromagnetic field makes the singularity stronger. If \(-1<\beta<0\), then there exists a Type II singularity, which can appear when \(F(R)\) takes the form of Eq. (187); for \(\left(\beta-1\right)/\left(\beta+1\right)<\alpha<0\), one finds \(\rho_{\rm eff}\to 0\) and \(|p_{\rm eff}|\to\infty\). For \(|p_{\rm tot}|>|p_{\rm MG}|\) (or \(|p_{\rm tot}|<|p_{\rm MG}|\)), the non-minimal gravitational coupling of the electromagnetic field can make the singularity stronger (weaker). In conclusion, we observe that the non-minimal gravitational coupling in the Maxwell theory can qualitatively influence the future dynamics of the Universe.
### Semi-classical Gravity

In Einstein's GR, where the gravitational equations are given by \(G_{\mu\nu}=8\pi GT_{\mu\nu}\), the spacetime geometry and the matter distribution are classical in nature. However, in a physical scenario where the matter evolution follows the principles of quantum mechanics, the energy-momentum tensor should be an operator \(\hat{T}_{\mu\nu}\) in the quantum world. Thus, in order to realize a consistent picture, the spacetime geometry \(G_{\mu\nu}\) needs to be quantized [667; 668]; however, this theory is still under construction. On the other hand, an alternative semi-classical approach can be furnished in which the spacetime geometry remains classical but is sourced by the quantum expectation value of the energy-momentum tensor operator [669], i.e., \(G_{\mu\nu}=8\pi\langle\psi|\hat{T}_{\mu\nu}|\psi\rangle\) (\(|\psi\rangle\) being the quantum state of matter, which evolves with the spacetime), as proposed by Møller and Rosenfeld [670; 671]. In this section, we discuss the finite-time future singularities appearing in semi-classical gravity. We follow the notation of [672], where we consider the spatially flat FLRW Universe characterized by the line element \(ds^{2}=dt^{2}-a^{2}(t)\delta_{ij}dx^{i}dx^{j}\), and thus the stress tensor for a perfect fluid is given by \(T_{\mu}^{\;\nu}=-p\delta_{\mu}^{\nu}+(\rho+p)u_{\mu}u^{\nu}\). Then, if one considers some massless fields conformally coupled to gravity, the vacuum stress tensor acquires an anomalous trace given by [672] \[T_{\rm vac}=\alpha\Box R-\frac{\beta}{2}G\,, \tag{440}\] where \(R\) is the Ricci scalar and \(G\) is the Gauss-Bonnet invariant defined in (141). In terms of the Hubble rate one has \[T_{\rm vac}=6\alpha\left(\dddot{H}+12H^{2}\dot{H}+7H\ddot{H}+4\dot{H}^{2}\right)-1 2\beta(H^{4}+H^{2}\dot{H})\,. \tag{441}\] On the other hand, the coefficients \(\alpha\) and \(\beta\) are fixed by the regularization process. For instance, if one uses adiabatic regularization one has [673] \[\alpha= \,\frac{1}{2880\pi^{2}}(N_{0}+6N_{1/2}+12N_{1})>0\,,\] \[\beta= -\,\frac{1}{2880\pi^{2}}(N_{0}+\frac{11}{2}N_{1/2}+62N_{1})<0\,, \tag{442}\] while point splitting gives [672] \[\alpha= \,\frac{1}{2880\pi^{2}}(N_{0}+3N_{1/2}-18N_{1})\,,\] \[\beta= \,-\,\frac{1}{2880\pi^{2}}(N_{0}+\frac{11}{2}N_{1/2}+62N_{1})\,, \tag{443}\] where \(N_{0}\) is the number of scalar fields, \(N_{1/2}\) the number of four-component neutrinos, and \(N_{1}\) the number of electromagnetic fields. What is important, as pointed out in Ref. [674], is that the coefficient \(\alpha\) is arbitrary, being influenced by the regularization method and by the fields present in the Universe, whereas \(\beta\) is independent of the regularization scheme and is always negative. Now, we are interested in the value of the vacuum energy density, namely \(\rho_{\rm vac}\). Since the trace is given by \(T_{\rm vac}=\rho_{\rm vac}-3p_{\rm vac}\), inserting this expression in the conservation equation \(\dot{\rho}_{\rm vac}+3H(\rho_{\rm vac}+p_{\rm vac})=0\) one gets \[\dot{\rho}_{\rm vac}+4H\rho_{\rm vac}-HT_{\rm vac}=0\,, \tag{444}\] which is a first-order linear differential equation that can be integrated using the variation-of-constants method, leading to \[\rho_{\rm vac}=6\alpha\left(3H^{2}\dot{H}+H\ddot{H}-\frac{1}{2}\dot{H}^{2} \right)-3\beta H^{4}+Ca^{-4}\,, \tag{445}\] where \(C\) is a constant of integration which for the flat FLRW spacetime vanishes.
This can be understood as follows: for a static spacetime, \(\rho_{\rm vac}\) reduces to \(Ca^{-4}\), while the flat FLRW spacetime reduces to the Minkowski one, for which \(\rho_{\rm vac}=0\); hence \(C=0\). Therefore, in semi-classical gravity the Friedmann equation becomes \[H^{2}=\frac{(\rho+\rho_{\rm vac})\kappa^{2}}{3}\,. \tag{446}\] First of all, we consider the simplest case, \(\alpha=0\). The Friedmann equation becomes \[H^{2}=\frac{\rho\kappa^{2}}{3}-\beta\kappa^{2}H^{4}\,, \tag{447}\] which implies that \(H^{2}\leq\frac{1}{-\beta\kappa^{2}}=\frac{1}{|\beta|\kappa^{2}}\), because the energy density \(\rho\) must be positive. Differentiating Eq. (447) with respect to the cosmic time and using the conservation equation, one gets the following Raychaudhuri equation \[\dot{H}=-\frac{1}{2}\frac{(\rho+p)\kappa^{2}}{(1-2|\beta|H^{2}\kappa^{2})}\,, \tag{448}\] which for a fluid with linear EoS, \(p=(\gamma-1)\rho\), can be written as \[\dot{H}=-\frac{3\gamma}{2}H^{2}\frac{(1-H^{2}|\beta|\kappa^{2})}{(1-2H^{2}| \beta|\kappa^{2})}\,. \tag{449}\] Then, for a phantom fluid (\(\gamma<0\)), there are two different situations: 1. \(0<H<\frac{1}{\sqrt{2|\beta|}\,\kappa}\). In this case \(\dot{H}\) is positive, and thus the Hubble rate always increases. Since \(\dot{H}\) grows without bound as \(H\) approaches \(\frac{1}{\sqrt{2|\beta|}\,\kappa}\), one can deduce that this value is reached in a finite time \(t_{s}\), at which \(\dot{H}\) diverges (and also the Ricci scalar), meaning that we have a Type II ("sudden") singularity. 2. \(\frac{1}{\sqrt{2|\beta|}\,\kappa}<H<\frac{1}{\sqrt{|\beta|}\,\kappa}\). Now \(\dot{H}\) is negative, and once again the value \(\frac{1}{\sqrt{2|\beta|}\,\kappa}\) of the Hubble rate is reached in a finite time; thus \(\dot{H}\) becomes singular at a finite time and a Type II singularity is obtained. In fact, the time \(t_{s}\) can be calculated analytically, because Eq. (449) may be integrated, leading to [403] \[-\frac{1}{H(t)}-\frac{1}{2h}\ln\left(\frac{h+H(t)}{h+H_{i}}\,\frac{h-H_{i}}{h-H(t)}\right)=-\frac{3\gamma}{2}(t-\bar{t})\,,\] (450) where, to simplify, we have introduced the notation \(h\equiv\frac{1}{\sqrt{|\beta|}\,\kappa}\), and we have defined \(\bar{t}=-\frac{2}{3\gamma H_{i}}\) with \(H_{i}=H(0)\). Then, when \(H=\frac{h}{\sqrt{2}}\) one gets \[t_{s}=\bar{t}+\frac{2\sqrt{2}}{3\gamma h}\left[1-\frac{1}{2\sqrt{2}}\ln\left((\sqrt{2}- 1)^{2}\frac{h+H_{i}}{h-H_{i}}\right)\right]>0\,.\] (451) Finally, we consider the case \(\alpha\neq 0\), and we look for future singular solutions whose leading term for the energy density is \(\rho_{s}(t_{s}-t)^{\mu}\). Inserting this expression in the conservation equation \(\dot{\rho}=-3H(\rho+p)\), one gets the leading term of the Hubble rate \[H(t)\sim\frac{\mu}{3\gamma(t_{s}-t)}\,. \tag{452}\] Plugging this expression into the modified Friedmann equation and picking up the leading term, one gets \(\mu=-4\), meaning that \(\gamma\) must be negative (i.e., a phantom fluid) in order to have a positive Hubble rate; but in that case one has \(\rho_{s}<0\) when \(\alpha\) is positive and \(\beta\) is negative, obtaining a negative value of the energy density of the form \[\rho_{s}=-\frac{\mu^{2}}{\gamma^{2}}\left[\alpha\left(\frac{2\mu}{3\gamma}+1 \right)-\beta\frac{\mu^{2}}{27\gamma^{2}}\right]. \tag{453}\] Thus, we see that for positive values of the parameter \(\alpha\) this is unrealistic, indicating that when \(\alpha>0\) there are no future singular solutions in the expanding phase. However, if one allows all values of the parameter \(\alpha\), as can be seen from Eq. (453), then there exist finite-time future singularities. For example, for a phantom fluid with \(\alpha<0\) and \(\beta<0\) such that \(\alpha\left(\frac{2\mu}{3\gamma}+1\right)-\beta\frac{\mu^{2}}{27\gamma^{2}}<0\), one gets \(\mu=-4\) but now with \(\rho_{s}>0\). In fact, a very detailed study was done in Ref. [403], where it was shown that the Universe bounces and, for \(-1<\frac{\beta}{3\alpha}<0\), develops a finite-time future singularity in the contracting phase. On the contrary, for \(\frac{\beta}{3\alpha}<-1\) one gets a Universe bouncing infinitely many times, and thus without future singularities.
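Returning to the \(\alpha=0\) case, the finite-time nature of the singularity can also be seen numerically. The sketch below (our own toy run, in units with \(\kappa=1\) so that \(h=1/\sqrt{|\beta|}\), and with arbitrary parameter values) integrates Eq. (449) for a phantom fluid and compares the moment at which \(H\) reaches \(h/\sqrt{2}\) with the analytic \(t_{s}\) of Eq. (451).

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, h, Hi = -0.2, 1.0, 0.3            # arbitrary sample values, kappa = 1

def Hdot(t, H):
    x = H[0]                             # Eq. (449)
    return [-1.5*gamma*x**2*(1 - x**2/h**2)/(1 - 2*x**2/h**2)]

target = h/np.sqrt(2)
hit = lambda t, H: H[0] - (target - 1e-6)    # stop just below H = h/sqrt(2)
hit.terminal, hit.direction = True, 1
sol = solve_ivp(Hdot, (0, 1e3), [Hi], events=hit, rtol=1e-8, atol=1e-10)

tbar = -2/(3*gamma*Hi)
arg = (np.sqrt(2) - 1)**2*(h + Hi)/(h - Hi)
ts = tbar + 2*np.sqrt(2)/(3*gamma*h)*(1 - np.log(arg)/(2*np.sqrt(2)))   # Eq. (451)
print("numerical t_s ~", sol.t_events[0][0], "; analytic t_s =", ts)    # both ~ 4.4909
```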
Now, we consider the non-linear EoS, \(p=-\rho-f(\rho)\), with \(f(\rho)=\frac{A}{\sqrt{3}\kappa}\rho^{\nu+\frac{1}{2}}\), as in Section II (see also the analysis done in [675]). As discussed in Section II, for this non-linear EoS, the Hubble rate and the energy density evolve respectively as \(H(t)=h_{s}(t_{s}-t)^{-\frac{1}{2\nu}}\) and \(\rho(t)=\rho_{s}(t_{s}-t)^{-1/\nu}\) when \(\nu\) does not vanish, and one can quickly recall that: 1. For \(\nu<-1/2\) and \(A<0\), there is a Type II (sudden) singularity. 2. For \(-1/2<\nu<0\) and \(A<0\), there is a generalized sudden singularity. 3. For \(0<\nu<1/2\) and \(A>0\) (phantom fluid), there is a Big Rip singularity. 4. For \(\nu>1/2\) and \(A>0\) (phantom fluid), there is a Big Freeze singularity. Then, allowing all values of the parameter \(\alpha\) and taking \(\beta<0\), our goal is to investigate whether quantum effects are able to avoid or mitigate these singularities. Once again we look for future solutions whose leading term of the energy density is \(\rho(t)\sim\rho_{s}(t_{s}-t)^{\mu}\). Inserting it into the conservation equation \(\dot{\rho}=3Hf(\rho)\), one gets \[H(t)\sim-\frac{\mu\kappa\rho_{s}^{\frac{1}{2}-\nu}}{\sqrt{3}A}(t_{s}-t)^{\frac {\mu}{2}-1-\mu\nu}\equiv\bar{h}_{s}(t_{s}-t)^{a}\,. \tag{454}\] Now, plugging the leading terms of the energy density and the Hubble rate into the semi-classical Friedmann equation (446), one finds a variety of possibilities: 1. \(\mu<0\) (\(\Longrightarrow A>0\)) and \(a\geq 0\): In this case, for a phantom fluid, the energy density is singular, but the Hubble rate vanishes at the singularity or is constant (\(a=0\)). The dominant terms of \(\rho_{\rm vac}\) are \[-3\alpha\dot{H}^{2}\sim-3\alpha a^{2}\bar{h}_{s}^{2}(t_{s}-t)^{2a-2}\,,\] (455) and \[6\alpha H\ddot{H}\sim 6\alpha a(a-1)\bar{h}_{s}^{2}(t_{s}-t)^{2a-2}\,.\] (456) Then, equating these terms to \(-\rho\) one gets \[2a-2=\mu\Longleftrightarrow\mu\nu=-2\,,\] (457) and \(\rho_{s}=3\alpha a(a-2)\bar{h}_{s}^{2}\), meaning that this solution is only obtained for negative values of the parameter \(\alpha\) because, as we will see, \(0\leq a<1\). On the other hand, the conditions \(\mu<0\) and \(a\geq 0\) are equivalent to \[-2\leq\mu<0\ \Longleftrightarrow\ 1\leq\nu<\infty\,,\] (458) where we have used the relation \(\mu\nu=-2\). And since \(a=\frac{\mu}{2}-1-\mu\nu=\frac{\mu}{2}+1\), one concludes that \(0\leq a<1\), as already explained. Finally, the leading terms of the Hubble rate and the energy density are given by \[H(t)\sim\frac{2\kappa\rho_{s}^{\frac{1}{2}-\nu}}{\sqrt{3}A\nu}(t_{s}-t)^{- \frac{1}{\nu}+1},\quad\text{and}\quad\rho(t)\sim\rho_{s}(t_{s}-t)^{-\frac{2}{ \nu}}\,.\] (459)
2. \(\mu<0\) (\(\Longrightarrow A>0\)) and \(-1<a<0\): In this case the dominant terms of the vacuum energy density are, as in the previous case, \(-3\alpha\dot{H}^{2}\) and \(6\alpha H\ddot{H}\). Here we also have \(\mu\nu=-2\) and \(\rho_{s}=3\alpha a(a-2)\bar{h}_{s}^{2}\), but now \(\alpha\) must be positive because we are assuming that \(a\) is negative. On the other hand, the condition \(-1<a<0\) leads to \[-4<\mu<-2\Longleftrightarrow\frac{1}{2}<\nu<1\,.\] (460) So, in this case, for a phantom fluid, the Hubble rate and the energy density evolve as in Eq. (459), but both quantities are singular at \(t=t_{s}\). 3. \(\mu<0\) (\(\Longrightarrow A>0\)) and \(a=-1\): All the terms of \(\rho_{\rm vac}\) scale as \((t_{s}-t)^{-4}\), which means that \(\mu\) must be \(-4\) and \(\nu=1/2\), so the EoS is linear and this case reduces to the one previously studied. 4. \(\mu<0\) (\(\Longrightarrow A>0\)) and \(a<-1\): Now the dominant term of \(\rho_{\rm vac}\) is \(-3\beta H^{4}\sim-3\beta\bar{h}_{s}^{4}(t_{s}-t)^{4a}\), and the leading terms must satisfy \[\rho_{s}(t_{s}-t)^{\mu}-3\beta\bar{h}_{s}^{4}(t_{s}-t)^{4a}=0\,,\] (461) which means that \(\mu=4a\) and \(\rho_{s}=3\beta\bar{h}_{s}^{4}<0\), showing that no singular solutions arise in this case. Note that the condition \(\mu=4a\) together with \(a<-1\) leads to \[\mu<-4\quad\Longleftrightarrow\quad\frac{1}{4}<\nu<\frac{1}{2}\,.\] (462) Summing up, for a phantom fluid with EoS \(p=-\rho-\frac{A}{\sqrt{3}\kappa}\rho^{\nu+\frac{1}{2}}\), taking into account the vacuum effects due to conformally coupled massless fields, we have the following observations: 1. For \(1\leq\nu<\infty\), there are future singular solutions where the energy density diverges but not the Hubble rate, when the parameter \(\alpha\) is negative. 2. For \(\frac{1}{2}<\nu<1\), there are future singular solutions where both the energy density and the Hubble rate diverge, when the parameter \(\alpha\) is positive. 3. For \(\nu=\frac{1}{2}\), this is the linear case and, as we have already seen, there are singular solutions only when the condition \(\alpha\left(\frac{2\mu}{3\gamma}+1\right)-\beta\frac{\mu^{2}}{27\gamma^{2}}<0\) with \(\mu=-4\) and \(\gamma=-A\) is satisfied. 4. For \(\nu<\frac{1}{2}\), no future singular solutions arise.

## Singularities in braneworld models

In this section, we shall discuss the emergence of finite-time singularities of unusual form and nature admitted in braneworld theory, a fascinating framework in which it is argued that our observable Universe could be a \(1+3\)-dimensional surface (named the "brane") immersed in a \(1+3+d\)-dimensional spacetime (named the "bulk"), with standard model particles and fields confined to the brane while gravity can freely access the bulk [676; 677]. Braneworld gravity caught the attention of the scientific community, and the models have been investigated widely from both theoretical and observational perspectives; see, for instance, Refs. [678; 679; 700; 701; 702; 703; 704; 705; 706; 707; 708; 709; 710; 711; 712; 713; 714] (also see the review articles on braneworld gravity and cosmology [715; 716; 717]).
One of the pioneering works in braneworld geometry is the Randall-Sundrum scenario, described by the following action [715; 716] \[S=\int d^{5}x\sqrt{-g^{(5)}}(2R^{(5)}+\Lambda_{5})+\int d^{4}x\sqrt{-g}\,\sigma+S_ {\rm matter}\,, \tag{463}\] where \(g^{(5)}_{\mu\nu}\) is the bulk metric, \(R^{(5)}\) the Ricci scalar in the bulk, \(\Lambda_{5}\) is the bulk cosmological constant, \(\sigma\) is the brane tension, and \(S_{\rm matter}\) stands for the action of the matter sector as in (244). For this action, the corresponding Friedmann equation is given by \[H^{2}=\frac{\kappa^{2}\rho}{3}\left(1+\frac{\rho}{2\sigma}\right)\,. \tag{464}\] As we will see and study in Section VII, when the brane tension is negative, defining \(\rho_{c}=-2\sigma\), one obtains the so-called _holonomy corrected Friedmann equation_. Since we will study the negative case in Section VII, we consider here \(\sigma>0\). Then, the Friedmann equation in the flat FLRW spacetime can be written as follows \[\rho=\sigma\left(-1+\sqrt{1+\frac{6H^{2}}{\kappa^{2}\sigma}}\,\,\right)\,, \tag{465}\] where one can see that when the brane tension goes to infinity, one recovers the usual Friedmann equation of GR. It is not difficult to see that when the brane tension is positive, one obtains the same kinds of singularities as in GR. Here, we only show that for a linear EoS of the form \(p=w\rho\) with \(w<-1\), one has the Big Rip singularity. Effectively, the conservation equation is given by \[\dot{\rho}=-\sqrt{3}\,\kappa\sqrt{\rho}\sqrt{1+\frac{\rho}{2\sigma}}\,\rho(1+w)\,, \tag{466}\] which for large values of the energy density reduces to \[\dot{\rho}=-\sqrt{\frac{3}{2\sigma}}\kappa\rho^{2}(1+w)\,, \tag{467}\] whose solution is given by \[\rho(t)=-\sqrt{\frac{2\sigma}{3}}\frac{1}{\kappa(1+w)}\frac{1}{t_{s}-t}\,, \tag{468}\] where we have introduced the notation \(t_{s}\equiv t_{0}-\sqrt{\frac{2\sigma}{3}}\frac{1}{\kappa\rho_{0}(1+w)}\), with \(\rho_{0}\) the energy density at the present time \(t_{0}\). Then, since \(w<-1\), we can see that \(t_{s}>t_{0}\), and thus the singularity appears as a Big Rip one. Now we consider the more elaborate action [718] \[S=M^{3}\sum_{i}\left[\int_{\rm bulk}(\mathcal{R}-2\Lambda_{i})-2\int_{\rm brane }K\right]+\int_{\rm brane}(m^{2}R-2\sigma)+\int_{\rm brane}L(h_{\alpha\beta}, \phi)\,, \tag{469}\] where \(M\) is the fundamental scale of the theory, \(\Lambda_{i}\) is the cosmological constant in the \(i\)-th bulk component, \(\sigma\) the brane tension, \(h_{\alpha\beta}\) the induced metric, \(\mathcal{R}\) the Ricci scalar of the induced metric, \(K\) the extrinsic curvature, \(L(h_{\alpha\beta},\phi)\) corresponds to the presence of matter fields \(\phi\) on the brane interacting with the induced metric and describes their dynamics, and \(m^{2}\) arises when one considers quantum effects generated by the matter fields residing on the brane. Imposing the \(Z_{2}\) reflection symmetry, which requires equal cosmological constants on the two sides of the brane, i.e., \(\Lambda_{1}=\Lambda_{2}\equiv\Lambda\), the corresponding Friedmann equation is given by [718; 719; 720] \[H^{2}+\frac{k^{2}}{a^{2}}=\frac{\rho+\sigma}{3m^{2}}+\frac{2}{l^{2}}\left[1\pm \sqrt{1+l^{2}\left(\frac{\rho+\sigma}{3m^{2}}-\frac{\Lambda}{6}-\frac{C}{a^{4} }\right)}\right]\,, \tag{470}\] where \(k\) denotes the spatial curvature, \(l^{2}=m^{2}/M^{3}\) is a length scale, and the term \(C/a^{4}\), sometimes referred to as the _dark radiation_, arises due to the projection of the bulk gravitational degrees of freedom onto the brane.
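Before moving on, we note that the exact relation (465) and the Big Rip solution (468) can be verified symbolically; the sketch below is our own check.

```python
import sympy as sp

H, sigma, kappa, t, ts = sp.symbols('H sigma kappa t t_s', positive=True)
rho = sp.symbols('rho', positive=True)
w = sp.symbols('w')                      # physically w < -1 here

# Solving Eq. (464) for rho; the positive branch is Eq. (465):
print(sp.solve(sp.Eq(H**2, kappa**2*rho/3*(1 + rho/(2*sigma))), rho))

# Eq. (468) solves the high-density limit (467):
rho_t = -sp.sqrt(2*sigma/3)/(kappa*(1 + w)*(ts - t))
residual = sp.diff(rho_t, t) + sp.sqrt(sp.Rational(3, 2)/sigma)*kappa*rho_t**2*(1 + w)
print(sp.simplify(residual))   # -> 0
```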
The two new singularities that we will discuss in this section are connected with the fact that the expression under the square root of Eq. (470) turns to zero at some point during the evolution of the Universe, so that solutions of the cosmological equations cannot be continued beyond this point. These two types of _quiescent_ singularities display the following behavior [718]: 1. The first type of singularity (labeled 'S1'), which is essentially induced by the presence of the dark radiation term in the square root of Eq. (470), arises in one of the following two cases: 1. \(C>0\) and the density of matter increases slower than \(a^{-4}\) in the limit \(a\to 0\). Such singularities may appear if the Universe is filled with a matter sector having EoS \(\frac{p}{\rho}<\frac{1}{3}\). An example is pressureless matter (dust), for which \(\rho\propto a^{-3}\). In addition, a special case in which it also occurs is an empty Universe (i.e., where \(\rho=0\)). 2. The energy density of the Universe is radiation dominated, that means \(\rho=\frac{\rho_{0}}{a^{4}}\), and in addition, \(C>\rho_{0}\). The above singularities may occur either in the past of an expanding Universe or in the future of a collapsing one, i.e., when the spatial curvature is \(k=1\). 2. A second type of singularity (labeled 'S2') arises when \[l^{2}\left(\frac{\sigma}{3m^{2}}-\frac{\Lambda}{6}\right)<-1\,.\] (471) In this case, it is important to realize that the combination \((\rho/3m^{2}-C/a^{4})\) decreases monotonically with the expansion of the Universe. As a consequence, the expression under the square root of Eq. (470) can become zero at some late time, beyond which the cosmological solution cannot be extended. Let us note that (S2) has some more interesting features than (S1), because 1. It may appear during the late-time expansion of the Universe. 2. It may occur even if dark radiation is totally absent, i.e., \(C=0\). Note that in both (S1) and (S2), the scale factor and its first time derivative remain finite; however, all the higher derivatives of \(a(t)\) with respect to the cosmic time tend to infinity as the singularity is approached. This is due to the fact that when one takes the temporal derivative of Eq. (470), the square root appears in the denominator, and thus, since it vanishes at the singularity, all the derivatives of the Hubble rate diverge at the singular time. Now let us consider the general braneworld without \(Z_{2}\) symmetry. In this case, the Friedmann equation for the brane embedded into the five-dimensional bulk is given by [721] \[m^{2}\left(H^{2}+\frac{k}{a^{2}}-\frac{\rho+\sigma}{3m^{2}}\right)^{2}=4M^{6} \left(H^{2}+\frac{k}{a^{2}}-\frac{\Lambda}{6}-\frac{C}{a^{4}}\right)-\frac{M^ {12}}{36m^{4}}\left[\frac{E/a^{4}}{H^{2}+k/a^{2}-(\rho+\sigma)/3m^{2}}\right]^ {2}\,, \tag{472}\] where \(E\) is a constant of integration. In this case we can see that the singularity (S2) is always present in the past of the expanding brane. The reason for this rests in the negative character of the last term on the right-hand side of Eq. (472), which rapidly grows in absolute value as \(a\to 0\), while the left-hand side of this equation is constrained to remain positive. In the case of an expanding brane, the last term on the right-hand side of Eq. (472) rapidly decays and becomes unimportant. Therefore, provided Eq. (471) is satisfied, the expanding Universe will encounter an (S2) singularity in the future.
Finally, we note that the singularities (S1) and (S2) do not appear for \(m=0\); only the usual GR singularities appear. Effectively, in this situation Eq. (472) reduces to \[H^{2}+\frac{k}{a^{2}}=\frac{\Lambda}{6}+\frac{C}{a^{4}}+\frac{(\rho+\sigma)^{2}}{36M^{6}}+\frac{M^{6}E^{2}}{16a^{8}(\rho+\sigma)^{2}}\,, \tag{473}\] which only admits cosmological singularities when the scale factor vanishes, associated with an infinite density of matter and dark radiation (\(C/a^{4}\)) or with the divergence of the last term in Eq. (473).

## VI Singularities in matter creation models

The theory of matter creation, or particle creation, plays a crucial role in the understanding of the dynamical evolution of our Universe. A consistent framework of continuous matter creation was initiated by Parker and his collaborators [722; 723; 724; 725; 726; 727; 728], and by Zeldovich and others [729; 730; 731; 732; 733; 734], through investigations of the material content of the Universe. Within this framework, the existing material content of the Universe is a result of the continuous creation of radiation and matter particles due to the gravitational field of the expanding Universe acting on the quantum vacuum. These produced particles carry their own mass, energy and momentum. The matter creation theory gained significant attention after the pioneering work by Prigogine et al [735], who showed how to insert the creation of matter into Einstein's gravitational equations. This was achieved through the modification of the usual conservation equation as follows [735] \[(nu^{\mu})_{;\mu}=n\Gamma\;, \tag{474}\] where \(n\) is the particle number density, \(u^{\mu}\) is the usual particle four velocity and \(\Gamma\) denotes the particle creation or matter creation rate. The particle creation rate \(\Gamma\) is the heart of this theory, since this quantity controls the dynamics of the Universe by modifying its expansion history. From the thermodynamical point of view, since the entropy flux vector of the matter field undergoing particle creation, \(s^{\mu}=n\sigma u^{\mu}\), where \(\sigma\) is the entropy per particle (specific entropy), must satisfy the second law of thermodynamics, i.e., \(s^{\mu}_{;\mu}\geq 0\), one can derive that \(\Gamma\geq 0\) [736]. According to Parker's theorem [737], the production of particles is heavily suppressed in the radiation dominated era, and hence it is considered to have no effect during that epoch. After Prigogine et al [735], the thermodynamics of particle creation was discussed in detail through the covariant formalism [736; 738]. Special attention was given to the 'adiabatic' or 'isentropic' particle creation, where the entropy per particle remains constant.
The cosmological scenarios driven by such adiabatic particle creation have been investigated widely over the years with many interesting results [89; 334; 335; 336; 337; 338; 339; 340; 341; 342; 343; 344; 345].

### Constant Matter Creation Rate

As the simplest scenario, let us consider a constant matter creation rate, \(\Gamma=\Gamma_{c}\), together with a linear EoS \(p=(\gamma-1)\rho\) for the created fluid. In this case, the Raychaudhuri equation reduces to \[\dot{H}=-\frac{\gamma}{2}\left(3H^{2}-\Gamma_{c}H\right)\,, \tag{477}\] which can be integrated into \[H(t)=\frac{\Gamma_{c}}{3}\left[\frac{\frac{H_{0}}{H_{0}-\Gamma_{c}/3}\exp\left(\frac{\Gamma_{c}\gamma}{2}(t-t_{0})\right)}{\frac{H_{0}}{H_{0}-\Gamma_{c}/3}\exp\left(\frac{\Gamma_{c}\gamma}{2}(t-t_{0})\right)-1}\right]\,, \tag{478}\] where \(t_{0}\) and \(H_{0}\) denote, once again, the present values of the cosmic time and the Hubble parameter, respectively. One can clearly see from Eq.
(478) that if for some finite time \(t=t_{s}\) the denominator of the right-hand side of Eq. (478) vanishes, then the model develops a finite-time singularity. The condition for the finite-time singularity is \[\frac{H_{0}}{H_{0}-\Gamma_{c}/3}\exp\left(\frac{\Gamma_{c}\gamma}{2}(t_{s}-t_{0})\right)-1=0\,, \tag{479}\] which yields \[t_{s}=t_{0}+\frac{2}{\Gamma_{c}\gamma}\ln\left(\frac{H_{0}-\frac{\Gamma_{c}}{3}}{H_{0}}\right)\,, \tag{480}\] where the condition \(H_{0}-\Gamma_{c}/3>0\) must be satisfied. Naturally, one can see that \(t_{s}<t_{0}\), since \(H_{0}-\Gamma_{c}/3<H_{0}\), provided that \(\gamma>0\), i.e., \(p/\rho>-1\) (non-phantom fluid); that is, one has a Big Bang singularity. On the contrary, when \(\gamma<0\), i.e., \(p/\rho<-1\) (phantom fluid), we have \(t_{s}>t_{0}\), obtaining a Big Rip singularity. In fact, one can check that \[H(t)=\frac{\Gamma_{c}}{3}\left[\frac{\exp\left(\frac{\Gamma_{c}\gamma}{2}(t-t_{s})\right)}{\exp\left(\frac{\Gamma_{c}\gamma}{2}(t-t_{s})\right)-1}\right]\,, \tag{481}\] with \(H(t)\to\infty\) as \(t\to t_{s}\), which confirms our conclusion.

### Variable Matter Creation Rate

For a time-dependent matter creation rate, one obtains a more general cosmological scenario with new possibilities. We consider the following time dependent matter creation rate [345] \[\Gamma(H)=-\Gamma_{c}+mH+\frac{n\Gamma_{c}^{2}}{H}\,, \tag{482}\] where \(m\) and \(n\) are dimensionless parameters and \(\Gamma_{c}\) is a constant. Notice that for specific values of \(m\) and \(n\), one can recover a number of matter creation rates, for example, \(\Gamma(H)=-\Gamma_{c}\) for \(m=n=0\), \(\Gamma(H)\propto H\) by setting \(\Gamma_{c}=0\), a rate dominated by the \(H^{-1}\) correction for \(m=0\), and several more. Here we restrict ourselves to the matter creation scenario in which the creation rate takes the full expression in Eq. (482). For this general matter creation rate, assuming a linear EoS for the perfect fluid given by \(p=(\gamma-1)\rho\), the Raychaudhuri equation is given by \[\dot{H}=-\frac{\gamma}{2}\Bigg{[}(3-m)H^{2}+\Gamma_{c}H-\Gamma_{c}^{2}n\Bigg{]}\,. \tag{483}\] Since the dynamics is described by a first order autonomous differential equation \[\dot{H}=F(H)=-\frac{\gamma}{2}\Bigg{[}(3-m)H^{2}+\Gamma_{c}H-\Gamma_{c}^{2}n\Bigg{]}\,, \tag{484}\] to understand it we only need to find the fixed points of the system, i.e., the points satisfying \(F(H)=0\). Then, given a fixed point \(H_{*}\), when \(\frac{dF(H_{*})}{dH}<0\) the fixed point is asymptotically stable (an attractor) and when \(\frac{dF(H_{*})}{dH}>0\) the fixed point is unstable (a repeller). For the present matter creation rate, i.e., Eq. (482), the fixed points are given by \[H_{\pm}=\frac{\Gamma_{c}}{2(m-3)}\left(1\pm\sqrt{1+4(3-m)n}\right)\,. \tag{485}\] It is also important to note that the nature of the fluid depends on the parameter \(\gamma\): for \(\gamma<0\) it behaves like a phantom fluid, while for \(\gamma>0\) it behaves like a quintessence fluid. Here, we will consider the case of a phantom fluid (the non-phantom case has been studied in detail in Ref. [345]). A simple calculation shows that for our model \[\frac{dF(H_{\pm})}{dH}=\pm\frac{\gamma\Gamma_{c}}{2}\sqrt{1+4(3-m)n}. \tag{486}\] Then, for a phantom fluid and for a positive \(\Gamma_{c}\), \(H_{+}\) is always an attractor and \(H_{-}\) is always a repeller.
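Since the dynamics reduces to a single autonomous ODE, the attractor/repeller classification is straightforward to verify numerically. A minimal sketch (with hypothetical parameter values for a phantom fluid and \(\Gamma_{c}>0\)) computes \(H_{\pm}\) from Eq. (485) and classifies them by the sign of \(dF/dH\):

```python
import numpy as np

# Hypothetical parameters: phantom fluid (gamma < 0) and positive Gamma_c
gamma, Gamma_c, m, n = -0.1, 1.0, 1.0, 1.0

def F(H):
    # Raychaudhuri equation, Eq. (484)
    return -0.5 * gamma * ((3.0 - m) * H**2 + Gamma_c * H - n * Gamma_c**2)

def dF(H, eps=1e-6):
    return (F(H + eps) - F(H - eps)) / (2.0 * eps)   # central finite difference

disc = 1.0 + 4.0 * (3.0 - m) * n
H_plus  = Gamma_c / (2.0 * (m - 3.0)) * (1.0 + np.sqrt(disc))    # Eq. (485)
H_minus = Gamma_c / (2.0 * (m - 3.0)) * (1.0 - np.sqrt(disc))

for name, H in [("H+", H_plus), ("H-", H_minus)]:
    kind = "attractor" if dF(H) < 0 else "repeller"
    print(f"{name} = {H:+.4f},  F(H) = {F(H):+.2e},  dF/dH = {dF(H):+.4f}  -> {kind}")
```

For these values \(H_{+}=-1\) is an attractor and \(H_{-}=1/2\) is a repeller, in agreement with Eq. (486) and with the region \(\Omega_{2}\) discussed below.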
In the following we show that there are six different regions in the plane of parameters \((m,n)\): 1. \(\Omega_{1}=\{(m,n):m-3>0,n<0\}\). One has \(H_{+}>0\) and \(H_{-}<0\). In addition, since \(H_{+}\) is an attractor, then for any initial condition \(H_{\rm ini}\) greater than \(H_{+}\), the Universe converges at late times asymptotically to a de Sitter phase with \(H=H_{+}\), and going back in time, the Universe has a Big Bang singularity, because for large values of the Hubble rate, the Raychaudhuri equation becomes \(\dot{H}\sim-\frac{\gamma(3-m)}{2}H^{2}\). 2. \(\Omega_{2}=\{(m,n):m-3<0,n\geq 0\}\). One has \(H_{+}<0\) and \(H_{-}>0\). Now, since \(H_{-}\) is a repeller, for any initial condition \(H_{\rm ini}\) greater than \(H_{-}\), the Hubble rate diverges in the future in a finite time because, once again, for large values of \(H\) the Raychaudhuri equation becomes \(\dot{H}\sim-\frac{\gamma(3-m)}{2}H^{2}\), and thus, a Big Rip singularity is obtained. 3. \(\Omega_{3}=\{(m,n):m-3>0,n>0,4(3-m)n>-1\}\). In this situation one has \(H_{+}>H_{-}>0\), and for any initial condition \(H_{\rm ini}\) greater than \(H_{+}\), there is a Big Bang singularity and the Universe ends in a de Sitter phase with \(H=H_{+}\). For an initial condition between \(H_{-}\) and \(H_{+}\), the Universe is non-singular, starting at \(H_{-}\) and ending in infinite time at \(H_{+}\). Finally, for an initial condition less than \(H_{-}\), the Universe enters into the contracting phase forever. 4. \(\Omega_{4}=\{(m,n):m-3<0,n\leq 0,4(3-m)n>-1\}\). Now \(H_{+}<H_{-}<0\), and thus, for any initial condition greater than \(H_{-}\), there is a Big Rip singularity. 5. \(\Omega_{5}=\{(m,n):m-3>0,n>0,4(3-m)n<-1\}\). There are no fixed points and \(\dot{H}\) is always negative, meaning that for any initial condition there is a Big Bang singularity. 6. \(\Omega_{6}=\{(m,n):m-3<0,n<0,4(3-m)n<-1\}\). Once again there are no fixed points, but now \(\dot{H}\) is positive, and thus, a Big Rip singularity always occurs. On the other hand, considering once again a phantom fluid but taking \(\Gamma_{c}<0\), one has the following observations: 1. \(\Omega_{1}=\{(m,n):m-3>0,n<0\}\). One has \(H_{-}>0\) and \(H_{+}<0\). In addition, since \(H_{-}\) is an attractor, then for any initial condition \(H_{\rm ini}\) greater than \(H_{-}\), the Universe at late times converges asymptotically to a de Sitter phase with \(H=H_{-}\), and going back in time, the Universe has a Big Bang singularity, because for large values of the Hubble rate, the Raychaudhuri equation becomes \(\dot{H}\sim-\frac{\gamma(3-m)}{2}H^{2}\). 2. \(\Omega_{2}=\{(m,n):m-3<0,n\geq 0\}\). One has \(H_{-}<0\) and \(H_{+}>0\). Now, since \(H_{+}\) is a repeller, then for any initial condition \(H_{\rm ini}\) greater than \(H_{+}\), the Hubble rate diverges in the future in a finite time because, once again, for large values of \(H\), the Raychaudhuri equation is \(\dot{H}\sim-\frac{\gamma(3-m)}{2}H^{2}\), and thus, a Big Rip singularity occurs. 3. \(\Omega_{3}=\{(m,n):m-3>0,n>0,4(3-m)n>-1\}\). In this situation one has \(H_{+}<H_{-}<0\), and for any initial condition \(H_{\rm ini}\) greater than \(H_{-}\), there is a Big Bang singularity, and the Universe enters into the contracting phase and ends in a de Sitter phase with \(H=H_{-}\). For an initial condition between \(H_{-}\) and \(H_{+}\), the Universe is non-singular, starting at \(H_{+}\) and ending in infinite time at \(H_{-}\).
Finally, for an initial condition less than \(H_{+}\), the Universe enters into the contracting phase forever. 4. \(\Omega_{4}=\{(m,n):m-3<0,n\leq 0,4(3-m)n>-1\}\). Now \(H_{+}>H_{-}>0\), and thus, for any initial condition greater than \(H_{+}\), there is a Big Rip singularity. 5. \(\Omega_{5}=\{(m,n):m-3>0,n>0,4(3-m)n<-1\}\). There are no fixed points and \(\dot{H}\) is always negative, meaning that for any initial condition there is a Big Bang singularity. 6. \(\Omega_{6}=\{(m,n):m-3<0,n<0,4(3-m)n<-1\}\). Once again there are no fixed points, but now \(\dot{H}\) is positive, and thus, a Big Rip singularity always occurs. Finally, we mention that since the matter creation rate is not known from first principles, one can consider different matter creation rates to investigate the possible occurrence of finite-time singularities.

## VII Singularities in Loop Quantum Cosmology

An approach to quantum cosmology that could avoid finite-time singularities is Loop Quantum Cosmology (LQC). For a general overview of LQC, we refer the reader to Refs. [771-813]. In LQC, the holonomy corrections modify the Friedmann equation, which in the flat FLRW spacetime becomes \(H^{2}=\frac{\kappa^{2}\rho}{3}\left(1-\frac{\rho}{\rho_{c}}\right)\), where \(\rho_{c}\) is the critical energy density. In the plane \((\rho,H)\), this equation depicts an ellipse, where the energy density is always bounded by \(0\leq\rho\leq\rho_{c}\), meaning that the singularities of Type I and III are forbidden in LQC.
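The boundedness that forbids Type I and III singularities is immediate from the elliptic shape of the holonomy corrected Friedmann equation; the short sketch below (reduced units with \(\kappa=\rho_{c}=1\), a purely illustrative choice) verifies that \(H^{2}\geq 0\) forces \(0\leq\rho\leq\rho_{c}\) and that \(H\) vanishes at the critical density, where the Big Bounce occurs.

```python
import numpy as np

kappa, rho_c = 1.0, 1.0      # reduced (hypothetical) units

def H2(rho):
    # Holonomy corrected Friedmann equation: H^2 = (kappa^2 rho / 3)(1 - rho/rho_c)
    return kappa**2 * rho / 3.0 * (1.0 - rho / rho_c)

rho = np.linspace(0.0, rho_c, 1001)
i_max = np.argmax(H2(rho))
print(f"max H^2 = {H2(rho).max():.4f} at rho = {rho[i_max]:.3f}")   # maximum at rho_c/2
print(f"H^2(rho_c) = {H2(rho_c):.1e}  -> H = 0: the bounce point")
assert np.all(H2(rho) >= 0.0) and H2(1.5 * rho_c) < 0.0            # rho > rho_c is excluded
```

The Hubble rate is therefore bounded, \(|H|\leq\kappa\sqrt{\rho_{c}/12}\), so neither \(H\) nor \(\rho\) can diverge.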
As a consequence, the Big Rip and Big Bang singularities do not exist in this theory, and it is argued that the Big Bang singularity is replaced by a Big Bounce, which is produced when the energy density reaches the value \(\rho_{c}\). In fact, from the conservation equation one can see that, for a non-phantom fluid, the movement along the ellipse depicted by the holonomy corrected Friedmann equation is clockwise, and anti-clockwise for a phantom fluid. On the other hand, the singularity of Type II appears in LQC provided that the value of the energy density at the singularity, namely \(\rho_{s}\), is less than the critical one. This is simple to understand: the Type II singularity appears when the pressure diverges for a finite value of the energy density, which happens for the nonlinear EoS \(p=-\rho-f(\rho)\) considered in section II.2, with \(\nu<-1/2\) and \(A>0\). In the same way, the Type IV singularity appears near \(\rho\cong 0\), i.e., for \(\rho\ll\rho_{c}\), that is, when the holonomy corrections can be disregarded and GR is recovered. This kind of singularity has been discussed in section II.2. Another approach to LQC is the Dapor-Liegener model, which depicts an emergent Universe from a de Sitter regime in the contracting phase [814; 815]. For this model the corresponding Friedmann equation, which is more complicated than in standard LQC, is given by [816; 817] \[H_{\pm}^{2}=\frac{\kappa^{2}\rho}{3(\gamma^{2}+1)}\left(1-\frac{\rho}{\rho_{\rm max}}\right)\left[1+\frac{2\gamma^{2}}{1\pm\sqrt{1-\frac{\rho}{\rho_{\rm max}}}}\right]\,, \tag{494}\] where \(\gamma\cong 0.2375\) is, once again, the well-known Immirzi parameter and \(\rho_{\rm max}=\frac{\rho_{c}}{4(\gamma^{2}+1)}\) is the maximum value reached by the energy density in this theory. Then, since the curves depicted by this Friedmann equation are also bounded, one gets the same kind of singularities as in standard LQC, that is, only singularities of Type II and IV could appear in LQC.

## VIII Cosmological finite-time singularities in modified gravity and in interacting multifluid cosmology: dynamical system versus finite-time cosmological singularities

Modified gravity in its various forms makes possible the realization of finite-time future cosmological singularities without the need of phantom scalars, which would be required in the case of ordinary GR [1]. The GR description of DE can only marginally accommodate a phantom fluid evolution and cosmic singularities without invoking phantom scalars, whereas modified gravity can naturally generate a phantom DE era and even cosmic singularities. Another interesting perspective, along with modified gravity, is to take into account interacting multifluids, which can be viscous or not [817; 818; 819; 820; 821; 822; 823; 824; 825; 826; 827; 828; 829; 830]. Most of the multifluid approaches in cosmology invoke an interaction between the DE and DM, which is supported theoretically by the fact that DE dominates over DM after the formation of the galactic structures. Also, the interaction between the dark sectors is further supported by the degeneracy of the DE models, since the DM density parameter \(\Omega_{m}\) cannot be accurately measured [831]. As already argued, cosmology with interacting DM-DE sectors is widely considered in the literature [16; 533; 534; 535; 536; 537; 538; 539; 540; 541; 542; 543; 544; 545; 548; 549; 551] and is quite a popular research line.
It is notable, though, that some specific interacting DM-DE models are plagued with instabilities, due to the fact that the growth of matter perturbations can be affected by an existing non-trivial interaction between the components of the dark sectors. On the other hand, baryonic fluids cannot be coupled to any of the components of the dark sector, since such a coupling would eventually result in a fifth force, which is not a physically acceptable feature in contemporary physics. Since the discovery of the late-time accelerating epoch of the Universe in the late 90's, many proposals try to model the DE fluid, which is a negative pressure fluid, with a quintessence, de Sitter or slightly phantom EoS parameter. Also, it is possible that finite-time singularities may occur in the Universe, and it is quite hard to model these in standard GR, since a phantom scalar would be required, as we already mentioned. Finite-time cosmological singularities are not perfectly understood, since they share the problems of all cosmological singularities, the main problem being geodesic incompleteness, at least for crushing types of singularities. In this section, we shall study finite-time singularities using the dynamical system approach, in the context of \(F(R)\) gravity and coupled DE-DM models. The study of the cosmological phase space for the aforementioned systems offers many insights for the complete understanding of finite-time singularities, among which the understanding of the fixed points and of the stability of the solutions; indeed, studying cosmological systems in terms of their dynamical systems is quite popular in the literature, see for example [832; 833; 834; 835; 221; 836; 837; 838; 839; 840; 841; 842; 843; 844; 845; 846; 847; 848; 849; 850; 851]. For our analysis in this section, we shall form the \(F(R)\) gravity field equations for a spatially flat FLRW spacetime in terms of an autonomous dynamical system, by using specific choices of variables. The resulting dynamical system will be studied in the vicinity of finite-time future singularities, focusing mainly on the Big Rip singularity. An important outcome of our analysis is that a Big Rip singularity in the context of \(F(R)\) gravity always occurs while the Universe is accelerating. This is due to the fact that for the Big Rip case, the dynamical system is attracted to a stable accelerating cosmological attractor, which is also the final attractor of the asymptotically autonomous \(F(R)\) gravity dynamical system. For the Big Rip case, we also reveal the leading order \(F(R)\) gravity term which can realize a Big Rip finite-time singularity, and we also consider the occurrence of Type II, III and IV singularities in \(F(R)\) gravity cosmological systems. We also stress the fact that dynamical system singularities may not necessarily indicate the presence of a physical finite-time singularity, which is quite important to note. Apart from \(F(R)\) gravity finite-time singularities, we shall also study multifluid cosmology using again the dynamical systems approach. Specifically, we shall consider a three-fluid system consisting of interacting DE-DM fluids in the presence of a non-interacting baryonic fluid. By using the cosmological equations, we shall construct an autonomous dynamical system, and we shall study the phase space of this system. We shall focus on analyzing the dynamical system singularities, using the dominant balance technique developed in Ref.
[852], and we shall examine when a dynamical system singularity is actually a true finite-time singularity of the cosmological system. As we shall show, the cosmological dynamical system has no global attractors which may drive the cosmological system to a finite-time singularity. Finally, the analysis we shall perform indicates that the dynamical system possesses de Sitter fixed points, the occurrence of which depends on the interaction between DE and DM.

### Analysis of Finite-time Singularities in \(F(R)\) Gravity via the Autonomous \(F(R)\) Gravity Dynamical System

In order to study the phase space of \(F(R)\) gravity near cosmic finite-time singularities, let us first form the dynamical system of \(F(R)\) gravity in an autonomous form [850]. Consider the vacuum \(F(R)\) gravity action (154). By varying the gravitational action (154) with respect to the metric tensor, we obtain the field equations for \(F(R)\) gravity in vacuum, \[F'(R)R_{\mu\nu}-\frac{1}{2}F(R)g_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}F'(R)+g_{\mu\nu}\Box F'(R)=0\,, \tag{495}\] which can be cast in the following Einstein-like form, \[R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=\frac{1}{F'(R)}\left[\frac{1}{2}g_{\mu\nu}\left(F(R)-RF'(R)\right)+\nabla_{\mu}\nabla_{\nu}F'(R)-g_{\mu\nu}\Box F'(R)\right]\,. \tag{496}\] For a flat FLRW metric, the \(F(R)\) gravity field equations take the forms in (199) and (200). The autonomous dynamical system of \(F(R)\) gravity can be formed by using the dimensionless variables \(x_{1}\), \(x_{2}\) and \(x_{3}\), which are defined as follows, \[x_{1}=-\frac{\dot{F}'(R)}{F'(R)H}\,,\quad x_{2}=-\frac{F(R)}{6F'(R)H^{2}}\,,\quad x_{3}=\frac{R}{6H^{2}}\,. \tag{497}\] We shall choose the \(e\)-foldings number \(N\) to quantify the dynamical evolution of the variables \(x_{1}\), \(x_{2}\) and \(x_{3}\), instead of the cosmic time. Using the transformation \[\frac{d}{dt}=H\frac{d}{dN}\,, \tag{498}\] and also by expressing the field equations with respect to the variables \(x_{1}\), \(x_{2}\) and \(x_{3}\) defined in (497), the vacuum \(F(R)\) gravity dynamical system takes the form, \[\frac{dx_{1}}{dN}=-4+3x_{1}+2x_{3}-x_{1}x_{3}+x_{1}^{2}\,,\quad\frac{dx_{2}}{dN}=8+m-4x_{3}+x_{1}x_{2}-2x_{2}x_{3}+4x_{2}\,,\quad\frac{dx_{3}}{dN}=-8-m+8x_{3}-2x_{3}^{2}\,, \tag{499}\] with the parameter \(m\) being defined as follows, \[m=-\frac{\ddot{H}}{H^{3}}\,. \tag{500}\] This parameter contains cosmic time-dependent quantities, or equivalently, variables that depend explicitly on the \(e\)-foldings number \(N\). Obviously, the presence of this parameter renders the dynamical system non-autonomous. The only case for which the \(F(R)\) gravity dynamical system is autonomous is when \(m\) takes constant values. This narrows the options for studying the full phase space; thus, by choosing \(m\) to be constant, we examine subspaces of the full \(F(R)\) gravity phase space which correspond to specific cosmological evolutions. For example, the case \(m=0\), among other things, describes a quasi-de Sitter evolution or the evolution near a quasi-de Sitter fixed point; thus the study of the dynamical system will reveal its behavior for a quasi-de Sitter evolution. The full analysis of the \(F(R)\) gravity dynamical system was performed in Ref. [850]. We shall focus on the \(m=0\) case, which is relevant for the study of some finite-time cosmological singularities. The total EoS of the system in terms of the variables \(x_{i}\) (\(i=1,2,3\)) takes the form [295], \[w_{\rm eff}=-\frac{1}{3}(2x_{3}-1)\,. \tag{501}\] Furthermore, the dimensionless variables \(x_{i}\) (\(i=1,2,3\)) satisfy the following constraint, \[x_{1}+x_{2}+x_{3}=1\,, \tag{502}\] which stems from the Friedmann equation. Let us present the phase space structure of the \(F(R)\) gravity dynamical system for the case \(m=0\), which is of particular importance.
Following [850], the fixed points of the dynamical system in terms of a general constant value of the parameter \(m\) read, \[\phi_{*}^{1} = \left(-\frac{\sqrt{-2m}+\sqrt{-2m+20\sqrt{-2m}+4}+2}{4}\,,\ \frac{3\sqrt{-2m}+\sqrt{-2m+20\sqrt{-2m}+4}-2}{4}\,,\ \frac{4-\sqrt{-2m}}{2}\right)\,,\] \[\phi_{*}^{2} = \left(-\frac{\sqrt{-2m}-\sqrt{-2m+20\sqrt{-2m}+4}+2}{4}\,,\ \frac{3\sqrt{-2m}-\sqrt{-2m+20\sqrt{-2m}+4}-2}{4}\,,\ \frac{4-\sqrt{-2m}}{2}\right)\,,\] \[\phi_{*}^{3} = \left(\frac{\sqrt{-2m}-\sqrt{-2m-20\sqrt{-2m}+4}-2}{4}\,,\ \frac{\sqrt{-2m-20\sqrt{-2m}+4}-3\sqrt{-2m}-2}{4}\,,\ \frac{\sqrt{-2m}+4}{2}\right)\,,\] \[\phi_{*}^{4} = \left(\frac{\sqrt{-2m}+\sqrt{-2m-20\sqrt{-2m}+4}-2}{4}\,,\ -\frac{\sqrt{-2m-20\sqrt{-2m}+4}+3\sqrt{-2m}+2}{4}\,,\ \frac{\sqrt{-2m}+4}{2}\right)\,. \tag{503}\] Hence, for \(m\simeq 0\), the fixed points are, \[\phi_{*}^{1}=\phi_{*}^{3}=(-1,0,2)\,,\quad\phi_{*}^{2}=\phi_{*}^{4}=(0,-1,2)\,. \tag{504}\] Neither of the above two fixed points is hyperbolic for \(m=0\); therefore the stability analysis must be performed numerically. This analysis was carried out in Ref. [850], where it was shown that the fixed point \(\phi_{*}^{1}\) is stable, while \(\phi_{*}^{2}\) is unstable. These results are important both for physical and mathematical reasons, especially the ones related to the fixed point \(\phi_{*}^{1}=\phi_{*}^{3}=(-1,0,2)\), which is essential for the Big Rip singularity analysis. For this fixed point, since \(x_{3}=2\), the EoS becomes \(w_{\rm eff}=-1\). Thus, we shall show the important result that as the Universe approaches a Big Rip singularity, it accelerates prior to reaching the singularity.

#### vi.1.1 Finite-time Singularities of \(F(R)\) Cosmology and its Dynamical System

Since we are interested in studying the phase space of vacuum \(F(R)\) gravity near finite-time singularities, let us consider the cases for which the Hubble rate can be approximated as follows, \[H(t)\simeq h_{s}(t_{s}-t)^{-\beta}\,, \tag{505}\] with \(t_{s}\) signifying the time instance at which the singularity occurs, \(\beta\) a real parameter which determines the type of the finite-time singularity, and \(h_{s}\) a free parameter with dimensions \(\left(\text{GeV}\right)^{-\beta+1}\). Since the finite-time singularity at \(t=t_{s}\) is a future singularity, we have \(t_{s}>t\). Now, depending on the value of \(\beta\), the following types of singularities may develop: * For \(\beta>1\), a Type I (Big Rip) singularity occurs. * For \(0<\beta<1\), a Type III singularity occurs. * For \(-1<\beta<0\), a Type II (pressure) singularity occurs. * For \(\beta<-1\), a Type IV singularity occurs. As we will show later on in this section, the parameter \(\beta\) crucially affects the behavior of \(F(R)\) gravity near a cosmic singularity. Now let us investigate the form of the dynamical system of vacuum \(F(R)\) gravity near a finite-time singularity of the form (505). By expressing the cosmic time as a function of the \(e\)-foldings number, using the definition of the latter, \[N=\int^{t}H(t)dt\,, \tag{506}\] we get, \[t_{s}-t=\left(\frac{(\beta-1)(N-N_{c})}{h_{s}}\right)^{\frac{1}{1-\beta}}\,, \tag{507}\] with \(N_{c}\) being an integration constant which basically depends on the chosen initial conditions.
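Because the fixed points (504) are non-hyperbolic, the stability quoted above rests on numerics; a minimal sketch integrating the autonomous system (499) with \(m=0\), kicking the trajectories inside the invariant plane \(x_{3}=2\) that contains both fixed points, reproduces the stable/unstable behavior (the perturbation sizes and integration ranges are our own illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def vacuum_FR(N, x):
    # Vacuum F(R) dynamical system, Eq. (499), with m = 0
    x1, x2, x3 = x
    return [-4 + 3*x1 + 2*x3 - x1*x3 + x1**2,
             8 - 4*x3 + x1*x2 - 2*x2*x3 + 4*x2,
            -8 + 8*x3 - 2*x3**2]

big = lambda N, x: np.max(np.abs(x)) - 50.0   # stop runaway trajectories
big.terminal = True

for phi, label in [((-1.0, 0.0), "phi_*^1"), ((0.0, -1.0), "phi_*^2")]:
    x0 = [phi[0] + 0.02, phi[1] + 0.02, 2.0]  # perturb x1, x2 on the plane x3 = 2
    sol = solve_ivp(vacuum_FR, (0.0, 40.0), x0, events=big, rtol=1e-9, atol=1e-12)
    print(f"{label}: start {np.round(x0, 2)} -> N = {sol.t[-1]:.2f}, "
          f"end {np.round(sol.y[:, -1], 3)}")
```

The trajectory kicked off \(\phi_{*}^{1}\) relaxes back to \((-1,0,2)\), while the one kicked off \(\phi_{*}^{2}\) runs away in finite \(N\), consistent with the stability results of Ref. [850].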
For the form of the Hubble rate (505), the parameter \(m\) can be calculated as a function of the \(e\)-foldings number \(N\), and it has the following form, \[m=-\frac{\beta(\beta+1)}{(\beta-1)^{2}(N_{c}-N)^{2}}\,. \tag{508}\] The above is a universal relation which covers all the cases of cosmic singularities that may occur for the various values of the parameter \(\beta\) stemming from the Hubble rate (505). However, as \(N\) takes different limiting values for each type of singularity as \(t\) approaches \(t_{s}\), the parameter \(m\) will take distinct values too for the various types of singularities. Upon substituting \(m\) from Eq. (508) in Eq. (499), the dynamical system of vacuum \(F(R)\) gravity takes the form, \[\frac{dx_{1}}{dN}= -4+3x_{1}+2x_{3}-x_{1}x_{3}+x_{1}^{2}\,,\] \[\frac{dx_{2}}{dN}= -\frac{\beta(\beta+1)}{(\beta-1)^{2}(N_{c}-N)^{2}}+8-4x_{3}+x_{2}x_{1}-2x_{2}x_{3}+4x_{2}\,,\] \[\frac{dx_{3}}{dN}= \frac{\beta(\beta+1)}{(\beta-1)^{2}(N_{c}-N)^{2}}-8+8x_{3}-2x_{3}^{2}\,. \tag{509}\] The dynamical system of Eq. (509) is non-autonomous, so it is quite hard to tackle, since most theorems governing autonomous dynamical systems do not apply. However, as we shall see, in some limiting cases the dynamical system is rendered autonomous, and in the cases of interest the behavior of the phase space can be revealed near cosmological evolutions developing finite-time singularities. In the next subsections, we shall analyze the behavior of the \(F(R)\) gravity phase space near crushing and non-crushing types of singularities. In the same research line, we shall investigate which approximate \(F(R)\) gravity can generate such singularities, based on the fixed points of the phase space near singularities. The major outcome is the behavior of the phase space near Big Rip singularities: in the context of \(F(R)\) gravity, the Big Rip singularity always occurs while the Universe is accelerating.

#### vi.1.2 The Case of the Big Rip Singularity

We shall begin our analysis with the most severe type of a finite-time singularity, namely the Big Rip, which is a crushing type singularity, meaning that geodesic incompleteness occurs. From a practical and quantitative point of view, this case is the easiest to tackle, since the dynamical system near the singularity is rendered autonomous. For a full analysis of this study, see Ref. [853]. For the Big Rip singularity, the parameter \(\beta\) in the Hubble rate (505) takes values \(\beta>1\), so as the cosmic time approaches the singularity time \(t_{s}\), as we can see from Eq. (507), this corresponds to \(N\to\infty\). Therefore, in this case the parameter \(m\) of Eq. (508) approaches zero as the singularity time instance is approached, and remarkably, the dynamical system near the Big Rip singularity is rendered asymptotically autonomous. From a mathematical point of view, this is a great simplification, since it enables us to study the structure of the phase space in a semi-analytic way. This simplification only occurs for the Big Rip singularity, which is the most interesting case. Let us now analyze in some detail the phase space behavior of \(F(R)\) gravity near a Big Rip singularity. Since for a Big Rip singularity the parameter \(\beta\) takes values \(\beta>1\), we consider first the case that \(\beta\) takes large values, that is, \(\beta\gg 1\).
This case offers great simplifications, since in this limit the parameter \(m\) becomes \(m\simeq-\frac{1}{(N-N_{c})^{2}}\), and therefore the dynamical system can be integrated analytically. The solutions for \(x_{1}(N)\), \(x_{2}(N)\), and \(x_{3}(N)\) in this case are, \[x_{1}(N)= \,\frac{-\frac{3\sqrt{\pi}N_{c}e^{N}\text{Erf}\left(\sqrt{N-N_{c}}\right)}{\sqrt{N-N_{c}}}+2\text{e}^{N_{c}}N-2\text{e}^{N_{c}}N_{c}+2\text{e}^{N_{c}}}{\text{e}^{N_{c}}(2N_{c}-2N+1)-2\sqrt{\pi}\text{e}^{N}(N-N_{c})^{3/2}\text{Erf}\left(\sqrt{N-N_{c}}\right)}\] \[+\frac{\frac{2\sqrt{\pi}N_{c}^{2}e^{N}\text{Erf}\left(\sqrt{N-N_{c}}\right)}{\sqrt{N-N_{c}}}+\frac{2\sqrt{\pi}e^{N}\text{Erf}\left(\sqrt{N-N_{c}}\right)}{\sqrt{N-N_{c}}}}{\text{e}^{N_{c}}(2N_{c}-2N+1)-2\sqrt{\pi}\text{e}^{N}(N-N_{c})^{3/2}\text{Erf}\left(\sqrt{N-N_{c}}\right)}\] \[-\frac{2\sqrt{\pi}e^{N}N^{2}\text{Erf}\left(\sqrt{N-N_{c}}\right)}{\sqrt{N-N_{c}}}+\frac{4\sqrt{\pi}N_{c}e^{N}N\text{Erf}\left(\sqrt{N-N_{c}}\right)}{\sqrt{N-N_{c}}}\] \[+\frac{\sqrt{\pi}e^{N}N\text{Erf}\left(\sqrt{N-N_{c}}\right)}{\sqrt{N-N_{c}}\left(\text{e}^{N_{c}}(2N_{c}-2N+1)-2\sqrt{\pi}e^{N}(N-N_{c})^{3/2}\text{Erf}\left(\sqrt{N-N_{c}}\right)\right)}\,,\] \[x_{2}(N)= \,\frac{\mathcal{C}_{2}\text{e}^{N_{c}-N}(N_{c}-N)}{2\sqrt{\pi}(N-N_{c})^{3/2}\text{Erf}\left(\sqrt{N-N_{c}}\right)+\text{e}^{N_{c}-N}(2N_{c}-2N+1)}\] \[-\frac{\text{e}^{-N}(N_{c}-N)\left(\frac{\text{e}^{N_{c}}(8N_{c}-8N+1)}{2(N_{c}-N)^{2}}-\frac{4\sqrt{\pi}e^{N}\text{Erf}\left(\sqrt{N-N_{c}}\right)}{\sqrt{N-N_{c}}}\right)}{2\sqrt{\pi}(N-N_{c})^{3/2}\text{Erf}\left(\sqrt{N-N_{c}}\right)+\text{e}^{N_{c}-N}(2N_{c}-2N+1)}\,,\] \[x_{3}(N)= \,\frac{1}{2N_{c}-2N}+2\,, \tag{510}\] with \(\text{Erf}(x)\) being the error function, and furthermore, the parameter \(\mathcal{C}_{2}\) is a freely chosen integration constant. Asymptotically, in the case \(N\to\infty\), the solutions \(x_{1}(N)\), \(x_{2}(N)\), and \(x_{3}(N)\) behave as, \[x_{1}(N)\simeq-\frac{(N_{c}-N)^{2}}{N^{2}}\simeq-1\,,\quad x_{2}(N)\simeq-\frac{2}{N}\simeq 0\,,\quad x_{3}(N)\simeq 2\,. \tag{511}\] Remarkably, as the Big Rip singularity is approached asymptotically for \(N\to\infty\), the solutions approach the phase space point \((x_{1},x_{2},x_{3})=(-1,0,2)\). Recall from Eq. (504) that this is a fixed point, so the trajectories in the phase space of \(F(R)\) gravity approaching a Big Rip singularity are attracted to the stable de Sitter point. Hence, in a nutshell, the asymptotically autonomous dynamical system of \(F(R)\) gravity near a Big Rip singularity approaches the stable de Sitter point \(\phi_{*}^{1}\) of the de Sitter subspace of the \(F(R)\) gravity phase space. In general, this is not the rule in dynamical systems: proving that a solution of an autonomous dynamical system is also a solution of the corresponding non-autonomous dynamical system is formally quite difficult, but in our case we proved it through the analytic solutions we obtained. Now, apart from the fact that the dynamical system trajectories near a Big Rip singularity are attracted to a stable de Sitter attractor, it is important to highlight another major feature, specifically that the total EoS parameter near the singularity is \(w_{\text{eff}}\simeq-1\). This stems from the fact that the fixed point \(\phi_{*}^{1}\) has \(x_{3}=2\); hence, the total EoS parameter, which is given in Eq. (501), tends to the value \(w_{\text{eff}}\simeq-1\) as the Big Rip singularity is approached.
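The asymptotic result (511) can also be checked by integrating the non-autonomous system (509) directly in the \(\beta\gg 1\) limit, where \(m\simeq-1/(N-N_{c})^{2}\). In the sketch below (our own illustrative choices of \(N_{c}\) and of the initial data; the initial \(x_{3}\) is placed on the exact \(x_{3}\) branch of Eq. (510), which is decoupled from \(x_{1}\) and \(x_{2}\)), the trajectory relaxes to \((-1,0,2)\) and the EoS (501) approaches \(-1\):

```python
import numpy as np
from scipy.integrate import solve_ivp

N_c = 0.0                                   # integration constant (hypothetical choice)

def big_rip_system(N, x):
    # Non-autonomous F(R) system, Eq. (509), in the beta >> 1 limit m = -1/(N - N_c)^2
    x1, x2, x3 = x
    m = -1.0 / (N - N_c)**2
    return [-4 + 3*x1 + 2*x3 - x1*x3 + x1**2,
            m + 8 - 4*x3 + x1*x2 - 2*x2*x3 + 4*x2,
            -m - 8 + 8*x3 - 2*x3**2]

x3_exact = lambda N: 2.0 - 1.0 / (2.0 * (N - N_c))   # the x3 branch of Eq. (510)
N0, N1 = 1.0, 200.0
x0 = [-0.9, 0.5, x3_exact(N0)]                       # kick x1, x2 away from the fixed point
sol = solve_ivp(big_rip_system, (N0, N1), x0, rtol=1e-10, atol=1e-12)

print("final state:", np.round(sol.y[:, -1], 4))     # -> approximately (-1, 0, 2), cf. Eq. (511)
w_eff = -(2.0 * sol.y[2, -1] - 1.0) / 3.0            # total EoS, Eq. (501)
print(f"w_eff near the Big Rip: {w_eff:.4f}")        # -> approximately -1
```

This makes explicit, in a numerical way, that the Big Rip is approached along an accelerating, de Sitter-like track.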
This shows the important feature of our analysis: as the Big Rip singularity is approached, the Universe accelerates, i.e., the singularity is approached in an accelerating way. In order to further highlight the behavior of the \(F(R)\) gravity phase space near a Big Rip singularity, we shall analyze the dynamical system numerically. In FIG. 1, we present the vector flow and the trajectories of the phase space in the \(x_{1}-x_{2}\) plane for the dynamical system obtained for \(x_{3}=2\), using various initial conditions near the fixed point \((x_{1},x_{2})=(-1,0)\), with the red dot indicating the fixed point. In all cases, the trajectories with initial conditions near the Big Rip de Sitter fixed point values tend asymptotically to the fixed point, which is stable. This means that once the trajectories are attracted to the fixed point, they remain there permanently. Now let us address another important issue, namely that of the behavior of the \(F(R)\) gravity function near the Big Rip singularity. As is conceivable, only approximate forms of the \(F(R)\) gravity can be obtained, due to the complexity of the field equations. To this end, we shall utilize the values of the fixed point variables, namely \((x_{1},x_{2},x_{3})=(-1,0,2)\), and their definition in terms of the \(F(R)\) gravity function and the Hubble rate. Since asymptotically we have \(x_{1}\simeq-1\), we get, \[-\frac{\dot{F}^{\prime}}{F^{\prime}H}=-1\,, \tag{512}\] hence, by using Eq. (505) and also the fact that near the Big Rip singularity the Ricci scalar at leading order is, \[R(t)\simeq 12h_{s}^{2}(t_{s}-t)^{-2\beta}\,, \tag{513}\] we get the following solution for the derivative of the \(F(R)\) gravity function, \[F^{\prime}(R)\simeq\exp\left(\gamma R^{\frac{\beta-1}{2\beta}}\right)+\Lambda_{I}\,, \tag{514}\] with \(\Lambda_{I}\) being an arbitrary integration constant, and the parameter \(\gamma\) equal to, \[\gamma=\frac{h_{s}}{(\beta-1)(12h_{s}^{2})^{\frac{\beta-1}{2\beta}}}\,. \tag{515}\] Upon integration of Eq. (514) with respect to \(R\), we get the functional form of the \(F(R)\) gravity as the Big Rip singularity is approached, which is, \[F(R)\simeq\Lambda_{\rm I}\,R+\frac{2\beta\gamma^{-\frac{2\beta}{\beta-1}}\Gamma\left(\frac{2\beta}{\beta-1},-R^{\frac{\beta-1}{2\beta}}\gamma\right)}{\beta-1}+\Lambda_{\rm II}\,, \tag{516}\] with \(\Lambda_{\rm II}\) being an arbitrary integration constant. We can further simplify the \(F(R)\) gravity utilizing the fact that as \(t\to t_{s}\) the Ricci scalar blows up, \(R\to\infty\), and also due to the fact that \(\beta>1\), the \(F(R)\) gravity of Eq. (516) can be further approximated as, \[F(R)\simeq\Lambda_{\rm I}\,R-\frac{\left(2\beta\gamma^{\frac{\beta+1}{\beta-1}-\frac{2\beta}{\beta-1}}\right)R^{\frac{\beta+1}{2\beta}}\mathrm{e}^{\gamma R^{\frac{\beta-1}{2\beta}}}}{\beta-1}+\Lambda_{\rm II}\,. \tag{517}\] Figure 1: _The vector flow and trajectories in the \(x_{1}-x_{2}\) plane for the \(F(R)\) gravity dynamical system near a Big Rip singularity._ We note that for consistency, the parameter \(\beta\) must have the form \(\beta=2n/(2m+1)\), with \(n\) and \(m\) being positive integers. The resulting expression for \(F(R)\) gravity in Eq.
(517) is functionally similar to the ones found in Refs. [589, 590]; however, the general analytic treatment of finding the \(F(R)\) gravity near general forms of singularities can be quite demanding. To recapitulate, let us highlight the most important findings of this section. As a Big Rip singularity is approached, the \(F(R)\) gravity dynamical system becomes asymptotically autonomous, and its solutions tend asymptotically to the stable de Sitter fixed point of the autonomous \(F(R)\) gravity dynamical system for a de Sitter evolution. This is quite remarkable and it indicates two things: firstly, and most importantly, that as the Universe approaches a Big Rip singularity driven by an \(F(R)\) gravity, it does so in an accelerating way; simply stated, in \(F(R)\) gravity a Big Rip singularity is approached during the DE era. Secondly, that the autonomous de Sitter \(F(R)\) gravity dynamical system and the \(F(R)\) gravity dynamical system near a Big Rip singularity, which is rendered autonomous near the singularity, share the same fixed point solution, or stated differently, the same final attractors of the theory. Let us note that, in general, the addition of an \(R^{2}\) term in the \(F(R)\) gravity Lagrangian may significantly affect the development of the singularity, since \(R^{2}\) terms are known to remedy finite-time singularities [394, 397, 589, 590, 584]. In addition, the presence of an \(R^{2}\) term may provide a unified description of inflation with the DE era [328, 330, 394, 397, 584, 589, 590, 595]. However, the addition of an \(R^{2}\) term greatly obscures the mathematical appearance of the dynamical system and thus makes it impossible to reveal the behavior of the trajectories in this case. Finally, let us note that the correspondence between the Einstein and Jordan frames may reveal important relationships between the two frames; for example, a Big Rip singularity in the Jordan frame may correspond to a Type IV singularity in the Einstein frame [856].

#### vi.1.3 The Cases of Type III, Type II and Type IV Singularities

We shall now consider the cases of Type III, Type II, and Type IV singularities, in which cases, as the cosmic time tends to the value \(t_{s}\) at which the singularity occurs, we have \(N\to N_{c}\), see Eq. (507). Hence, in this case we take, \[\beta=\frac{2m}{2n+1}\,, \tag{518}\] with \(n\) and \(m\) being positive integers. Let us first analyze the case of the Type III singularity: as \(N\to N_{c}\), the parameter \(m\) defined in Eq. (508) diverges. The analytic treatment of both the Type III and Type II singularities is difficult to tackle; however, the case of the Type IV singularity is easier to handle when \(\beta\ll-1\). In this case, \(m\) becomes \(m\simeq-\frac{1}{(N-N_{c})^{2}}\), and thus the dynamical system (509) has solutions \(x_{1}(N)\), \(x_{2}(N)\), and \(x_{3}(N)\) identical to the ones in Eq. (510) for general \(N\); but for the case at hand, as \(N\to N_{c}\), the variables \(x_{1}(N)\), \(x_{2}(N)\), and \(x_{3}(N)\) become, \[x_{1}(N) \simeq \left(\frac{4N_{c}}{3}+12\right)(N-N_{c})+2\,,\] \[x_{2}(N) \simeq \frac{1}{2(N-N_{c})}-3-(\mathcal{C}_{2}-12)(N-N_{c})\,,\] \[x_{3}(N) = \frac{1}{2N_{c}-2N}+2\,, \tag{519}\] and therefore, as \(N\to N_{c}\), the trajectories of the dynamical system approach \((x_{1},x_{2},x_{3})=(2,-\infty,\infty)\) in the phase space.
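The cancellation of the divergences in Eq. (519), which guarantees that the Friedmann constraint (502) survives the Type IV limit, can be verified symbolically; a short sympy sketch:

```python
import sympy as sp

N, N_c, C2 = sp.symbols('N N_c C_2')
eps = N - N_c

# Leading-order Type IV solutions, Eq. (519)
x1 = (4*N_c/3 + 12)*eps + 2
x2 = 1/(2*eps) - 3 - (C2 - 12)*eps
x3 = 1/(2*N_c - 2*N) + 2

total = sp.simplify(x1 + x2 + x3)   # the 1/(N - N_c) poles of x2 and x3 cancel
print(total)                        # a polynomial in (N - N_c), no pole left
print(sp.limit(total, N, N_c))      # -> 1, i.e., the Friedmann constraint (502)
```

The poles of \(x_{2}\) and \(x_{3}\) cancel identically, and the finite parts add up to \(2-3+2=1\).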
It is noticeable that although \(x_{2}\) and \(x_{3}\) diverge as the singularity is approached, the Friedmann constraint is still satisfied, because the divergences in the variables \(x_{2}\) and \(x_{3}\) cancel. We can reveal the functional form of the \(F(R)\) gravity near the Type IV singularity simply by solving the differential equation which stems from the condition \(x_{1}=2\). Following the same procedure as in the Big Rip case, as \(N\to N_{c}\) we have at leading order, \[R\simeq 6h_{s}\beta(t_{s}-t)^{-\beta-1}\,, \tag{520}\] hence, the derivative of the \(F(R)\) gravity reads, \[F^{\prime}(R)\simeq\Lambda_{\rm III}+\exp\left(-\gamma_{I}R^{-\frac{1-\beta}{\beta+1}}\right)\,, \tag{521}\] with \(\Lambda_{\rm III}\) being an integration constant and the parameter \(\gamma_{I}\) given by, \[\gamma_{I}=\frac{2h_{s}}{(1-\beta)(6h_{s}|\beta|)^{-\frac{1-\beta}{\beta+1}}}\,. \tag{522}\] The integration of Eq. (521) yields the following functional form of the \(F(R)\) gravity, \[F(R)\simeq\Lambda_{\rm III}\,R+\frac{(\beta+1)\gamma_{I}^{-\frac{\beta+1}{\beta-1}}\Gamma\left(\frac{\beta+1}{\beta-1},\gamma_{I}R^{\frac{\beta-1}{\beta+1}}\right)}{1-\beta}+\Lambda_{\rm IV}\,, \tag{523}\] with \(\Lambda_{\rm IV}\) being an integration constant. We can further simplify the above functional form of the \(F(R)\) gravity by exploiting the fact that as \(N\to N_{c}\) the Ricci scalar tends to zero, because \(\beta\ll-1\); hence, we have at leading order, \[F(R)\simeq R+\Lambda_{\rm III}\,R+\frac{(\beta+1)\gamma_{I}^{-\frac{\beta+1}{\beta-1}}\Gamma\left(\frac{\beta+1}{\beta-1}\right)}{1-\beta}-\frac{(\beta+1)\gamma_{I}R^{\frac{2\beta}{\beta+1}}}{2\beta}+\Lambda_{\rm IV}\,. \tag{524}\] Furthermore, for the extremely soft Type IV singularity with \(\beta\ll-1\), we have, \[F(R)\simeq R+\Lambda_{\rm III}\,R-\frac{1}{\gamma_{I}}-\frac{\gamma_{I}R^{2}}{2}+\Lambda_{\rm IV}\,. \tag{525}\]

#### vi.1.4 The Case of non-vacuum \(F(R)\) Gravity

In the previous subsections we considered the dynamical system of \(F(R)\) gravity near finite-time singularities in the absence of matter perfect fluids. In this subsection, we shall consider the inclusion of perfect matter fluids in the dynamical system of \(F(R)\) gravity. As we shall evince, this complicates things in the dynamical system when finite-time singularities are approached. With regard to perfect fluids, we shall consider non-relativistic matter and radiation perfect fluids. In the presence of matter perfect fluids, the field equations of \(F(R)\) gravity for a FLRW metric read, \[0 = -\frac{F(R)}{2}+3\left(H^{2}+\dot{H}\right)F^{\prime}(R)-18\left(4H^{2}\dot{H}+H\ddot{H}\right)F^{\prime\prime}(R)+\kappa^{2}\rho_{\rm matter}\,, \tag{526}\] \[0 = \frac{F(R)}{2}-\left(\dot{H}+3H^{2}\right)F^{\prime}(R)+6\left(8H^{2}\dot{H}+4\dot{H}^{2}+6H\ddot{H}+\dddot{H}\right)F^{\prime\prime}(R)+36\left(4H\dot{H}+\ddot{H}\right)^{2}F^{\prime\prime\prime}(R)\] (527) \[+\kappa^{2}p_{\rm matter}\,,\] with \(\rho_{\rm matter}\) and \(p_{\rm matter}\) being the total effective energy density and the total effective pressure of all the matter fluids present. In this case we introduce the following dimensionless variables, \[x_{1}=-\frac{\dot{F}^{\prime}(R)}{F^{\prime}(R)H}\,,\quad x_{2}=-\frac{F(R)}{6F^{\prime}(R)H^{2}}\,,\quad x_{3}=\frac{R}{6H^{2}}\,,\quad x_{4}=\frac{\kappa^{2}\rho_{r}}{3F^{\prime}(R)H^{2}}\,,\quad x_{5}=\frac{\kappa^{2}\rho_{M}}{3F^{\prime}(R)H^{2}}\,, \tag{528}\] where \(\rho_{r}\) and \(\rho_{M}\) denote the energy density of radiation and of the matter fluid, respectively.
In the presence of matter fluids, using Eqs. (526)-(527) and the variables (528), the dynamical system (499) of vacuum \(F(R)\) gravity is generalized to the following form, \[\frac{dx_{1}}{dN}= -4+3x_{1}+2x_{3}-x_{1}x_{3}+x_{1}^{2}+3x_{5}+4x_{4}\,,\] \[\frac{dx_{2}}{dN}= 8+m-4x_{3}+x_{2}x_{1}-2x_{2}x_{3}+4x_{2}\,,\] \[\frac{dx_{3}}{dN}= -8-m+8x_{3}-2x_{3}^{2}\,,\] \[\frac{dx_{4}}{dN}= x_{4}x_{1}-2x_{4}x_{3}\,,\] \[\frac{dx_{5}}{dN}= x_{5}+x_{5}x_{1}-2x_{5}x_{3}\,, \tag{529}\] where the parameter \(m\) is defined in Eq. (500). In the case that the Hubble rate is given by Eq. (505), the parameter \(m\) is given in Eq. (508); however, the dynamical system now cannot be solved analytically, even in the limiting cases of the parameter \(m\) considered in the previous subsections. In addition, it is quite hard to reveal the behavior of the dynamical system even numerically, except for the variable \(x_{3}\), which behaves in an identical way as in the autonomous dynamical system case. Now let us stress for the first time the difference between a finite-time cosmological singularity and a singularity of the dynamical system variables. If some of the variables \(x_{i}\) of the dynamical system blow up at some finite time, this singular behavior does not necessarily indicate a finite-time cosmological singularity. Take for example the variable \(x_{1}\) for the vacuum \(F(R)\) gravity case: as the Big Rip cosmological singularity is approached, the variable \(x_{1}\) tends asymptotically to the value \(x_{1}\to-1\), whereas when the rest of the finite-time singularities are considered, the variable \(x_{1}\to 2\) as the singularities are approached. Thus, singularities in the dynamical system variables do not necessarily indicate a cosmological singularity, except for the variables \(x_{4}\) and \(x_{5}\) in Eq. (528), which depend explicitly on the energy density. Nevertheless, finite-time dynamical system singularities can offer insights toward the complete understanding of the behavior of the trajectories in the phase space. The most formal way to study finite-time dynamical system singularities is by studying the dominant balances of the dynamical system, based on a \(\psi\)-series approach near a finite-time dynamical system singularity [852]; see also Refs. [853, 857] for recent cosmological applications. This method only applies to autonomous dynamical systems, and thus cannot be applied to the \(F(R)\) gravity case or to other modified gravities. Interacting multifluid cosmologies, though, are relatively simple theories and their dynamical systems are in most cases autonomous, and thus they can be studied using the dominant balances method. This is the subject of the next subsection.

### Finite-time Cosmological and Dynamical Systems Singularities in Interacting Multifluids Cosmology

The dark sector is composed of the DM and DE fluids, which may or may not interact. The perfect fluids can be different in nature; for example, the DE fluid may be generated by some non-trivial underlying modified gravity. Without explicitly using the modified gravity, we may model the DE in an agnostic way by a dark fluid which interacts with the other dark sector fluid, namely that of DM, by adding an interaction among them. The interaction between the dark sectors, though, must be carefully chosen, because it may have considerable effects on the primordial matter density perturbations [858, 859].
Neither of the dark sector fluids, though, can interact with the baryonic perfect fluid, in order to avoid fifth force effects, which are not observed. We shall write the field equations corresponding to the three fluids and recast them as an autonomous dynamical system, in order to study its trajectories, its singularities, and the connection of finite-time dynamical system singularities with finite-time cosmological singularities. For the dynamical system singularities, we shall use the dominant balances method developed in [852], which we shall briefly review. For a flat FLRW metric, the three-fluid cosmological field equations, in a modified gravity cosmology context, read, \[H^{2}=\frac{\kappa^{2}}{3}\rho_{\rm tot}\,, \tag{530}\] with \(\rho_{\rm tot}=\rho_{\rm DM}+\rho_{\rm DE}+\rho_{b}\) denoting the total energy density of the cosmological fluids, where \(\rho_{\rm DM}\), \(\rho_{\rm DE}\), \(\rho_{b}\) are respectively the energy densities of the pressureless DM, the DE and the baryons. Now, differentiating Eq. (530) with respect to the cosmic time and using the conservation equation for the total fluid, we get, \[\dot{H}=-\frac{\kappa^{2}}{2}\left(\rho_{\rm tot}+p_{\rm tot}\right)\,, \tag{531}\] with \(p_{\rm tot}\) being the total pressure of the fluids, which basically consists of the pressure of the DE fluid, since DM and baryons are pressureless. For the DE sector, we shall consider a generalized EoS of the form [380], \[p_{\rm DE}=-\rho_{\rm DE}-A\kappa^{4}\rho_{\rm DE}^{2}\,, \tag{532}\] with \(A\) being a real dimensionless parameter. From the energy-momentum conservation equations, we have, \[\dot{\rho}_{b}+3H\rho_{b} = 0\,,\] \[\dot{\rho}_{\rm DM}+3H\rho_{\rm DM} = Q\,,\] \[\dot{\rho}_{\rm DE}+3H(\rho_{\rm DE}+p_{\rm DE}) = -Q\,, \tag{533}\] where \(Q\) denotes the interaction term among the DE and DM fluids, and the sign of \(Q\) determines which fluid gains energy at the expense of the other fluid, which loses energy. Here, \(Q>0\) implies that the energy flow takes place from DE to DM (i.e., the DE fluid loses energy and the DM fluid gains energy), and \(Q<0\) indicates that energy flows from DM to DE (i.e., the DM fluid loses energy and the DE fluid gains energy). We shall assume that \(Q\) has the following form, \[Q=3H(c_{1}\rho_{\rm DM}+c_{2}\rho_{\rm DE})\,, \tag{534}\] which has wide phenomenological support [860, 861, 464, 562, 565]. Note that \(c_{1}\) and \(c_{2}\) are the coupling parameters of the interaction model (534), and they are real constants. We can construct an autonomous dynamical system for the cosmological system at hand, based on the equations (530), (531), and (533), so we choose the dimensionless variables of the dynamical system as follows, \[x_{1}=\frac{\kappa^{2}\rho_{\rm DE}}{3H^{2}}\,,\quad x_{2}=\frac{\kappa^{2}\rho_{\rm DM}}{3H^{2}}\,,\quad x_{3}=\frac{\kappa^{2}\rho_{b}}{3H^{2}}\,,\quad z=\kappa^{2}H^{2}\,. \tag{535}\] The variables of the dynamical system \(x_{i}\) (\(i=1,2,3\)) satisfy the Friedmann equation, which in the case at hand reads, \[x_{1}+x_{2}+x_{3}=1\,. \tag{536}\] Also, the total EoS of the cosmological system, \(w_{\rm eff}=\frac{p_{\rm DE}}{\rho_{\rm tot}}\), is expressed in terms of the variables of the dynamical system in the following way, \[w_{\rm eff}=-x_{1}-3Ax_{1}^{2}z\,. \tag{537}\]
In view of the cosmological equations (530), (531), and (533), considered together with the dynamical system variables (535), we get, \[\frac{dx_{1}}{dN}= -\frac{\kappa^{2}Q}{3H^{3}}+9Ax_{1}^{2}z+3x_{1}x_{2}+3x_{1}x_{3}-9Ax_{1}^{3}z\,,\] \[\frac{dx_{2}}{dN}= \frac{\kappa^{2}Q}{3H^{3}}-3x_{2}+3x_{2}^{2}+3x_{2}x_{3}-9Ax_{1}^{2}x_{2}z\,,\] \[\frac{dx_{3}}{dN}= -3x_{3}+3x_{3}^{2}+3x_{3}x_{2}-9Ax_{1}^{2}x_{3}z\,,\] \[\frac{dz}{dN}= -3x_{2}z-3x_{3}z+9Ax_{1}^{2}z^{2}\,, \tag{538}\] which holds true for a general interaction term \(Q\). Also, we use the \(e\)-foldings number as the dynamical variable instead of the cosmic time. If we use the form (534) of the interaction term \(Q\), the \(Q\)-terms in the dynamical system (538) become, \[\frac{\kappa^{2}Q}{3H^{3}}=3c_{1}x_{2}+3c_{2}x_{1}\,, \tag{539}\] hence, the dynamical system (538) acquires two additional linear contributions in the dynamical system variables \(x_{1}\) and \(x_{2}\).

#### vi.2.1 Singularity Structure of Autonomous Dynamical Systems Using the Dominant Balances Technique

In this subsection we shall introduce the dominant balances technique for studying the finite-time singularities of polynomial autonomous dynamical systems of arbitrary dimension. This method was introduced in Ref. [852], and the theorems that apply may specify the nature of the finite-time dynamical system singularities. Extra work is required, though, in order to reveal whether a finite-time dynamical system singularity is a physical finite-time cosmological singularity. For brevity, we shall refer to the dominant balances framework as "dominant balance analysis". Consider a general \(n\)-dimensional dynamical system of the following form, \[\dot{x}=f(x)\,, \tag{540}\] with \(x\) being some real vector of \(R^{n}\), and \(f(x)=(f_{1}(x),f_{2}(x),...,f_{n}(x))\) some vector containing polynomials of \(x\). A finite-time singularity developed by this dynamical system is some sort of a moving singularity, which is always related to and determined by the initial conditions of the vector \(x\). We can quantify the terminology of a moving singularity, which is a singularity of the form \((t-t_{c})^{-p}\), with \(t_{c}\) being simply an integration constant. An example is given by the differential equation \(\frac{dy}{dx}=\frac{y^{2}}{x^{2}}\), which has the solution \(y=(\frac{1}{x}-c)^{-1}\), with \(c\) being an integration constant. This solution develops a singularity which depends on the chosen initial conditions, at \(\frac{1}{x}=c\), and this justifies why it is called a moving singularity. Now, finding whether the autonomous dynamical system (540) develops finite-time singularities is predominantly based on the decomposition, or simply truncation, of the function \(f\) into several possible dominant and subdominant parts, which we will denote as \(\hat{f}(x)\) and \(\check{f}(x)\), so in effect the dominant part of the dynamical system yields, \[\dot{x}=\hat{f}(x)\,. \tag{541}\] The dominant terms of the function \(f(x)\) can be chosen in several ways; the dominant vector \(\hat{f}(x)\) is built by keeping a subset of the polynomial terms of \(f(x)\) each time.
Each component \(x_{i}\), \(i=1,2,...,n\) of the vector \(x\) is written as follows, \[x_{1}(\tau)=a_{1}\tau^{p_{1}},\,\,\,x_{2}(\tau)=a_{2}\tau^{p_{2}},\,\,\,...,\,x_{n}(\tau)=a_{n}\tau^{p_{n}}\,, \tag{542}\] where we require that the full dynamical system solution \(x\) can be written as a \(\psi\)-series in terms of \(\tau=t-t_{c}\), with \(t_{c}\) being the time instance at which the singularity occurs. Upon substituting the forms of the solutions \(x_{i}\) of Eq. (542) in Eq. (541), we equate the powers at each order of the resulting polynomials, for each distinct choice of \(\hat{f}\). By using this procedure, we will be able to determine the parameters \(p_{i}\) (\(i=1,2,...,n\)), keeping in mind that the only accepted solutions are rational numbers (fractions or integers). Accordingly, we form out of the \(p_{i}\) (\(i=1,2,...,n\)) the new vector \(\vec{p}=(p_{1},p_{2},...,p_{n})\), which is an important ingredient of the method we apply. Next, we find the parameters \(a_{i}\) (\(i=1,2,...,n\)), which can be determined in a unique way by simply equating the coefficients of the polynomials that result from the dominant part \(\hat{f}\). Using these coefficients we form the vector \(\vec{a}=(a_{1},a_{2},a_{3},....,a_{n})\), which is called the dominant balance. For the dominant balance analysis, only non-zero values of the balances are allowed, which may be real or even complex numbers. The two vectors \(\vec{a}\) and \(\vec{p}\) form the balance \((\vec{a},\vec{p})\). Now following the theorem developed by Goriely and Hyde in [852], we have that if the dominant balance contains complex entries, then the autonomous dynamical system under consideration (540) develops no finite-time singularities. In the case that the dominant balance entries are real, then in principle finite-time singularities occur in the dynamical system, meaning that some trajectories in the phase space will certainly blow up for some initial conditions. But the question is whether the initial conditions that drive the singular trajectories are generic or correspond to a limited set of initial conditions. In order to settle this question, extra criteria need to be checked. To this end, we construct the Kovalevskaya matrix \(K\), defined in the following way, \[K=\left(\begin{array}{ccccc}\frac{\partial\hat{f}_{1}}{\partial x_{1}}&\frac{\partial\hat{f}_{1}}{\partial x_{2}}&\frac{\partial\hat{f}_{1}}{\partial x_{3}}&...&\frac{\partial\hat{f}_{1}}{\partial x_{n}}\\ \frac{\partial\hat{f}_{2}}{\partial x_{1}}&\frac{\partial\hat{f}_{2}}{\partial x_{2}}&\frac{\partial\hat{f}_{2}}{\partial x_{3}}&...&\frac{\partial\hat{f}_{2}}{\partial x_{n}}\\ \frac{\partial\hat{f}_{3}}{\partial x_{1}}&\frac{\partial\hat{f}_{3}}{\partial x_{2}}&\frac{\partial\hat{f}_{3}}{\partial x_{3}}&...&\frac{\partial\hat{f}_{3}}{\partial x_{n}}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \frac{\partial\hat{f}_{n}}{\partial x_{1}}&\frac{\partial\hat{f}_{n}}{\partial x_{2}}&\frac{\partial\hat{f}_{n}}{\partial x_{3}}&...&\frac{\partial\hat{f}_{n}}{\partial x_{n}}\\ \end{array}\right)-\left(\begin{array}{ccccc}p_{1}&0&0&\cdots&0\\ 0&p_{2}&0&\cdots&0\\ 0&0&p_{3}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&0\\ 0&0&0&\cdots&p_{n}\\ \end{array}\right)\,, \tag{543}\] and it should be evaluated at each non-zero dominant balance \(\vec{a}\) found earlier in the method. The eigenvalues of the Kovalevskaya matrix should then be of the form \((-1,r_{2},r_{3},...,r_{n})\). Now in the case that the \(r_{2},r_{3},...,r_{n}\) are positive, the dynamical system of Eq.
(540) has general trajectories which will definitely develop a finite-time dynamical system singularity; in this case, all initial conditions drive the dynamical system to a finite-time dynamical system singularity. On the other hand, if one of the eigenvalues \(r_{2},r_{3},...,r_{n}\) turns out to be negative, then only a limited set of initial conditions will lead the trajectories to a finite-time singularity, since the solution found is not general. It is noticeable that if a singularity is developed by the dynamical system, then the singularity occurs in the same orthant as \(\vec{a}\) in the \(n\)-dimensional phase space spanned by the variables \(x_{i}\). Practically, this means that if the dominant balance value \(a_{2}\) is negative, then the singularity occurs in the orthant \(x_{2}<0\) of the \(R^{n}\) space of the dynamical system variables. Let us here provide a simple example of how the dominant balance method works, and to this end consider the dynamical system, \[\dot{x}_{1}=x_{1}(\alpha+bx_{2})\,,\quad\dot{x}_{2}=cx_{1}^{2}+dx_{2}\,, \tag{544}\] with \(b,c>0\). From the r.h.s. of the above dynamical system, we can form the following 2-dimensional vector field \(f(x_{i})\), \[f(x_{i})=\left(\begin{array}{c}x_{1}(\alpha+b\,x_{2})\\ c\,x_{1}^{2}+dx_{2}\\ \end{array}\right)\,. \tag{545}\] By applying the dominant balances method, we find easily that the only truncation which is dominant, denoted as \(\hat{f}(x_{i})\), is the following, \[\hat{f}(x_{i})=\left(\begin{array}{c}b\,x_{1}\,x_{2}\\ c\,x_{1}^{2}\\ \end{array}\right)\,, \tag{546}\] and note that this truncation results in an acceptable dominant balance \((\vec{a},\vec{p})\). Specifically, the only balances found, denoted as \((\vec{a}_{1},\vec{p}_{1})\) and \((\vec{a}_{2},\vec{p}_{1})\), are the following, \[\vec{a}_{1}=\left(\frac{1}{\sqrt{b\,c}},-\frac{1}{b}\right)\,,\quad\vec{a}_{2}=\left(-\frac{1}{\sqrt{b\,c}},-\frac{1}{b}\right)\,,\quad\vec{p}_{1}=(-1,-1)\,. \tag{547}\] Accordingly, the evaluation of the Kovalevskaya matrix \(K\) leads to the eigenvalues \(r_{1}=-1\) and \(r_{2}=2\), hence, due to the main theorem related to the Kovalevskaya matrix, we find that the autonomous dynamical system (544) contains solutions that for general initial conditions are attracted to a finite-time singularity.

#### vi.1.2 Dominant Balance Analysis of Multifluid Cosmology, Dynamical System Finite-time Singularities versus Physical Finite-time Singularities

The analysis that follows was first considered in Ref. [853]. In general, the functional form of a cosmological dynamical system variable may reveal whether a dynamical system finite-time singularity may or may not be related to a physical finite-time singularity. Consider the variables (535): apparently, if for example the variable \(z\) is finite in terms of \(N\), then the quantity \(H^{2}\) is finite. In such a case, if one of the variables \(x_{i}\) (\(i=1,2,3\)) diverges, then this would simply imply that one of the following energy densities \(\rho_{\rm DE}\), \(\rho_{\rm DM}\), or \(\rho_{b}\) diverges. In turn, such a situation would imply the occurrence of a Big Rip or a Type II singularity in general; however, in reality things are more complicated than it seems.
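Before analyzing the cosmological system in detail, it is instructive to verify the worked example (544) symbolically. The following minimal sympy sketch (our own consistency check) reproduces the balances (547) and the Kovalevskaya eigenvalues \((-1,2)\).

```python
import sympy as sp

b, c = sp.symbols('b c', positive=True)
a1, a2, p1, p2 = sp.symbols('a1 a2 p1 p2')

# Exponent matching for the dominant truncation (546):
# x1' = b x1 x2  ->  p1 - 1 = p1 + p2 ;  x2' = c x1^2  ->  p2 - 1 = 2 p1
psol = sp.solve([sp.Eq(p1 - 1, p1 + p2), sp.Eq(p2 - 1, 2*p1)], [p1, p2])
print(psol)  # {p1: -1, p2: -1}, i.e. the vector p of Eq. (547)

# Coefficient matching: p_i a_i equals the dominant term coefficients
asol = sp.solve([sp.Eq(psol[p1]*a1, b*a1*a2),
                 sp.Eq(psol[p2]*a2, c*a1**2)], [a1, a2], dict=True)
balances = [s for s in asol if s[a1] != 0]   # discard the trivial balance
print(balances)  # a1 = +-1/sqrt(b c), a2 = -1/b, as in Eq. (547)

# Kovalevskaya matrix (543) for the truncation, evaluated on each balance
X1, X2 = sp.symbols('X1 X2')
K = sp.Matrix([b*X1*X2, c*X1**2]).jacobian([X1, X2]) \
    - sp.diag(psol[p1], psol[p2])
for s in balances:
    print(K.subs({X1: s[a1], X2: s[a2]}).eigenvals())  # {-1: 1, 2: 1}
```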
Considering the Friedmann constraint (536), which is satisfied by all the variables \(x_{i}\) (\(i=1,2,3\)) for all the \(e\)-foldings numbers, if finite-time singularities occur in the variables \(x_{i}\) (\(i=1,2,3\)), then these must occur in such a way that the infinities cancel and the Friedmann constraint is formally satisfied. From a physics standpoint, we first consider the case that the baryon density is finite; thus we shall seek singular behaviors in the dark sector variables \(x_{1}\) and \(x_{2}\), which must occur in such a way that the singularities cancel when their sum is considered. For the corresponding dominant balances, this would simply imply that \(a_{1}=-a_{2}\) and furthermore \(p_{1}=p_{2}\). If a finite-time singularity is found in \(z\), then the singularities should also occur in the dark sector variables, so that the Friedmann constraint remains finite. Also note that if \(z\) is singular, this indicates a physical singularity for sure, namely a Big Rip or a Type III singularity. Furthermore, let us note that in all cases the variable \(x_{3}\) must be positive. Apparently, the easiest case to handle mathematically is the occurrence of a finite-time singularity in the variable \(z\), which would probably signify a pressure singularity. In conclusion, the only case in which a singularity can be verified for sure is when the variable \(z\) diverges at a finite time, and in turn this indicates the presence of a physical finite-time singularity. The type of physical cosmological singularity that occurs in \(z\) depends strongly on the dependence of the Hubble rate on the \(e\)-foldings number \(N\). Let us see this by using the Hubble rate (505), which, expressed in terms of the \(e\)-foldings number \(N\), takes the form, \[H(N)\sim(N-N_{c})^{\frac{\beta}{\beta-1}}\,. \tag{548}\] In the case of a Big Rip singularity (\(\beta>1\)), as \(t\to t_{s}\) we have that \(N\to\infty\), while for a Type III singularity (\(0<\beta<1\)) the \(e\)-foldings number tends to the finite value \(N_{c}\); in both cases the Hubble rate diverges as the singularity is approached, \(t\to t_{s}\). For both the Type II and Type IV cases, the Hubble rate \(H(N)\) tends to zero. Thus a singular behavior of the dynamical system variable \(z\) indicates either a Big Rip or a Type III physical cosmological singularity. Hence, with certainty, the only case that a dynamical system singularity indicates a physical cosmological finite-time singularity is when \(z\) diverges. It is noticeable that when the Hubble rate diverges, in which case a singularity in \(z\) is developed, the term \(1/H(N)^{2}\) tends to zero, when \(N\to\infty\) in the Big Rip case and when \(N\to N_{c}\) for the case of the Type III singularity. Therefore, the dark sector dynamical system variables \(x_{1}\) and \(x_{2}\) are finite when a Big Rip or a Type III singularity is developed. We used this characteristic example just to make the point stronger, although the Hubble rate (505) is not a solution of the cosmological system. The most difficult dynamical system singularities to interpret are singularities occurring in one of the dark sector dynamical system variables \(x_{1}\) and \(x_{2}\), since it is hard to know if a physical singularity occurs when these two blow up. The only certain thing about the latter case is that the singular behavior must definitely cancel when the sum \(x_{1}+x_{2}\) is considered.
Below we summarize the main outcomes of the above discussion when the dynamical system (538) is considered:

* _Singularity in \(z\):_ This indicates a physical singularity, either a Big Rip or Type III.
* _Singularity in \(x_{1}\) and \(x_{2}\):_ These are dynamical system singularities; we must have \(a_{1}=-a_{2}\) and \(p_{1}=p_{2}\).
* _Singularity in \(x_{3}\):_ This is not possible.
* _Constraints on_ \(x_{3}\)_:_ \(x_{3}>0\) always and non-singular.
* _Singularities in_ \(x_{1}\) _and_ \(x_{2}\)_: If_ \(z\) _develops a singularity, then_ \(\rho_{\rm DE}\) _and_ \(\rho_{\rm DM}\) _diverge in such a way that_ \(\rho_{\rm DM}/H^{2}\) _and_ \(\rho_{\rm DE}/H^{2}\) _are singular, and these two must satisfy_ \(\rho_{\rm DE}=-\rho_{\rm DM}\)_. If the variable_ \(z\) _is regular, while_ \(x_{1}\) _and_ \(x_{2}\) _are singular, then_ \(\rho_{\rm DE}\) _and_ \(\rho_{\rm DM}\) _are not necessarily singular, but they must satisfy_ \(\rho_{\rm DE}=-\rho_{\rm DM}\)_._

We note that for all the above cases, the Friedmann constraint \(x_{1}+x_{2}+x_{3}=1\) must hold true. Now, in the following, we shall analyze the dominant balances and the corresponding singularities of the multifluid system we considered previously. Before doing so, we make some clarifications. First, we shall assume that \(x_{1}(N)\), \(x_{2}(N)\), \(x_{3}(N)\), and \(z(N)\), near the possible finite-time dynamical system singularities, behave as follows at leading order, \[x_{1}(N)=a_{1}(N-N_{c})^{p_{1}}\,,\quad x_{2}(N)=a_{2}(N-N_{c})^{p_{2}}\,,\quad x_{3}(N)=a_{3}(N-N_{c})^{p_{3}}\,,\quad z(N)=a_{4}(N-N_{c})^{p_{4}}\,, \tag{549}\] hence, we shall seek dominant balances \((\vec{a},\vec{p})\), with the vectors \(\vec{a}\) and \(\vec{p}\) having the form, \[\vec{a}=(a_{1},a_{2},a_{3},a_{4})\,,\quad\vec{p}=(p_{1},p_{2},p_{3},p_{4})\,. \tag{550}\] We can rewrite the dynamical system (538) in the form \(\frac{d\vec{x}}{dN}=f(\vec{x})\), with \(\vec{x}=(x_{1},x_{2},x_{3},z)\), and the function \(f(x_{1},x_{2},x_{3},z)\) is defined as, \[f(x_{1},x_{2},x_{3},z)=\left(\begin{array}{c}-3c_{1}x_{2}-3c_{2}x_{1}+9Ax_{1}^{2}z^{2}+3x_{1}x_{2}+3x_{1}x_{3}-9Azx_{1}^{3}\\ 3c_{1}x_{2}+3c_{2}x_{1}-3x_{2}+3x_{2}^{2}+3x_{2}x_{3}-9Ax_{1}^{2}x_{2}z\\ -3x_{3}+3x_{3}^{2}+3x_{3}x_{2}-9Ax_{1}^{2}x_{3}z\\ -3x_{2}z-3x_{3}z+9Ax_{1}^{2}z^{2}\end{array}\right)\,. \tag{551}\] Therefore we shall seek all the consistent truncations of the vector \(f(x_{1},x_{2},x_{3},z)\) defined in Eq. (551), and using the method of dominant balances we shall examine the behavior of the dynamical system near finite-time singularities.

#### v.1.3 Consistent Truncation I

One truncation of (551) has the following form, \[\hat{f}(x_{1},x_{2},x_{3},z)=\left(\begin{array}{c}9Ax_{1}^{2}(N)z^{2}(N)\\ 3x_{2}^{2}(N)\\ 3x_{3}^{2}(N)\\ 9Ax_{1}^{2}(N)z^{2}(N)\end{array}\right)\,, \tag{552}\] hence, by following the procedure we developed previously, we find the following solution for the vector \(\vec{p}\), \[\vec{p}=\left(-\frac{1}{3},-1,-1,-\frac{1}{3}\right)\,.
\tag{553}\] In the same way, for the above \(\vec{p}\), we find the following solutions \(\vec{a}_{1}\) and \(\vec{a}_{2}\) \[\vec{a}_{1}=\left(-\frac{1}{3A^{1/3}},-\frac{1}{3},-\frac{1}{3},-\frac{1}{3A^{1/3}}\right)\qquad\mbox{and}\qquad\vec{a}_{2}=\left(\frac{1}{3A^{1/3}},-\frac{1}{3},-\frac{1}{3},\frac{1}{3A^{1/3}}\right), \tag{554}\] and the corresponding Kovalevskaya matrix for the truncation (552) has the following form, \[K=\left(\begin{array}{cccc}18Ax_{1}z^{2}+\frac{1}{3}&0&0&18Ax_{1}^{2}z\\ 0&6x_{2}+1&0&0\\ 0&0&6x_{3}+\frac{2}{3}&0\\ 18Ax_{1}z^{2}&0&0&18Ax_{1}^{2}z+\frac{1}{3}\end{array}\right)\,. \tag{555}\] We can evaluate the Kovalevskaya matrix for each of the vectors \(\vec{a}_{i}\) (\(i=1,2\)) appearing in Eq. (554); for \(\vec{a}_{1}\), the Kovalevskaya matrix has the following form, \[K(\vec{a}_{1})=\left(\begin{array}{cccc}-\frac{1}{3}&0&0&-\frac{2}{3}\\ 0&-1&0&0\\ 0&0&-\frac{4}{3}&0\\ -\frac{2}{3}&0&0&-\frac{1}{3}\end{array}\right)\,, \tag{556}\] and the corresponding eigenvalues for the vector \(\vec{a}_{1}\) are, \[(r_{1},r_{2},r_{3},r_{4})=\left(-1,-\frac{4}{3},-1,\frac{1}{3}\right)\,. \tag{557}\] The Kovalevskaya matrix for the vector \(\vec{a}_{2}\) has the same form as for the vector \(\vec{a}_{1}\), and also the same eigenvalues. In effect, the truncation (552) shows us that the Kovalevskaya matrix contains, apart from \(-1\), other negative eigenvalues, meaning that the singularities do not occur for a general set of initial conditions.

#### v.1.4 Consistent Truncation II

We can form the second consistent truncation of (551) as follows, \[\hat{f}(x_{1},x_{2},x_{3},z)=\left(\begin{array}{c}9Ax_{1}^{2}(N)z^{2}(N)\\ 3x_{2}(N)x_{3}(N)\\ 3x_{2}(N)x_{3}(N)\\ 9Ax_{1}^{2}(N)z^{2}(N)\end{array}\right)\,, \tag{558}\] which results in the following vector \(\vec{p}\), \[\vec{p}=\left(-\frac{1}{3},-1,-1,-\frac{1}{3}\right)\,, \tag{559}\] and now there is only one vector \[\vec{a}_{1}=\left(-\frac{1}{3A^{1/3}},-\frac{1}{3},-\frac{1}{3},-\frac{1}{3A^{1/3}}\right). \tag{560}\] In the same way, for the consistent truncation (558), the Kovalevskaya matrix takes the following form, \[K=\left(\begin{array}{cccc}18Ax_{1}z^{2}+\frac{1}{3}&0&0&18Ax_{1}^{2}z\\ 0&3x_{3}+1&3x_{2}&0\\ 0&3x_{3}&3x_{2}+1&0\\ 18Ax_{1}z^{2}&0&0&18Ax_{1}^{2}z+\frac{1}{3}\end{array}\right)\,, \tag{561}\] which for \(\vec{a}_{1}\) leads to \[K(\vec{a}_{1})=\left(\begin{array}{cccc}-\frac{1}{3}&0&0&-\frac{2}{3}\\ 0&0&-1&0\\ 0&-1&0&0\\ -\frac{2}{3}&0&0&-\frac{1}{3}\end{array}\right)\,, \tag{562}\] and its eigenvalues are, \[(r_{1},r_{2},r_{3},r_{4})=\left(-1,-1,1,\frac{1}{3}\right)\,. \tag{563}\] According to the dominant balances, the dynamical system (538) develops finite-time singularities, and further investigation is required in order to see whether these occur for general initial conditions or for limited ones. This is revealed by the form of the eigenvalues (563): since \(r_{2}<0\), the singularities do not occur for a general set of initial conditions, but only for a limited set of initial conditions. Furthermore, the singular behavior of the three phase space variables \(x_{1}\), \(x_{2}\), and \(x_{3}\) is not of the required form, as can be seen from \(\vec{p}\) in Eq. (559): at leading order the exponents of \(x_{1}\), \(x_{2}\), and \(x_{3}\) are not of the same order, hence, the Friedmann constraint of Eq. (536) is not satisfied in this case.
Hence, in this case, the limited set which yields singular solutions does not yield physically acceptable solutions. Therefore, the mathematically consistent truncation (558) does not yield singular solutions which are generated by a general set of initial conditions.

#### v.1.5 Consistent Truncation III

We can form a third consistent truncation of (551), which has the following form, \[\hat{f}(x_{1},x_{2},x_{3},z)=\left(\begin{array}{c}9Ax_{1}^{2}(N)z^{2}(N)\\ 3c_{2}x_{1}(N)\\ 3x_{3}^{2}(N)\\ 9Ax_{1}^{2}(N)z^{2}(N)\end{array}\right)\,, \tag{564}\] which results in \(\vec{p}\) given by, \[\vec{p}=\left(-\frac{1}{3},\frac{2}{3},-1,-\frac{1}{3}\right)\,, \tag{565}\] while the vectors \(\vec{a}_{i}\) (\(i=1,2\)) are, \[\vec{a}_{1}=\left(-\frac{1}{3A^{1/3}},-\frac{3c_{2}}{2A^{1/3}},-\frac{1}{3},-\frac{1}{3A^{1/3}}\right)\qquad\mbox{and}\qquad\vec{a}_{2}=\left(\frac{1}{3A^{1/3}},-\frac{3c_{2}}{2A^{1/3}},-\frac{1}{3},\frac{1}{3A^{1/3}}\right), \tag{566}\] and the corresponding Kovalevskaya matrix has the following form, \[K=\left(\begin{array}{cccc}18Ax_{1}z^{2}+\frac{1}{3}&0&0&18Ax_{1}^{2}z\\ 3c_{2}&-\frac{2}{3}&0&0\\ 0&0&6x_{3}+1&0\\ 18Ax_{1}z^{2}&0&0&18Ax_{1}^{2}z+\frac{1}{3}\end{array}\right)\,. \tag{567}\] For the vector \(\vec{a}_{1}\), the Kovalevskaya matrix takes the following form, \[K(\vec{a}_{1})=\left(\begin{array}{cccc}-\frac{1}{3}&0&0&-\frac{2}{3}\\ 3c_{2}&-\frac{2}{3}&0&0\\ 0&0&-1&0\\ -\frac{2}{3}&0&0&-\frac{1}{3}\end{array}\right)\,, \tag{568}\] and its eigenvalues are, \[(r_{1},r_{2},r_{3},r_{4})=\left(-1,-1,-\frac{2}{3},\frac{1}{3}\right)\,. \tag{569}\] Hence, this truncation leads to a situation with degeneracies, so let us further analyze the case \(\vec{a}_{1}\). As it turns out, the singularity analysis is identical to that of the truncation (558). Therefore, no general initial conditions can lead to singularities in the dynamical system in this case either, and furthermore, the singular solutions are non-physical since the Friedmann constraint (536) is not satisfied. The conclusion of this section is that the dominant balance analysis does not yield any singular solutions which are generated by general initial conditions. Thus, it is rather compelling to present in more detail the phase space of the dynamical system, which we do in the next subsection.

#### vi.1.5 Phase Space Analysis of Multifluid Cosmological Model

Now we shall analytically investigate the phase space of the dynamical system in order to further understand the various structures that emerge in it. We will focus on finding the fixed points of the dynamical system and examine their stability against linear perturbations, by using the well-known Hartman-Grobman theorem, when it applies. The standard way to analyze autonomous non-linear dynamical systems is based on linearization techniques, essentially quantified by the Hartman-Grobman linearization theorem. The latter reveals the stability of the various fixed points and further indicates whether non-trivial topological structures exist in the phase space of the dynamical system, but only in the case when the fixed points are hyperbolic. Let us describe in brief the theoretical approach we shall use. Consider the vector field \(\Phi(t)\in R^{n}\), which satisfies the differential flow, \[\frac{d\Phi}{dt}=g(\Phi(t))\,, \tag{570}\] with \(g(\Phi(t))\) being a locally Lipschitz continuous map \(g:R^{n}\to R^{n}\).
Let the fixed points of the dynamical system (570) be denoted as \(\phi_{*}\), and let \(\mathcal{J}(g)\) denote the Jacobian matrix which corresponds to the linearized version of the dynamical system (570) near some fixed point, with entries, \[\mathcal{J}_{ij}=\frac{\partial f_{i}}{\partial x_{j}}\,. \tag{571}\] In the case of a fixed point being hyperbolic, the Jacobian matrix evaluated at the fixed point reveals its stability. Recall that a fixed point is called hyperbolic only if the spectrum \(\sigma(\mathcal{J})\) of the eigenvalues of the Jacobian matrix consists of elements \(e_{i}\) satisfying the condition \(\mathrm{Re}(e_{i})\neq 0\). In this case, following the Hartman-Grobman theorem, the linearized version of the dynamical system, \[\frac{d\Phi}{dt}=\left.\mathcal{J}(g)(\Phi)\right|_{\Phi=\phi_{*}}\left(\Phi-\phi_{*}\right), \tag{572}\] is topologically equivalent to the dynamical system of Eq. (570) in the vicinity of the hyperbolic fixed points \(\phi_{*}\). Specifically, the Hartman-Grobman theorem guarantees the existence of a homeomorphism \(h:U\to R^{n}\), in a neighborhood \(U\) of the hyperbolic fixed point \(\phi_{*}\), with \(U\) being an open set. This homeomorphism generates the flow \(\frac{dh(u)}{dt}\), satisfying, \[\frac{dh(u)}{dt}=\mathcal{J}h(u)\,. \tag{573}\] In view of the Hartman-Grobman theorem, the flows in Eqs. (573) and (570) are homeomorphic. Now regarding the stability of the fixed point \(\phi_{*}\), the Hartman-Grobman theorem predicts that in the case that the eigenvalues of the Jacobian matrix have negative real parts, that is \(\mathrm{Re}\left(\sigma(\mathcal{J}(g))\right)<0\), the fixed point is asymptotically stable. In all other cases, the fixed point leads to instabilities in the phase space. We shall now apply the Hartman-Grobman theorem to the dynamical system of Eq. (538) with the interaction term being chosen as in Eq. (534). We shall calculate the fixed points and examine their stability against linear perturbations. Our analysis indicates that for general non-zero values of \(c_{2}\), the analytic study of the fixed points is very complicated and leads to extended and cumbersome expressions, so we shall focus on the case \(c_{2}=0\). Interactions like the one in (534), with \(c_{2}=0\), are frequently used in the literature [475]. The case with \(c_{2}\neq 0\) can be dealt with numerically, and in TABLE V, we gather our results, for various signs of the free parameters \(c_{1}\) and \(c_{2}\) of the interaction function (534). For all the cases appearing in TABLE V, the distinct and physically interesting fixed points are unstable, with the result being independent of the value of the free parameter \(A\).

\begin{table} \begin{tabular}{|c c c|} \hline **Case No.** & **Region of \(c_{1}\) and \(c_{2}\)** & **Nature of Fixed Points** \\ \hline \hline Case I & \(c_{1}>0\), \(c_{2}>0\), for every \(A\) & Unstable \\ Case II & \(c_{1}>0\), \(c_{2}<0\), for every \(A\) & Unstable \\ Case III & \(c_{1}<0\), \(c_{2}>0\), for every \(A\) & Unstable \\ Case IV & \(c_{1}<0\), \(c_{2}<0\), for every \(A\) & Unstable \\ \hline \end{tabular} \end{table} Table 5: Stability of the Fixed Points of the Multifluid Dynamical System (538) for general values of \(c_{1}\) and \(c_{2}\).
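As an illustration of the numerical treatment, the sketch below (our own; the values \(A=1\), \(c_{1}=1/2\), \(c_{2}=0\) are assumed purely for demonstration) evaluates the Jacobian of the system (551) at the baryon-dominated point \((x_{1},x_{2},x_{3},z)=(0,0,1,0)\), which reappears below as \(\phi_{3}\), and classifies its stability through the eigenvalues.

```python
import sympy as sp

x1, x2, x3, z = sp.symbols('x1 x2 x3 z')
A, c1, c2 = sp.Integer(1), sp.Rational(1, 2), sp.Integer(0)  # demo values only

# Vector field of Eq. (551)
f = sp.Matrix([
    -3*c1*x2 - 3*c2*x1 + 9*A*x1**2*z**2 + 3*x1*x2 + 3*x1*x3 - 9*A*z*x1**3,
    3*c1*x2 + 3*c2*x1 - 3*x2 + 3*x2**2 + 3*x2*x3 - 9*A*x1**2*x2*z,
    -3*x3 + 3*x3**2 + 3*x3*x2 - 9*A*x1**2*x3*z,
    -3*x2*z - 3*x3*z + 9*A*x1**2*z**2,
])

J = f.jacobian([x1, x2, x3, z])
fixed_point = {x1: 0, x2: 0, x3: 1, z: 0}   # the point phi_3 below
print(J.subs(fixed_point).eigenvals())
# eigenvalues {-3, 3 (twice), 3*c1}: positive entries, so the point is
# hyperbolic but unstable, in agreement with phi_3^* in Eq. (576)
```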
Now let us consider the analytic form of the fixed points for \(c_{2}=0\), in which case the fixed points are, \[\phi_{1} =\{x_{1}\to 0,x_{2}\to 0,x_{3}\to 0\}\,\] \[\phi_{2} =\{x_{2}\to 0,x_{3}\to 0,z\to 0\}\,\] \[\phi_{3} =\{x_{1}\to 0,x_{2}\to 0,x_{3}\to 1,z\to 0\}\,\] \[\phi_{4} =\{x_{1}\to c_{1},x_{2}\to 1-c_{1},x_{3}\to 0,z\to 0\}\,\] \[\phi_{5} =\left\{x_{1}\to\frac{3Ac_{1}-\sqrt{9A^{2}c_{1}^{2}-12A(c_{1}-1)c_{1}}}{6Ac_{1}},\right.\] \[\left.\qquad\qquad x_{2}\to\frac{1}{2}\left(3Ac_{1}-\sqrt{3}\sqrt{Ac_{1}(3Ac_{1}-4c_{1}+4)}-2c_{1}+2\right),x_{3}\to 0,z\to c_{1}\right\}\,,\] \[\phi_{6} =\left\{x_{1}\to\frac{\sqrt{9A^{2}c_{1}^{2}-12A(c_{1}-1)c_{1}}+3Ac_{1}}{6Ac_{1}},\right.\] \[\left.\qquad\qquad x_{2}\to\frac{1}{2}\left(\sqrt{9A^{2}c_{1}^{2}-12A(c_{1}-1)c_{1}}+3Ac_{1}-2c_{1}+2\right),x_{3}\to 0,z\to c_{1}\right\}\,. \tag{574}\] The Jacobian matrix \(\mathcal{J}\) for the current scenario has the simple closed form, \[\mathcal{J}=\left(\begin{array}{cccc}3x_{2}+3x_{3}+9Ax_{1}z(2z-3x_{1})&3x_{1}-3c_{1}&3x_{1}&-9Ax_{1}^{2}(x_{1}-2z)\\ -18Ax_{1}x_{2}z&3(c_{1}+2x_{2}+x_{3}-3Ax_{1}^{2}z-1)&3x_{2}&-9Ax_{1}x_{2}\\ -18Ax_{1}x_{3}z&3x_{3}&3\left(-3Ax_{1}^{2}z+x_{2}+2x_{3}-1\right)&-9Ax_{1}^{2}x_{3}\\ 18Ax_{1}z^{2}&-3z&-3z&-3\left(-6Ax_{1}^{2}z+x_{2}+x_{3}\right)\end{array}\right). \tag{575}\] For the last two cases, the fixed points correspond to a de Sitter vacuum, since we have \(z=c_{1}\) at the two fixed points. Regarding the other cases, the variable \(z\) is equal to zero at the fixed point, hence, the most interesting cases from a phenomenological point of view correspond to \(\phi_{5}\) and \(\phi_{6}\), for which \(x_{3}\) tends to zero. Hence, asymptotically near these two fixed points, the baryonic fluid does not contribute to the dynamical evolution of the Universe. Furthermore, in order to be consistent physically, the interaction constant \(c_{1}\) must be \(c_{1}>0\). The eigenvalues of the Jacobian matrix evaluated at the fixed points, for the first four fixed points, have the following form, \[\phi_{1}^{*} \to\{-3,0,0,3c_{1}-3\}\,\] \[\phi_{2}^{*} \to\{-3,0,0,3c_{1}-3\}\,\] \[\phi_{3}^{*} \to\{-3,3,3,3c_{1}\}\,\] \[\phi_{4}^{*} \to\{3(1-c_{1})-3,-3(1-c_{1}),3(1-c_{1}),3(1-c_{1})\}. \tag{576}\] Regarding the first two fixed points, these are not hyperbolic, hence, we cannot use the Hartman-Grobman theorem for analyzing these two. In contrast, the fixed points \(\phi_{3}^{*}\) and \(\phi_{4}^{*}\) are hyperbolic, but not stable, because some eigenvalues are positive. Hence, in this case the fixed points are unstable but physically appealing, because they lead to a Hubble rate that tends to zero, so this might describe the late-time era. We now turn our focus to the fixed points \(\phi_{5}^{*}\) and \(\phi_{6}^{*}\), for which the calculation of the eigenvalues results in a complicated algebraic equation, which is too extended to present here. By doing a numerical analysis though, for \(c_{1}>0\) and for \(A\) being positive and negative, we end up with unstable de Sitter fixed points. The instability can be easily explained by the fact that \(\rho\) enters the EoS as \(-\rho\) and not as \(w_{\rm DE}\rho\). Also, a detailed analysis of the trajectories for the dynamical system at hand verifies the above findings.
Indeed, a numerical study for a wide range of values of the free parameters \(A\) and \(c_{1}\) indicates that there exist limited initial conditions which lead to dynamical system blow-ups at finite time. Also, there are regions in which the dynamical system has regular trajectories, a fact which is also supported by the stability study of the fixed points. Concluding, the dynamical system (538), which describes a multifluid cosmology with non-trivial interactions between the dark sector fluids, is unstable in general and has no global attractors leading to finite-time singularities, but has limited trajectories leading to finite-time blow-ups of the dynamical system variables.

### Dynamical System Analysis of Exponential Quintessence DE Models

In this section, we shall analyze the dynamical system of exponential quintessence models, which are relevant for swampland models. For a full analysis of this model, see [862], on which the presentation of this subsection will be based. Consider the action of a quintessence scalar field model in vacuum, \[\mathcal{S}=\int d^{4}x\sqrt{-g}\left(\frac{1}{\kappa^{2}}R-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)\right)\,. \tag{577}\] The field equations for this quintessence scalar theory in a flat FLRW background are, \[0= \ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)\,,\] \[\frac{3H^{2}}{\kappa^{2}}= \frac{\dot{\phi}^{2}}{2}+V(\phi)\,, \tag{578}\] with the prime denoting differentiation with respect to \(\phi\). We shall be focusing on exponential quintessence models, hence, the scalar potential \(V(\phi)\) takes the form \(V(\phi)=\mathrm{e}^{-\kappa\lambda\phi}\). For this choice of the potential, the field equations (578) can form an autonomous dynamical system by introducing the dimensionless variables, \[x=\frac{\kappa\dot{\phi}}{\sqrt{6}H}\,,\quad y=\frac{\kappa\sqrt{V}}{\sqrt{3}H}\,, \tag{579}\] with the "dot" indicating differentiation with respect to the cosmic time \(t\). As in the previous sections, we shall use the \(e\)-foldings number \(N\) as the dynamical evolution variable, so by combining Eqs. (578) and (579), we can form the following autonomous dynamical system, \[\frac{dx}{dN}= -3x+\frac{\sqrt{6}}{2}\lambda y^{2}+\frac{3}{2}x\left(x^{2}-y^{2}\right)\,,\] \[\frac{dy}{dN}= -\frac{\sqrt{6}}{2}\lambda xy+\frac{3}{2}y\left(x^{2}-y^{2}\right)\,. \tag{580}\] Furthermore, the dimensionless variables \(x\) and \(y\) satisfy the Friedmann constraint, \[x^{2}+y^{2}=1\,, \tag{581}\] which means that in the variables \(x\) and \(y\) the dynamical system does not develop singularities. The phase space structure of this simple autonomous dynamical system has been analyzed in detail in Ref. [863].

#### v.1.1 Singularity Structure of the Dynamical System Describing Swampland DE Models

In this subsection, we shall use the dominant balance analysis described in a previous section for the dynamical system of Eq. (580), with the Friedmann constraint (581) always holding true. To do so, we recast the dynamical system of Eq. (580) in the following form, \[\frac{d\vec{x}}{dN}=f(\vec{x})\,, \tag{582}\] with \(\vec{x}=(x,y)\), and furthermore, we define the vector-valued function \(f(\vec{x})\) as follows, \[f(x,y)=\left(\begin{array}{c}f_{1}(x,y)\\ f_{2}(x,y)\end{array}\right)\,, \tag{583}\] with the \(f_{i}\)'s appearing in Eq. (583) being equal to, \[f_{1}(x,y)= -3x+\frac{\sqrt{6}}{2}\lambda y^{2}+\frac{3}{2}x\left(x^{2}-y^{2}\right)\,,\] \[f_{2}(x,y)= -\frac{\sqrt{6}}{2}\lambda xy+\frac{3}{2}y\left(x^{2}-y^{2}\right)\,.
\tag{584}\] We can form several truncations of this vector function \(f(\vec{x})\), one of which is the following, \[\hat{f}(x,y)=\left(\begin{array}{c}-\frac{3xy^{2}}{2}\\ \frac{3x^{2}y}{2}\end{array}\right)\,. \tag{585}\] By using the method of dominant balances, we find for this truncation that the vector \(\vec{p}\) takes the form, \[\vec{p}=\left(-\frac{1}{2},-\frac{1}{2}\right)\,, \tag{586}\] and accordingly, the following non-trivial vectors \(\vec{a}_{i}\) are obtained, \[\vec{a}_{1}=\left(-\frac{i}{\sqrt{3}},-\frac{1}{\sqrt{3}}\right),\;\vec{a}_{2}=\left(-\frac{i}{\sqrt{3}},\frac{1}{\sqrt{3}}\right),\;\vec{a}_{3}=\left(\frac{i}{\sqrt{3}},\;-\frac{1}{\sqrt{3}}\right),\;\vec{a}_{4}=\left(\frac{i}{\sqrt{3}},\frac{1}{\sqrt{3}}\right)\,. \tag{587}\] All the vectors found above have complex entries, and also the Friedmann constraint (581) is satisfied for all the vectors \(\vec{a}_{i}\). Indeed, the expression \(x^{2}+y^{2}\) forming the Friedmann constraint at leading order reads, \[\left(\pm\frac{i}{\sqrt{3}}\right)^{2}\tau^{-1}+\left(\pm\frac{1}{\sqrt{3}}\right)^{2}\tau^{-1}\,, \tag{588}\] which vanishes, and we should note that \(\tau=N-N_{c}\). Now let us investigate whether singular solutions can be found. Due to the fact that the vectors \(\vec{a}_{i}\) have complex entries, no singular solutions exist, as we have already shown from the Friedmann constraint (581). What now remains to check is whether these solutions are general or not, meaning whether they correspond to general initial conditions or not. The answer to this will be given by the evaluation of the Kovalevskaya matrix \(K\), for the solutions \(\vec{a}_{i}\), and for the truncation (585), \[K=\left(\begin{array}{cc}\frac{1}{2}-\frac{3y^{2}}{2}&-3xy\\ 3xy&\frac{3x^{2}}{2}+\frac{1}{2}\end{array}\right)\,. \tag{589}\] The evaluation of the Kovalevskaya matrix \(K\) for \((x,y)=\vec{a}_{1}\) yields, \[K(\vec{a}_{1})=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right)\,, \tag{590}\] the eigenvalues of which are, \[(r_{1},r_{2})=(-1,1)\,. \tag{591}\] The same set of eigenvalues is found for the rest of the solutions \(\vec{a}_{i}\) (\(i=2,3,4\)). The form of the eigenvalues indicates that the dynamical system (580) has no finite-time singularities, and also that these non-singular solutions correspond to a general set of initial conditions. However, these non-singular solutions do not exclude the existence of an actual physical cosmological finite-time singularity. What we proved is that the dynamical system variables \(x\) and \(y\) do not become singular for a general set of initial conditions, which does not exclude the case that an actual physical singularity may be developed. In order to investigate the occurrence of physical singularities, let us assume that the Hubble rate has the general form, \[H(t)=H_{s}(t)+h_{s}(t)(t-t_{s})^{-\beta}\,, \tag{592}\] with the parameter \(\beta\) taking the general form \(\beta=\frac{2m}{2n+1}\), where \(m,n\) are positive integers, and also \(H_{s}(t)\) and \(h_{s}(t)\) are assumed to be regular at \(t=t_{s}\), and furthermore, for consistency, \(H_{s}(t_{s})\neq 0\), \(h_{s}(t_{s})\neq 0\). Moreover, the first and second order derivatives of \(H_{s}(t)\) and \(h_{s}(t)\) are assumed to satisfy the same constraints. Although the Hubble rate (592) may not be a solution to the field equations, we will examine the case that the dominant part of the solution is of the form (592) near a singularity.
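The Kovalevskaya computation of Eqs. (585)-(591) is compact enough to verify symbolically; the following short sympy sketch (our own check) reproduces the matrix (590) and its eigenvalues (591) for the balance \(\vec{a}_{1}\).

```python
import sympy as sp

x, y = sp.symbols('x y')

# Dominant truncation (585) and the exponent vector p = (-1/2, -1/2) of (586)
fhat = sp.Matrix([-sp.Rational(3, 2)*x*y**2, sp.Rational(3, 2)*x**2*y])
K = fhat.jacobian([x, y]) - sp.diag(sp.Rational(-1, 2), sp.Rational(-1, 2))

a1 = {x: -sp.I/sp.sqrt(3), y: -1/sp.sqrt(3)}  # complex balance from (587)
K1 = sp.simplify(K.subs(a1))
print(K1)              # Matrix([[0, -I], [I, 0]]), i.e. Eq. (590)
print(K1.eigenvals())  # {-1: 1, 1: 1}, i.e. Eq. (591)
```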
The results of the singularity structure of the dynamical system indicate that \(x\sim\kappa\frac{\dot{\phi}}{H}\) and \(y\sim\kappa\frac{\sqrt{V(\phi)}}{H}\) never become singular for all cosmic times. Now, the singularity structure of the Hubble rate (592) strongly depends on the value of \(\beta\), and specifically,

* If \(\beta>1\), a physical Big Rip singularity is developed.
* If \(0<\beta<1\), a physical Type III singularity is developed.
* If \(-1<\beta<0\), a physical Type II singularity is developed.
* If \(\beta<-1\), a physical Type IV singularity is developed.

By combining Eqs. (578) and (592), we can express the terms \(\dot{\phi}\) and \(V(\phi)\) as functions of the Hubble rate. Indeed, the two equations needed are, \[\frac{2}{\kappa^{2}}\dot{H}= -\dot{\phi}^{2}\,, \tag{593}\] \[V(\phi)= -\frac{\ddot{H}+6H\dot{H}}{\kappa^{3}\lambda\dot{\phi}}\,, \tag{594}\] but the resulting expressions are too lengthy to quote here. Having these expressions at hand, though, we shall investigate the occurrence of physical singularities, keeping in mind that the variables \(x\) and \(y\) never become singular for the dynamical system at hand. Considering the case of a Type IV singularity, this can never occur when \(-2<\beta<-1\), since the term \(\sim(t-t_{s})^{-\beta-2}\) is present in the variable \(y\). On the contrary, when \(\beta<-2\), the two variables \(x\), \(y\) never become singular, so a Type IV singularity can be developed by the physical system for values of \(\beta\) in the range \(\beta<-2\). Regarding the Type II case, which occurs for \(-1<\beta<0\), the dynamical system variable \(x\) would be singular, since it depends on the term \(\sim(t-t_{s})^{-\beta-1}\); therefore a Type II singularity can never occur for the model at hand. Regarding the Type III case, it also cannot occur, due to the term \(\sim(t-t_{s})^{\frac{3\beta}{2}-\frac{\alpha}{2}}\), which is singular for \(0<\beta<1\). Regarding the Big Rip singularity, it can always occur, because for \(\beta>1\) the dynamical system variables \(x\) and \(y\) never develop singular behavior. The results of our analysis are presented in TABLE 6. Therefore, in conclusion, if the dominant behavior of the Hubble rate solution for the model at hand is given by Eq. (592), the Type II and Type III singularities never occur in the physical system. However, the Type IV and Type I singularities can occur, when \(\beta<-2\) and for any \(\beta>1\), respectively.

### Dynamical System of Interacting Multifluids in LQC

In this section, we shall investigate the occurrence of finite-time singularities in the context of a LQC interacting multifluid system. The full analysis of the subject of this section was performed in Ref. [857], on which the presentation will be based. The three fluids that are considered are a non-interacting baryonic fluid and the interacting DE-DM fluids. Also, the DE fluid will contain a bulk viscosity term. The Friedmann equation in the context of LQC, in the presence of the three fluids and for a FLRW spacetime, has the following form, \[H^{2}=\frac{\kappa^{2}\rho_{\rm tot}}{3}\left(1-\frac{\rho_{\rm tot}}{\rho_{c}}\right)\,, \tag{595}\] where, once again, \(\rho_{c}\) denotes the critical density of LQC and \(\rho_{\rm tot}=\rho_{\rm DM}+\rho_{\rm DE}+\rho_{b}\) is the total energy density of all three fluids present, in which \(\rho_{\rm DM}\), \(\rho_{\rm DE}\) and \(\rho_{b}\) denote the energy density of DM, DE and baryons, respectively.
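The key structural feature of Eq. (595) is already visible numerically: \(H^{2}\) vanishes both as \(\rho_{\rm tot}\to 0\) and at the bounce \(\rho_{\rm tot}\to\rho_{c}\), so any variable normalized by \(H^{2}\) can blow up without any physical density diverging. A minimal sketch (units with \(\kappa^{2}=\rho_{c}=1\) assumed purely for illustration):

```python
import numpy as np

kappa2, rho_c = 1.0, 1.0                         # illustrative units only
rho = np.linspace(1e-4, rho_c - 1e-6, 1000)      # stop just short of the bounce
H2 = (kappa2 / 3.0) * rho * (1.0 - rho / rho_c)  # LQC Friedmann equation (595)

print(rho[np.argmax(H2)], H2.max())  # maximum of H^2 near rho = rho_c/2 (~1/12)

# A variable of the form x = kappa^2 rho/(3 H^2) diverges as rho -> rho_c,
# even though the energy density itself remains finite there.
x_total = kappa2 * rho / (3.0 * H2)
print(x_total[0], x_total[-1])  # O(1) far from the bounce, huge near it
```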
Upon differentiation with respect to the cosmic time, Eq. (595) yields, \[\dot{H}=-\frac{\kappa^{2}}{2}\left(\rho_{\rm tot}+p_{\rm tot}\right)\left(1-\frac{2\rho_{\rm tot}}{\rho_{c}}\right)\,, \tag{596}\] where \(p_{\rm tot}\) denotes the total pressure of the three fluids, which is basically equal to the pressure of the DE fluid, since the DM and baryonic fluids are pressureless. For the DE fluid, we assume the following EoS [380], \[p_{\rm DE}=-\rho_{\rm DE}-A\kappa^{4}\rho_{\rm DE}^{2}\,, \tag{597}\] with \(A\) being a real dimensionless parameter. The energy-momentum conservation yields the following three continuity equations, \[\dot{\rho}_{b}+3H\rho_{b}=0\,,\quad\dot{\rho}_{\rm DM}+3H\rho_{\rm DM}=Q\,,\quad\dot{\rho}_{\rm DE}+3H(\rho_{\rm DE}+p_{\rm DE})=-Q\,, \tag{598}\] where \(Q\) denotes the interaction between the dark fluids, and once again \(Q>0\) indicates that the DE fluid loses energy and \(Q<0\) indicates that the DE fluid gains energy at the expense of the DM fluid. We shall consider the following interaction between the dark sector fluids, \[Q=3H(c_{1}\rho_{\rm DM}+c_{2}\rho_{\rm DE})\,, \tag{599}\] which is phenomenologically interesting [464, 562, 860, 861, 565], and here \(c_{1}\), \(c_{2}\) are the coupling parameters. For the three-fluid LQC system with equations (595) and (596), we can construct an autonomous dynamical system by appropriately introducing some dimensionless variables. After this, we shall investigate whether the resulting polynomial dynamical system develops singularities at some finite time, using the dominant balance technique we presented in previous sections of this chapter. We shall also discriminate the dynamical system singularities from the actual physical finite-time singularities. The major outcome of this section is that we shall analytically prove that the resulting LQC dynamical system has no finite-time singularities, which verifies that LQC actually leads to non-singular behavior. We now construct the autonomous dynamical system from Eqs. (595), (596) and (598). We choose the dimensionless dynamical system variables as follows, \[x_{1}=\frac{\kappa^{2}\rho_{\rm DE}}{3H^{2}}\,,\quad x_{2}=\frac{\kappa^{2}\rho_{\rm DM}}{3H^{2}}\,,\quad x_{3}=\frac{\kappa^{2}\rho_{b}}{3H^{2}}\,,\quad z=\frac{H^{2}}{\kappa^{2}\rho_{c}}\,, \tag{600}\] which are constrained to satisfy the Friedmann constraint, \[x_{1}+x_{2}+x_{3}-z\left(x_{1}+x_{2}+x_{3}\right)^{2}=1\,, \tag{601}\] for all cosmic times. Furthermore, the total EoS parameter \(w_{\rm eff}\) corresponding to the multifluid system, expressed in terms of the dimensionless dynamical system variables (600), takes the following form, \[w_{\rm eff}=-x_{1}-3A\kappa^{4}\rho_{c}x_{1}^{2}z\,. \tag{602}\] Now we can form the polynomial autonomous dynamical system for the three-fluid system, by combining Eqs.
(595), (596), (598), and (600), so after some algebra, we get, \[\frac{dx_{1}}{dN}= -\frac{\kappa^{2}Q}{3H^{3}}+9Ax_{1}^{3}z-27Ax_{1}^{2}z+3w_{\rm DE }x_{1}^{2}-3w_{\rm DE}x_{1}-18x_{1}^{3}z+3x_{1}^{2}+3x_{1}x_{2}+3x_{1}x_{3}-3x_ {1}\] \[-18w_{\rm DE}x_{1}^{3}z-18w_{\rm DE}x_{1}^{2}x_{2}-36x_{1}^{2}x_{ 2}-36x_{1}^{2}x_{3}z-18x_{1}x_{2}^{2}z\] \[-54Ax_{1}^{4}z^{2}-54Ax_{1}^{3}x_{2}z^{2}-54Ax_{1}^{3}x_{3}z^{2}- 18w_{\rm DE}x_{1}^{2}x_{3}z-36x_{1}x_{2}x_{3}z-18x_{1}x_{3}^{2}z\,,\] \[\frac{dx_{2}}{dN}= \frac{\kappa^{2}Q}{3H^{3}}+9Ax_{1}^{2}x_{2}z+3w_{\rm DE}x_{1}x_{2} -18x_{1}^{2}x_{2}z+3x_{1}x_{2}+3x_{2}^{2}+3x_{2}x_{3}-3x_{2}\] \[-18w_{\rm DE}x_{1}^{2}x_{2}z-18w_{\rm DE}x_{1}x_{2}^{2}z-36x_{1}x _{2}^{2}z-36x_{1}x_{2}x_{3}z-18x_{2}^{3}z^{2}\] \[-54Ax_{1}^{3}x_{2}z^{2}-54Ax_{1}^{2}x_{2}^{2}z^{2}-54Ax_{1}^{2}x _{2}x_{3}z^{2}-18w_{\rm DE}x_{1}x_{2}x_{3}z-36x_{2}^{2}x_{3}z-18x_{2}x_{3}^{2}z\,,\] \[\frac{dx_{3}}{dN}= 9Ax_{1}^{2}x_{3}z-18w_{\rm DE}x_{1}^{2}x_{3}z+3w_{\rm DE}x_{1}x _{3}-18x_{1}^{2}x_{3}z+3x_{1}x_{3}+3x_{2}x_{3}+3x_{3}^{2}-3x_{3}\] \[-18w_{\rm DE}x_{1}x_{2}x_{3}z-18w_{\rm DE}x_{1}x_{3}^{2}z-36x_{1} x_{2}x_{3}z-36x_{1}x_{3}^{2}z-18x_{2}^{2}x_{3}z-36x_{2}x_{3}^{2}z\] \[-54Ax_{1}^{3}x_{3}z^{2}-54Ax_{1}^{2}x_{2}x_{3}z^{2}-54Ax_{1}^{2} x_{3}^{2}z^{2}-18x_{3}^{3}z\,,\] \[\frac{dz}{dN}= -9Ax_{1}^{2}z^{2}+18w_{\rm DE}x_{1}^{2}z^{2}-3w_{\rm DE}x_{1}z+18 x_{1}^{2}z^{2}-3x_{1}z-3x_{2}z-3x_{3}z\] \[+18w_{\rm DE}x_{1}x_{2}z^{2}+18w_{\rm DE}x_{1}x_{3}z^{2}+36x_{1}x _{2}z^{2}+36x_{1}x_{3}z^{2}+18x_{2}^{2}z^{2}\] \[+54Ax_{1}^{3}z^{3}+54Ax_{1}^{2}x_{2}z^{3}+54Ax_{1}^{2}x_{3}z^{3}+ 36x_{2}x_{3}z^{2}+18x_{3}^{2}z^{2}\,, \tag{603}\] and note that, as in the previous sections in this chapter, we used the \(e\)-foldings number to quantify the dynamical evolution of the variables, instead of the cosmic time. Furthermore, by choosing the interaction term \(Q\) as in Eq. (599), the \(Q\)-containing terms in the dynamical system (603), can be expressed in terms of the variables \(x_{1}\) and \(x_{2}\) in the following way, \[\frac{\kappa^{2}Q}{3H^{3}}=3c_{1}x_{2}+3c_{2}x_{1}\,. \tag{604}\] Now having the dynamical system (603) at hand, we can apply the dominant balance analysis in order to reveal the dynamical system singularities and we shall also investigate whether physical singularities occur too. This is the subject of the next subsection. #### viii.1.1 Dominant Balance Analysis of the three-fluid Cosmological Dynamical System We shall use the method of dominant balances in order to investigate whether the dynamical system (603) develops finite-time singularities. As we shall show, the dynamical system is singularity-free and in fact this result holds true for a general set of initial conditions. One should have in mind of course that there exists a limited set of initial conditions that may lead in principle to finite-time dynamical system singularities, but this limited set is not of interest. Near a singularity, the dimensionless variables \(x_{1}(N)\), \(x_{2}(N)\), \(x_{3}(N)\) and \(z(N)\) at leading order behave as follows, \[x_{1}(N)=a_{1}(N-N_{c})^{p_{1}}\,,\quad x_{2}(N)=a_{2}(N-N_{c})^{p_{2}}\,,\quad x _{3}(N)=a_{3}(N-N_{c})^{p_{3}}\,,\quad z(N)=a_{4}(N-N_{c})^{p_{4}}\,. 
\tag{605}\] The dynamical system (603) can be rewritten as \(\frac{d\vec{x}}{dN}=f(\vec{x})\), with \(\vec{x}\) being \(\vec{x}=(x_{1},x_{2},x_{3},z)\), and \(f(x_{1},x_{2},x_{3},z)\) is defined to be equal to, \[f(x_{1},x_{2},x_{3},z)=\left(\begin{array}{c}f_{1}(x_{1},x_{2},x_{3},z)\\ f_{2}(x_{1},x_{2},x_{3},z)\\ f_{3}(x_{1},x_{2},x_{3},z)\\ f_{4}(x_{1},x_{2},x_{3},z)\end{array}\right)\,, \tag{606}\] and the corresponding functions \(f_{i}(x_{1},x_{2},x_{3},z)\), \(i=1,2,3,4\) are, \[f_{1}(x_{1},x_{2},x_{3},z)= \,-3c_{1}x_{2}-3c_{2}x_{1}+9Ax_{1}^{3}z-27Ax_{1}^{2}z+3w_{\rm DE}x_{1}^{2}-3w_{\rm DE}x_{1}-18x_{1}^{3}z+3x_{1}^{2}+3x_{1}x_{2}+3x_{1}x_{3}-3x_{1}\] \[-18w_{\rm DE}x_{1}^{3}z-18w_{\rm DE}x_{1}^{2}x_{2}-36x_{1}^{2}x_{2}-36x_{1}^{2}x_{3}z-18x_{1}x_{2}^{2}z\] \[-54Ax_{1}^{4}z^{2}-54Ax_{1}^{3}x_{2}z^{2}-54Ax_{1}^{3}x_{3}z^{2}-18w_{\rm DE}x_{1}^{2}x_{3}z-36x_{1}x_{2}x_{3}z-18x_{1}x_{3}^{2}z\,,\] \[f_{2}(x_{1},x_{2},x_{3},z)= \,3c_{1}x_{2}+3c_{2}x_{1}+9Ax_{1}^{2}x_{2}z+3w_{\rm DE}x_{1}x_{2}-18x_{1}^{2}x_{2}z+3x_{1}x_{2}+3x_{2}^{2}+3x_{2}x_{3}-3x_{2}\] \[-18w_{\rm DE}x_{1}^{2}x_{2}z-18w_{\rm DE}x_{1}x_{2}^{2}z-36x_{1}x_{2}^{2}z-36x_{1}x_{2}x_{3}z-18x_{2}^{3}z^{2}\] \[-54Ax_{1}^{3}x_{2}z^{2}-54Ax_{1}^{2}x_{2}^{2}z^{2}-54Ax_{1}^{2}x_{2}x_{3}z^{2}-18w_{\rm DE}x_{1}x_{2}x_{3}z-36x_{2}^{2}x_{3}z-18x_{2}x_{3}^{2}z\,,\] \[f_{3}(x_{1},x_{2},x_{3},z)= \,9Ax_{1}^{2}x_{3}z-18w_{\rm DE}x_{1}^{2}x_{3}z+3w_{\rm DE}x_{1}x_{3}-18x_{1}^{2}x_{3}z+3x_{1}x_{3}+3x_{2}x_{3}+3x_{3}^{2}-3x_{3}\] \[-18w_{\rm DE}x_{1}x_{2}x_{3}z-18w_{\rm DE}x_{1}x_{3}^{2}z-36x_{1}x_{2}x_{3}z-36x_{1}x_{3}^{2}z-18x_{2}^{2}x_{3}z-36x_{2}x_{3}^{2}z\] \[-54Ax_{1}^{3}x_{3}z^{2}-54Ax_{1}^{2}x_{2}x_{3}z^{2}-54Ax_{1}^{2}x_{3}^{2}z^{2}-18x_{3}^{3}z\,,\] \[f_{4}(x_{1},x_{2},x_{3},z)= \,-9Ax_{1}^{2}z^{2}+18w_{\rm DE}x_{1}^{2}z^{2}-3w_{\rm DE}x_{1}z+18x_{1}^{2}z^{2}-3x_{1}z-3x_{2}z-3x_{3}z\] \[+18w_{\rm DE}x_{1}x_{2}z^{2}+18w_{\rm DE}x_{1}x_{3}z^{2}+36x_{1}x_{2}z^{2}+36x_{1}x_{3}z^{2}+18x_{2}^{2}z^{2}\] \[+54Ax_{1}^{3}z^{3}+54Ax_{1}^{2}x_{2}z^{3}+54Ax_{1}^{2}x_{3}z^{3}+36x_{2}x_{3}z^{2}+18x_{3}^{2}z^{2}\,. \tag{607}\] Now we shall seek consistent truncations of the vector-valued function \(f(x_{1},x_{2},x_{3},z)\) defined in Eq. (606), and investigate whether singular solutions exist or not. Among many distinct truncations of the vector function \(f(x_{1},x_{2},x_{3},z)\), one is the following, \[\hat{f}(x_{1},x_{2},x_{3},z)=\left(\begin{array}{c}3x_{1}(N)x_{2}(N)\\ 3(w_{\rm DE}+1)x_{1}(N)x_{2}(N)\\ -54Ax_{1}^{2}(N)x_{3}^{2}(N)z^{2}(N)\\ 54Ax_{1}^{2}(N)z^{3}(N)x_{2}(N)\end{array}\right)\,. \tag{608}\] The corresponding vector \(\vec{p}\), which obviously corresponds to the solution \(\rho_{\rm tot}=\rho_{c}\), is easily found, \[\vec{p}=(-1,-1,-1,1)\,, \tag{609}\] and the corresponding vectors \(\vec{a}\) are, \[\vec{a}_{1}=\left(-\frac{1}{3(w_{\rm DE}+1)},-\frac{1}{3},-\frac{1}{3},\frac{w_{\rm DE}+1}{\sqrt{-2A}}\right)\qquad\mbox{and}\qquad\vec{a}_{2}=\left(-\frac{1}{3(w_{\rm DE}+1)},-\frac{1}{3},-\frac{1}{3},-\frac{w_{\rm DE}+1}{\sqrt{-2A}}\right). \tag{610}\] The corresponding Kovalevskaya matrix is given by \[K(\vec{a}_{1})=\left(\begin{array}{cccc}0&-\frac{1}{w_{\rm DE}+1}&0&0\\ -(w_{\rm DE}+1)&0&0&0\\ -2(w_{\rm DE}+1)&0&-1&\frac{2\sqrt{-2A}}{3(w_{\rm DE}+1)}\\ -\frac{6(w_{\rm DE}+1)^{2}}{\sqrt{-2A}}&-\frac{3(w_{\rm DE}+1)}{\sqrt{-2A}}&0&2\end{array}\right)\,, \tag{611}\] with eigenvalues \[(r_{1},r_{2},r_{3},r_{4})=(-1,2,1,-1)\,.
\tag{612}\] In conclusion, the major outcome of this section is that LQC utterly erases any finite-time singularities that may have occurred in the classical interacting three fluid system. We have to take into account that the variables \(x_{i}\) (\(i=1,2,3\)) diverge when the Hubble rate vanishes, and this happens when \(\rho_{\rm tot}=\rho_{c}\) or \(\rho_{\rm tot}=0\). That means that a singularity in the \(x_{i}\) variables merely exhibits that the total energy density coincides with the critical one; thus, this singularity in the variables \(x_{i}\) is not a physical singularity, in the sense that neither the Hubble rate diverges nor the corresponding energy densities diverge for our interacting three fluid system in the context of LQC. We shall conclude the analysis by finding analytically the fixed points of the dynamical system. Also, we shall reveal the behavior of the total EoS parameter of Eq. (602), in terms of the coefficients \(c_{1}\) and \(c_{2}\) appearing in the interaction term \(Q\). We can find the fixed points of the dynamical system with well-known linearization techniques for dynamical systems. In the case that the fixed point is hyperbolic, that is, when the eigenvalues of the matrix that results after the linearization of the dynamical system have non-zero real parts, the stability of the fixed point can be determined. For a hyperbolic fixed point, when all the eigenvalues of the linearization matrix have negative real parts, the fixed point is stable, and if some eigenvalue has a positive real part, the fixed point is unstable. We shall denote the fixed points of the dynamical system (603) as \(\phi_{*}\), and the Jacobian of the linearized dynamical system near the fixed points as \(\mathcal{J}(g)\), with entries, \[\mathcal{J}_{ij}=\frac{\partial f_{i}}{\partial x_{j}}\,. \tag{613}\] Now we shall solve the equation \(f(x_{1},x_{2},x_{3},z)=0\), with \(f(x_{1},x_{2},x_{3},z)\) appearing in Eq. (606), and this process will reveal the fixed points of the dynamical system (603), which are, \[\phi_{1}^{*} =\left\{x_{1}\to 0,x_{2}\to 0,x_{3}\to 0\right\}\,,\] \[\phi_{2}^{*} =\left\{x_{1}\to 0,x_{2}\to 0,x_{3}\to 0,z\to 0\right\}\,,\] \[\phi_{3}^{*} =\left\{x_{1}\to 0,x_{2}\to 0,x_{3}\to 1,z\to 0\right\}\,,\] \[\phi_{4}^{*} =\left\{x_{1}\to\frac{-\sqrt{(c_{1}-c_{2}-w_{\rm DE})^{2}+4c_{1}w_{\rm DE}}-c_{1}+c_{2}+w_{\rm DE}}{2w_{\rm DE}},\right.\] \[\left.\qquad\qquad x_{2}\to\frac{\frac{c_{1}^{2}}{w_{\rm DE}}+\frac{c_{1}\sqrt{(c_{1}-c_{2}-w_{\rm DE})^{2}+4c_{1}w_{\rm DE}}}{w_{\rm DE}}-\frac{c_{1}c_{2}}{w_{\rm DE}}+c_{1}}{2c_{1}},x_{3}\to 0,z\to 0\right\}\,,\] \[\phi_{5}^{*} =\left\{x_{1}\to\frac{\sqrt{(c_{1}-c_{2}-w_{\rm DE})^{2}+4c_{1}w_{\rm DE}}-c_{1}+c_{2}+w_{\rm DE}}{2w_{\rm DE}},\right.\] \[\left.\qquad\qquad x_{2}\to\frac{\frac{c_{1}^{2}}{w_{\rm DE}}-\frac{c_{1}c_{2}}{w_{\rm DE}}-\frac{c_{1}\sqrt{(c_{1}-c_{2}-w_{\rm DE})^{2}+4c_{1}w_{\rm DE}}}{w_{\rm DE}}+c_{1}}{2c_{1}},x_{3}\to 0,z\to 0\right\}\,. \tag{614}\] Apparently, the fixed points \(\phi_{1}^{*}\) and \(\phi_{2}^{*}\) are not hyperbolic, while the fixed points \(\phi_{3}^{*}\), \(\phi_{4}^{*}\) and \(\phi_{5}^{*}\) are hyperbolic. Omitting the exact form of the eigenvalues for brevity, a thorough analysis of the phase space indicates that the fixed points are unstable, regardless of the values of the parameters \(w_{\rm DE}\), \(c_{1}\), \(c_{2}\). Also, the total EoS in Eq. (602) must be studied for various values of the free parameters.
We shall consider the case that we fix the DE EoS parameter \(w_{\rm DE}\) to have various values, varying from quintessential values (\(w_{\rm DE}=-0.5\)) to phantom values (\(w_{\rm DE}=-1.5\)). A numerical analysis of the total EoS parameter (602) indicates that in the context of the three fluid interacting cosmology, it is possible to realize various cosmological evolutions, for example quintessential, phantom and also matter dominated eras.

#### The Choice of the DE EoS and its Physical Implications

For the analysis performed in the previous subsections, the DE EoS was chosen to be that of Eq. (597). However, a more general EoS can be used, which may include higher powers of the Hubble rate, and further effects of viscosity might also be included. In this subsection we report the implications of the chosen EoS (597) for DE phenomenology. Using the formalism of [415], which we extend in the context of LQC, a comparison of the classical theory with the LQC one can be done for the chosen EoS (597). We note that when the LQC effects are not considered, the EoS of Eq. (597) drives the cosmological system to finite-time singularities of the Type III form [374]. On the other hand, in the case of LQC with one fluid, the singularities are eliminated [779], and as we demonstrated previously, the same applies in the three fluid system in which DM and DE are interacting with each other. Thus, LQC eliminates the finite-time singularities. Our findings and comparisons are presented in TABLE 7.

## IX The Avoidance of Finite-Time Future Singularities

Near the singularity, especially in the case of Type I and Type III, the Hubble rate becomes very large, which means that the curvature and the temperature become very large, and therefore quantum effects and thermal effects become important. One may naturally wonder whether it is possible to remove the finite-time singularities from the cosmological picture so that a non-singular Universe is realized. In this section, we shall examine the possibility of avoiding the finite-time singularities in the cosmological scenarios. More specifically, we shall discuss what could happen with the future singular dark Universe when the effects of quantum or thermal radiation are included. This section is based on Refs. [439, 864, 865, 454].

### Little Rip Scenario

The possibility of a phantom DE EoS (i.e., \(w_{\rm DE}<-1\)) is very hard to exclude from the cosmological picture [866]. The phantom DE can alleviate the Hubble constant tension [867] quite effectively. However, one of the serious features that the phantom DE carries is the following. For \(w_{\rm DE}<-1\), the DE density increases with increasing scale factor, and both can blow up at some finite time in the future, leading to a Big Rip singularity [1]. However, this future singularity can be avoided even if \(w_{\rm DE}<-1\), and this heuristic scenario, dubbed the "Little Rip", was introduced in Ref. [454]. With such a fascinating proposal, the Little Rip scenario received significant attention from the scientific community [864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 404]. In this section, we shall describe how the Big Rip singularity could be avoided even if \(w_{\rm DE}\) is less than \(-1\). In order to realize a non-singular scenario, one can proceed with a non-singular scale factor \(a(t)\) having the following form \[a={\rm e}^{f(t)}\,, \tag{615}\] where \(f(t)\) is any arbitrary non-singular function satisfying \(\dot{f}=H\).
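A quick numerical illustration of this construction (our own sketch): assuming the representative non-singular choice \(f(t)=C\left(\mathrm{e}^{\lambda t}-1\right)\), which matches the form derived in Eq. (622) below, together with the standard flat-FLRW relation \(w_{\rm eff}=-1-\frac{2\dot{H}}{3H^{2}}\), the effective EoS stays phantom (\(w_{\rm eff}<-1\)) while nothing diverges at any finite time.

```python
import numpy as np

# Illustrative Little Rip: f(t) = C*(exp(lam*t) - 1), with C, lam > 0 assumed
C, lam = 1.0, 0.5
t = np.linspace(0.0, 20.0, 2001)

H = C * lam * np.exp(lam * t)        # H = f'(t): grows, yet finite at all finite t
Hdot = C * lam**2 * np.exp(lam * t)  # f''(t) > 0, i.e. condition (616) holds

w_eff = -1.0 - 2.0 * Hdot / (3.0 * H**2)
print(w_eff.min(), w_eff.max())  # always < -1 (phantom), approaching -1 from below
```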
From the Friedmann equation \(H^{2}=(\kappa^{2}/3)\rho\), one can derive that \(\dot{f}^{2}(t)=(\kappa^{2}/3)\rho\). Moreover, the condition that \(\rho\) is an increasing function of \(a\), which implies \(\frac{d\rho}{da}=\frac{6}{\kappa^{2}\dot{a}}\dot{f}\ddot{f}>0\), gives the restriction \[\ddot{f}>0\,, \tag{616}\] because we are dealing with an expanding Universe, i.e., with \(\dot{f}=H>0\). Thus, all Little Rip scenarios described by the evolution of the scale factor given by Eq. (615), with non-singular \(f\), should satisfy Eq. (616). To proceed further in this direction, let us now consider the well-known non-linear EoS [380] \[p=-\rho-f(\rho)\quad\Longrightarrow\quad w=-1-\frac{f(\rho)}{\rho}\,, \tag{617}\] where \(f(\rho)>0\) ensures that \(\rho\) increases with the scale factor, because in that case \(w<-1\). From the conservation equation, \(\dot{\rho}+3H(p+\rho)=0\), one can find the scale factor as \[a=a_{0}\exp\left(\int_{\rho_{0}}^{\rho}\frac{d\rho}{3f(\rho)}\right)\,, \tag{618}\] where we set \(a_{0}\), the present value of the scale factor, to unity, and \(\rho_{0}\) denotes the present value of the energy density. Now, using the conservation equation \(\dot{\rho}=3Hf(\rho)\) and the Friedmann equation \(H^{2}=\frac{\kappa^{2}\rho}{3}\), one finds \[t-t_{0}=\frac{1}{\sqrt{3}\kappa}\int_{\rho_{0}}^{\rho}\frac{d\rho}{\sqrt{\rho}f(\rho)}\,. \tag{619}\] We assume that \(f(\rho)\) is given by the power law \(f(\rho)=\frac{A}{\sqrt{3}\kappa}\rho^{\nu+\frac{1}{2}}\), where \(A>0\) and \(\nu\) are constants. We select \(\nu=0\) in the above form (i.e., we consider \(f(\rho)=\frac{A}{\sqrt{3}\kappa}\sqrt{\rho}\)), for which one can find the time from (619) as \[t-t_{0}=\frac{1}{A}\,\ln\frac{\rho}{\rho_{0}}\,, \tag{620}\] which clearly shows that \(\rho\to\infty\) is not reached at any finite time. That means that even if \(w<-1\), the future singularity is avoided; this is known as the Little Rip scenario. Now, solving (618) for \(f(\rho)=\frac{A}{\sqrt{3}\kappa}\sqrt{\rho}\), one can express \(\rho\) as a function of \(a\) as [864]: \[\rho(a)=\rho_{0}\left(1+\frac{\sqrt{3}A}{2\kappa\sqrt{\rho_{0}}}\ln a\right)^{2}\,, \tag{621}\] and consequently, using the first Friedmann equation, one can explicitly find the scale factor in terms of the time as [864]: \[a(t)=\exp\left[\frac{2\kappa\sqrt{\rho_{0}}}{\sqrt{3}A}\left\{\exp\left(\frac{A}{2}(t-t_{0})\right)-1\right\}\right]\,, \tag{622}\] which does not exhibit any finite-time singularity in the future.

#### vi.2.1 Little Rip in Viscous Universe

The Little Rip scenario can be realized in the viscous Universe as well. We refer to section III.1 for the basic dynamical equations for a viscous Universe. In the viscous Universe, the effective pressure term, as in Eq. (35) or (36), which includes the bulk viscosity component, plays the crucial role. We generalize (617) in the presence of the bulk viscosity as \[p_{\rm eff}=-\rho-f(\rho)-\eta(H)\,, \tag{623}\] where notice that we have generalized the last term in the right hand side of Eq. (36) as \(3\xi(t)H\to\eta(H)\), in which \(\xi(t)\) refers to the coefficient of the bulk viscosity. We investigate the cosmological scenario with \[\eta(H)=\bar{\eta}=\text{constant}\,.
#### vi.2.1 Little Rip in Viscous Universe

The Little Rip scenario can be realized in the viscous Universe as well. We refer to section III.1 for the basic dynamical equations of a viscous Universe. In the viscous Universe, the effective pressure term as in Eq. (35) or (36), which includes the bulk viscosity component, plays the crucial role. We generalize (617) in the presence of the bulk viscosity as \[p_{\rm eff}=-\rho-f(\rho)-\eta(H)\,, \tag{623}\] where we have generalized the last term in the right hand side of Eq. (36) as \(3\xi(t)H\to\eta(H)\), in which \(\xi(t)\) refers to the coefficient of the bulk viscosity. We investigate the cosmological scenario with \[\eta(H)=\bar{\eta}=\text{constant}\,. \tag{624}\] For the constant \(\eta(H)\), using the conservation equation \(\dot{\rho}+3H(p_{\rm eff}+\rho)=0\), one can find the scale factor as \[a=\exp\left(\frac{1}{3}\int_{\rho_{0}}^{\rho}\frac{d\rho}{\bar{ \eta}+f(\rho)}\right)\,, \tag{625}\] where we have explicitly set \(a_{0}=1\). Further, using the conservation and Friedmann equations, one can express the time as \[t-t_{0}=\frac{1}{\sqrt{3}\kappa}\int_{\rho_{0}}^{\rho}\frac{d\rho}{ \sqrt{\rho}\left[\bar{\eta}+f(\rho)\right]}\,. \tag{626}\] Now, for the model \(f(\rho)=\frac{A}{\sqrt{3}\kappa}\sqrt{\rho}\) (in this case, the dimension of \(A\), in natural units, is that of the Planck mass), one obtains \[t-t_{0}=\frac{2}{A}\ln\left(\frac{\sqrt{3}\kappa\bar{\eta}+A\sqrt {\rho}}{\sqrt{3}\kappa\bar{\eta}+A\sqrt{\rho_{0}}}\right)\,, \tag{627}\] and consequently, \[\rho(t)=\frac{1}{A^{2}}\left[\left(\sqrt{3}\kappa\bar{\eta}+A \sqrt{\rho_{0}}\right)\exp\!\left(\frac{A}{2}(t-t_{0})\right)-\sqrt{3}\kappa \bar{\eta}\right]^{2}\,. \tag{628}\] Thus, we can see that in order to realize \(\rho\to\infty\), \(t\to\infty\) is essential. This is the Little Rip scenario in the viscous Universe. However, it is important to mention that the above Little Rip scenario depends on the EoS (623) with the constant \(\eta(H)\). Interestingly, one can also realize the Little Rip phenomenon in a more generalized scenario in which \(\eta(H)\) is not strictly a constant. To illustrate this, we consider the following example: \[f(\rho)=\frac{A}{\sqrt{3}\kappa}\,\rho^{\nu+1/2},\qquad\xi(t)= \frac{b}{3\kappa^{2}}\,\rho^{\gamma}\,, \tag{629}\] where \(\nu\), \(b\) and \(\gamma\) are constants.7 Thus, for the above choice, one can solve for the cosmic time as \[t-t_{0}=\int_{\rho_{0}}^{\rho}\frac{d\rho}{\rho\left(A\,\rho^{ \nu}+b\,\rho^{\gamma}\right)}\,, \tag{630}\] which can be expressed after integration in terms of the hypergeometric function as \[t-t_{0}=\left[\frac{\nu\,\rho^{-\gamma}}{\gamma b(\gamma-\nu)} +\frac{\rho^{-\gamma}}{\gamma b(\gamma-\nu)}\left\{-\nu+(\nu-\gamma)\,_{2}F_{ 1}\left(1,\frac{\gamma}{\gamma-\nu},1+\frac{\gamma}{\gamma-\nu},-\frac{A}{ b}\rho^{\nu-\gamma}\right)\right\}\,\right]. \tag{631}\] Footnote 7: One can quickly note that for \(\nu=0\) and \(\gamma=-1/2\), we recover the previous scenario with \(f(\rho)\propto\sqrt{\rho}\) and \(\eta(H)=\text{constant}\). For a constant bulk viscous coefficient, i.e., \(\xi(t)=\text{constant}\) (equivalently, \(\gamma=0\) in Eq. (629)) together with \(\nu=0\) in Eq. (629), we obtain (note that in that case \(A\) and \(b\) have units of the Planck mass): \[t-t_{0}=\frac{1}{(A+b)}\ln\frac{\rho}{\rho_{0}}\,, \tag{632}\] and this presents the Little Rip phenomenon, because \(\rho\to\infty\) requires \(t\to\infty\), that is, the energy density does not diverge at any finite time [864].
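The solution (628) can be verified symbolically; a minimal sympy sketch (working with \(s=\sqrt{\rho}\) so that all quantities stay manifestly positive):

```python
# A symbolic sanity check (sympy) that Eq. (628) solves the viscous
# conservation law drho/dt = 3 H (f(rho) + eta_bar), with
# H = kappa*sqrt(rho/3) and f(rho) = A*sqrt(rho)/(sqrt(3)*kappa).
import sympy as sp

t, t0 = sp.symbols('t t_0', real=True)
A, kappa, eta, rho0 = sp.symbols('A kappa eta_bar rho_0', positive=True)

s = ((sp.sqrt(3)*kappa*eta + A*sp.sqrt(rho0))*sp.exp(A*(t - t0)/2)
     - sp.sqrt(3)*kappa*eta)/A                 # s = sqrt(rho) from Eq. (628)
rho = s**2
H = kappa*s/sp.sqrt(3)
f = A*s/(sp.sqrt(3)*kappa)

print(sp.simplify(sp.diff(rho, t) - 3*H*(f + eta)))   # -> 0
```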
### Geometrical invariants to remove the finite-time future singularities

In Section IV we have seen that different modified gravity theories may lead to a variety of finite-time future singularities. However, with a proper choice of the geometrical invariants describing the underlying gravitational theory, it is possible to remove such finite-time singularities. In this section we shall describe briefly how one can avoid such finite-time future singularities in various modified gravity theories. In the context of the \(F(R)\) gravity theory, one can reconstruct \(F(R)\) gravity models in such a way that the finite-time future singularities disappear from the picture. However, such reconstructions should be consistent with the existing theories of the Universe, and the reconstructed \(F(R)\) model should pass the essential tests of gravity. This point was first raised in Ref. [879] and subsequently discussed in Refs. [589, 590]. We recall that the general action of \(F(R)\) gravity in the Jordan frame that depends solely on the curvature [i.e. Eq. (154) without matter] has the form \[S=\int d^{4}x\sqrt{-g}\;\frac{F(R)}{2\kappa^{2}}\,. \tag{633}\] Introducing the auxiliary fields \(A\), \(B\), one can rewrite the action (633) as Eq. (201) and consequently as Eq. (202). Now, using the conformal transformation \(g_{\mu\nu}\to e^{\sigma}g_{\mu\nu}\) where \(\sigma=-\ln F^{\prime}(A)\), one can obtain the action in the Einstein frame in Eq. (205) and the corresponding potential \(V(A)\) in Eq. (206) [see section IV.5.1 for more details]. Dealing with the model \[F(R)=R-\gamma R^{-n}+\eta R^{2}\,, \tag{634}\] we consider the case \(-1<n<-\frac{1}{2}\), \(\gamma<0\), and \(\eta>0\). With these choices, the field \(\sigma\) turns out to be \(\sigma=-\ln\left(1+n\gamma A^{-n-1}+2\eta A\right)\) where \(1+n\gamma A^{-n-1}+2\eta A>0\), and the potential has the following form \[V(A)=\frac{(n+1)\gamma A^{-n}+\eta A^{2}}{\left(1+n\gamma A^{-n-1}+2\eta A \right)^{2}}. \tag{635}\] Now, since \[\frac{d\sigma}{dA}=-\frac{F^{\prime\prime}(A)}{F^{\prime}(A)}=-\frac{-n(n+1) \gamma A^{-n-2}+2\eta}{1+n\gamma A^{-n-1}+2\eta A}, \tag{636}\] there is a branch point at \(A=A_{0}\equiv\left\{\frac{n(n+1)\gamma}{2\eta}\right\}^{\frac{1}{n+2}}\), or \(\sigma=\sigma_{0}\equiv-\ln\left(1+(n+2)\left(\frac{2\eta}{n+1}\right)^{\frac {n+1}{n+2}}(n\gamma)^{\frac{1}{n+2}}\right)\), where \(\frac{d\sigma}{dA}=0\), i.e. \(F^{\prime\prime}(A)=0\). For small values of \(A\), the potential \(V(A)\) approaches \(\frac{A^{n+2}}{(n+1)\gamma}\). However, when \(A\) is very large, \(V(A)\) approaches a constant, \(V(A)\rightarrow\frac{1}{4\eta}\). Also, the potential \(V(A)\) vanishes at \(A=A_{1}\equiv\left\{-\frac{(n+1)\gamma}{\eta}\right\}^{1/(n+2)}\) (since \(0<\frac{n\gamma}{2}<-\gamma\), hence \(A_{0}<A_{1}\)). Further, looking at the expression for \(V^{\prime}(A)\) given by \[V^{\prime}(A)=\frac{\left\{-n(n+1)\gamma A^{-n-2}+2\eta\right\}A\left\{1-(n+ 2)\gamma A^{-n-1}\right\}}{\left(1+n\gamma A^{-n-1}+2\eta A\right)^{3}}, \tag{637}\] one can notice that \(V^{\prime}(A)\) has an extremum at \(A=A_{0}\). Now, as there is a branch point at \(\sigma=\sigma_{0}\), if we start from small curvature, the growth of the curvature stops at \(R=A_{0}\), where \(\sigma=\sigma_{0}\). In fact, at the branch point, where \(F^{\prime\prime}(A)=0\), the mass \(m_{\sigma}\) (\(\propto\frac{d^{2}V}{d\sigma^{2}}\)) of \(\sigma\) becomes infinite since \[\frac{d^{2}V}{d\sigma^{2}}=\frac{F^{\prime}(A)}{F^{\prime\prime}(A)}\frac{d}{ dA}\left(\frac{F^{\prime}(A)}{F^{\prime\prime}(A)}\frac{dV(A)}{dA}\right)=-\frac{3}{F^ {\prime\prime}(A)}+\frac{A}{F^{\prime}(A)}+\frac{2F(A)}{F^{\prime}(A)^{2}} \rightarrow+\infty. \tag{638}\] Note also that \(F^{\prime\prime}(A)<0\) when \(A<A_{0}\). Then the growth of \(\sigma\) terminates at \(\sigma=\sigma_{0}\). Hence, with the addition of the \(R^{2}\) term, the cosmic doomsday does not occur and the Universe ends up in a de Sitter phase.
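The structure of the potential (635) is easy to explore numerically; a minimal sketch, assuming the illustrative parameter choice \(n=-0.75\), \(\gamma=-1\), \(\eta=1\) (consistent with \(-1<n<-\frac{1}{2}\), \(\gamma<0\), \(\eta>0\), but otherwise arbitrary):

```python
# A numerical sketch of the potential (635) for the model (634),
# assuming n = -0.75, gamma = -1, eta = 1 for illustration.
import numpy as np
from scipy.optimize import brentq

n, gamma, eta = -0.75, -1.0, 1.0

def V(A):
    return ((n + 1)*gamma*A**(-n) + eta*A**2) / (1 + n*gamma*A**(-n-1) + 2*eta*A)**2

def Fpp(A):                                   # F''(A) = -n(n+1) gamma A^(-n-2) + 2 eta
    return -n*(n + 1)*gamma*A**(-n - 2) + 2*eta

A0 = (n*(n + 1)*gamma/(2*eta))**(1/(n + 2))   # branch point, where F''(A0) = 0
A1 = (-(n + 1)*gamma/eta)**(1/(n + 2))        # zero of the potential, A0 < A1
print(A0, brentq(Fpp, 1e-3, 1.0))             # closed form vs. numerical root: ~0.150
print(V(A1))                                  # ~0, as expected
print(V(1e6), 1/(4*eta))                      # V(A) -> 1/(4 eta) for large A
```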
The same situation arises for the Horava-Lifshitz \(F(R)\) gravity too. As explored in Ref. [400], the Horava-Lifshitz \(F(R)\) gravity has a quite rich cosmological structure, and the inclusion of \(R^{2}\) in \(F(R)\) can remove the finite-time future singularities from the picture. On the other hand, recalling the \(F(T)\) gravitational theory, it is also possible to construct models of \(F(T)\) gravity in which the finite-time future singularities are avoided. We consider the correction of the power-law model \(F(T)=AT^{\alpha}\) of Eq. (391) as follows \[F(T)=AT^{\alpha}+BT^{\delta}, \tag{639}\] where \(BT^{\delta}\) (\(\delta\neq 0\), \(B\neq 0\)) is the correction term. This model has some interesting features. For \(\delta>1\), all four types of finite-time future singularities can be avoided in this model [409]. We emphasize that the model in which \(T^{2}\) is treated as the correction term, obtained for \(\delta=2\) (the minimum integer satisfying the condition \(\delta>1\)), therefore removes all four types of finite-time future singularities. This is similar to the \(F(R)\) gravity theory, where it has been observed that the inclusion of the \(R^{2}\) term can cure the finite-time future singularities [589, 590, 879]. Apart from the above models, there are other types of models which can avoid the finite-time future singularities: * **Exponential model:** In the exponential model of the type \[F(T)=C\exp(\lambda T),\] (640) where \(C\) and \(\lambda\) are nonzero constants, we do not realize any finite-time future singularities [409]. * **Logarithmic model:** In the logarithmic model of the form \[F(T)=D\ln\left(\gamma T\right),\] (641) where \(D\) (\(\neq 0\)) and \(\gamma\) (\(>0\)) are constants, we do not realize any finite-time future singularities [409]. In the context of non-local gravity (section IV.1), one can also avoid the finite-time future singularities. We consider the scenario in which a correction term of the form \(uR^{2}/\left(2\kappa^{2}\right)\), where \(u\) (\(\neq 0\)) is any real number, is added to the action (394) as follows [662]: \[S=\int d^{4}x\sqrt{-g}\Bigg{[}\frac{1}{2\kappa^{2}}\bigg{\{}R\left(1+f(\Box^{ -1}R)\right)+uR^{2}-2\Lambda\bigg{\}}+\mathcal{L}_{\rm matter}\left(Q;g \right)\Bigg{]}\,. \tag{642}\] Along with the two fields \(\eta\), \(\xi\) as in section IV.1, introducing a scalar field \(\zeta\), the action in Eq. (642) can be expressed as \[S=\int d^{4}x\sqrt{-g}\Bigg{[}\frac{1}{2\kappa^{2}}\bigg{\{}R\left(1+f(\eta) \right)-\partial_{\mu}\xi\partial^{\mu}\eta-\xi R+u\left(2\zeta R-\zeta^{2} \right)-2\Lambda\bigg{\}}+\mathcal{L}_{\rm matter}\Bigg{]}\,. \tag{643}\] Note that by varying the action (643) with respect to \(\zeta\), one obtains \(\zeta=R\), and hence by substituting \(\zeta=R\) in (643), one gets back (642).
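This auxiliary-field step is algebraic and can be confirmed with a one-line symbolic computation:

```python
# A quick symbolic check (sympy) of the auxiliary-field trick in (642)-(643):
# varying u*(2*zeta*R - zeta**2) with respect to zeta gives zeta = R, and
# substituting back reproduces the u*R**2 term of the action (642).
import sympy as sp

R, zeta, u = sp.symbols('R zeta u')
term = u*(2*zeta*R - zeta**2)
sol = sp.solve(sp.diff(term, zeta), zeta)           # -> [R]
print(sol, sp.simplify(term.subs(zeta, sol[0])))    # [R]  u*R**2
```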
Now, in the context of a flat FLRW universe, the gravitational field equations for the action (642) can be written as [662] \[-3H^{2}\left(1+f(\eta)-\xi\right)+\frac{1}{2}\dot{\xi}\dot{\eta} -3H\left(f^{\prime}(\eta)\dot{\eta}-\dot{\xi}\right)+\Theta+\Lambda+\kappa^{2} \rho_{\rm m}=0\,, \tag{644}\] \[\left(2\dot{H}+3H^{2}\right)\left(1+f(\eta)-\xi\right)+\frac{1}{ 2}\dot{\xi}\dot{\eta}+\left(\frac{d^{2}}{dt^{2}}+2H\frac{d}{dt}\right)\left(f (\eta)-\xi\right)+\Xi-\Lambda+\kappa^{2}p_{\rm m}=0\,, \tag{645}\] where \(\rho_{\rm m}\), \(p_{\rm m}\) are respectively the energy density and pressure of the matter sector; \(\Theta\) and \(\Xi\) are the contributions from the additional term \(uR^{2}/\left(2\kappa^{2}\right)\), given by \[\Theta\equiv u\left(-6H^{2}R+\frac{1}{2}R^{2}-6H\dot{R}\right)=18u \left(-6H^{2}\dot{H}+\dot{H}^{2}-2H\ddot{H}\right)\,, \tag{646}\] \[\Xi\equiv u\left[2\left(2\dot{H}+3H^{2}\right)R-\frac{1}{2}R^{2} +2\ddot{R}+4H\dot{R}\right]=6u\left(9\dot{H}^{2}+18H^{2}\dot{H}+2\dddot{H}+12 H\ddot{H}\right)\,. \tag{647}\] Now, for the Hubble parameter as in (59) or (99), in the limit \(t\to t_{\rm s}\), \(\Theta\) of Eq. (646) and \(\Xi\) of Eq. (647) are approximately given by \[\Theta\sim 18u\left[-6h_{\rm s}^{3}\beta\left(t_{\rm s}-t\right)^{-\left(3 \beta+1\right)}+h_{\rm s}^{2}\beta^{2}\left(t_{\rm s}-t\right)^{-2\left(\beta +1\right)}-2h_{\rm s}^{2}\beta\left(\beta+1\right)\left(t_{\rm s}-t\right)^{-2 \left(\beta+1\right)}\right]\,, \tag{648}\] \[\Xi\sim 6u\left[9h_{\rm s}^{2}\beta^{2}\left(t_{\rm s}-t\right)^{-2\left(\beta +1\right)}+18h_{\rm s}^{3}\beta\left(t_{\rm s}-t\right)^{-\left(3\beta+1\right) }+2h_{\rm s}\beta\left(\beta+1\right)\left(\beta+2\right)\left(t_{\rm s}-t \right)^{-\left(\beta+3\right)}+12h_{\rm s}^{2}\beta\left(\beta+1\right)\left( t_{\rm s}-t\right)^{-2\left(\beta+1\right)}\right]\,. \tag{649}\] We now examine the r.h.s. of Eq. (396) with \(\Theta\) in Eq. (646). For \(\beta>1\), the first term of Eq. (648), i.e. \(-108uh_{\rm s}^{3}\beta\left(t_{\rm s}-t\right)^{-(3\beta+1)}\), becomes the leading term. As \(u\neq 0\), \(h_{\rm s}\neq 0\) and \(\beta\neq 0\), this leading term does not vanish, which means that the additional \(R^{2}\) term can remove the finite-time future singularities. On the other hand, for \(-1<\beta<0\) or \(0<\beta<1\), the second and third terms of Eq. (648) become the leading terms. Again, since \(u\neq 0\), \(h_{\rm s}\neq 0\), \(\beta\neq 0\) and \(\beta\neq-2\), these leading terms do not vanish. This means that the additional \(R^{2}\) term can remove the finite-time future singularities. We remark that, similarly to \(F(R)\) gravity, where the inclusion of a correction term \(R^{2}\) could cure the singularities [589, 590, 879], in the non-local gravity [662] too this situation happens.
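The FLRW reductions in the second equalities of (646)-(647) follow from \(R=6\dot{H}+12H^{2}\) and can be verified with a short symbolic computation:

```python
# A symbolic check (sympy) of the second equalities in (646)-(647), using
# R = 6*Hdot + 12*H**2 for a flat FLRW metric.
import sympy as sp

t, u = sp.symbols('t u')
H = sp.Function('H')(t)
R = 6*sp.diff(H, t) + 12*H**2

Theta = u*(-6*H**2*R + R**2/2 - 6*H*sp.diff(R, t))
Xi = u*(2*(2*sp.diff(H, t) + 3*H**2)*R - R**2/2
        + 2*sp.diff(R, t, 2) + 4*H*sp.diff(R, t))

Theta_rhs = 18*u*(-6*H**2*sp.diff(H, t) + sp.diff(H, t)**2
                  - 2*H*sp.diff(H, t, 2))
Xi_rhs = 6*u*(9*sp.diff(H, t)**2 + 18*H**2*sp.diff(H, t)
              + 2*sp.diff(H, t, 3) + 12*H*sp.diff(H, t, 2))

print(sp.simplify(Theta - Theta_rhs), sp.simplify(Xi - Xi_rhs))   # -> 0 0
```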
### Scalar field models avoiding finite-time future singularities

According to [865], it is possible to construct scalar field phantom DE models in which the Big Rip singularity can be avoided. Considering a phantom scalar field \(\phi\) with potential \(V(\phi)\) as in [865], in the background of a FLRW universe, the energy density and pressure of the phantom field are respectively given by [119] \[\rho_{\phi}=-\dot{\phi}^{2}/2+V(\phi),\quad p_{\phi}=-\dot{\phi}^{2}/2-V(\phi), \tag{650}\] and the equation of state of this phantom field, \(w_{\phi}=p_{\phi}/\rho_{\phi}\), takes the form \[w_{\phi}=\frac{\dot{\phi}^{2}/2+V(\phi)}{\dot{\phi}^{2}/2-V(\phi)}, \tag{651}\] which satisfies \(w_{\phi}<-1\) if \(\dot{\phi}^{2}/2<V(\phi)\). The equation of motion of the phantom field is given by \[\ddot{\phi}+3H\dot{\phi}-a^{-2}\nabla^{2}\phi-V^{\prime}(\phi)=0, \tag{652}\] where the prime denotes the derivative with respect to \(\phi\) and \(\nabla^{2}\) denotes the spatial Laplacian, i.e. \(\nabla^{2}\phi=\partial_{x}^{2}\phi+\partial_{y}^{2}\phi+\partial_{z}^{2}\phi\). In this model, the dynamics of the universe heavily depends on the potential function \(V(\phi)\). As argued in [865], the following Gaussian potential has some interesting consequences in the context of finite-time singularities: \[V(\phi)=V_{0}e^{-(\phi^{2}/\sigma^{2})}\, \tag{653}\] where \(V_{0}\) and \(\sigma\) are constants. For this particular choice of the potential, the evolution of the scalar field, the density parameter of the phantom scalar field and its equation of state have been numerically investigated in [865]. From the evolution of the equation of state \(w_{\phi}\), one finds that [865]: during the initial stages of the evolution, the phantom scalar field is frozen by the expansion and behaves like a cosmological constant (i.e. \(w_{\phi}\simeq-1\)). After that, the field starts to evolve quite rapidly towards the maximum of its potential, and the energy density of the phantom field becomes dominant, while \(w_{\phi}\) crosses the phantom divide line \(w_{\phi}=-1\) and becomes more negative. Finally, in the very late phase of the universe, the field comes to rest at the maximum of the potential and the accelerating expansion begins with \(w_{\phi}=-1\). In the late phase, as \(w_{\phi}\) does not go beyond \(-1\), no future singularity appears; in fact, the universe approaches a de Sitter phase. On the other hand, it is interesting to note that for \(F(R)=R+\alpha R^{2}\) (i.e. the Starobinsky model [880]), where \(\alpha\) is a constant, one can derive the corresponding quintessence scalar field potential of exponential type [881]. As, due to the presence of \(R^{2}\) in \(F(R)\), no finite-time future singularities appear [879], we conclude that the quintessence scalar field model with an exponential potential could avoid the finite-time future singularities.
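This qualitative evolution is easy to reproduce with a minimal numerical sketch of the homogeneous limit of (650)-(653); the parameter values below (\(\kappa=1\), \(V_{0}=1\), \(\sigma=2\), initial \(\phi=2\), \(\dot{\phi}=0\), and a pressureless matter density \(\rho_{\rm m}=10\)) are purely illustrative and are not those of [865]:

```python
# A minimal numerical sketch of the homogeneous phantom field with the
# Gaussian potential (653), plus pressureless matter; all parameter values
# are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

kappa, V0, sigma = 1.0, 1.0, 2.0
V = lambda phi: V0*np.exp(-phi**2/sigma**2)
dV = lambda phi: -2*phi/sigma**2 * V(phi)

def rhs(t, y):
    phi, phidot, rho_m = y
    rho_phi = -0.5*phidot**2 + V(phi)            # Eq. (650), phantom sign
    H = kappa*np.sqrt((rho_m + rho_phi)/3.0)
    return [phidot,
            -3*H*phidot + dV(phi),               # homogeneous limit of Eq. (652)
            -3*H*rho_m]                          # pressureless matter

sol = solve_ivp(rhs, (0.0, 60.0), [2.0, 0.0, 10.0], rtol=1e-10,
                dense_output=True)
for t in (0.0, 5.0, 15.0, 30.0, 60.0):
    phi, phidot, rho_m = sol.sol(t)
    rho = -0.5*phidot**2 + V(phi)
    p = -0.5*phidot**2 - V(phi)
    print(f"t={t:5.1f}  w_phi={p/rho:+.4f}  Omega_phi={rho/(rho + rho_m):.3f}")
# w_phi starts at -1, dips below -1 while the field climbs the potential,
# and relaxes back to -1 as phi settles at the maximum: no Big Rip occurs.
```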
### Inhomogeneous equation of state

In the context of various modified gravity theories, using the effective energy density \(\rho_{\rm eff}\) and pressure \(p_{\rm eff}\), one can derive an effective equation of state \(p_{\rm eff}-w\rho_{\rm eff}=G(H,\dot{H},\ddot{H})\), see Eq. (158) for the \(F(R)\) gravity case [this holds for other gravity theories as well], in which \(G(H,\dot{H},\ddot{H})=-\frac{1}{\kappa^{2}}\left(2\dot{H}+3(1+w)H^{2}\right)\), see Eq. (159). The explicit form of \(G(H,\dot{H},\ddot{H})\), involving the geometric invariants and their time derivatives, is given in Eq. (160). Therefore, the modified gravity theories can lead to various inhomogeneous equations of state depending on the choice of the underlying gravitational theory. In this section we shall investigate the conditions on \(G(H,\dot{H},\ddot{H})\) that may prevent the appearance of finite-time future singularities. If Eq. (159) is found to be inconsistent for some kind of finite-time future singularity, then one can conclude that no such finite-time future singularity is realized. We consider the case when \(H\) evolves as (100). In that case, the r.h.s. of (159) behaves as [589] \[G(H,\dot{H},\ddot{H})=-\frac{1}{\kappa^{2}}\left(2\dot{H}+3(1+w)H^{2}\right) \sim\left\{\begin{array}{ll}-\frac{3(1+w)h_{s}^{2}}{\kappa^{2}}\left(t_{s}-t \right)^{-2\beta}&\text{when}\ \ \beta>1\\ -\frac{2\beta h_{s}+3(1+w)h_{s}^{2}}{\kappa^{2}}\left(t_{s}-t\right)^{-2}& \beta=1\\ -\frac{2\beta h_{s}}{\kappa^{2}}\left(t_{s}-t\right)^{-\beta-1}&-1<\beta<1\,, \ \beta\neq 0\end{array}\right.\,. \tag{654}\] Now, for \(\beta>-1\), \(\beta\neq 0\), which corresponds to the Type I, II and III singularities, the l.h.s. of (654), i.e., \(G\left(H,\dot{H},\cdots\right)\) of (159), diverges. Thus, in order to avoid the appearance of such finite-time singularities, \(G\) must be bounded; in this case, Eq. (159) becomes inconsistent with the behavior of the r.h.s. of (654). An example in this direction is the following [589] \[G\left(H,\dot{H},\cdots\right)=G_{0}\left(\frac{1+aH^{2}}{1+bH^{2}}\right) \tag{655}\] where \(a\) and \(b\) are positive real numbers and \(G_{0}\) is a constant. We further note that for \(\beta>0\), corresponding to the Type I or III singularity, the r.h.s. of (654) becomes negative. Hence, if \(G\left(H,\dot{H},\cdots\right)\) is positive for large \(H\), we do not realize any finite-time future singularity. Another possibility is that \(G\left(H,\dot{H},\cdots\right)\) contains a term like \(\sqrt{1-a^{2}H^{2}}\), which becomes imaginary for large \(H\); then Eq. (654) becomes inconsistent. Thus, the singularities where the curvature blows up (Type I, II, III) could be avoided. We remark that such a mechanism could be applied even if we have the phantom EoS, i.e. \(w<-1\). In that case one can add an extra term \(G_{1}(H)=G_{0}\left(\sqrt{1-H^{2}/H_{0}^{2}}-1\right)\) to \(G\left(H,\dot{H},\cdots\right)\). Here, \(G_{0}\) and \(H_{0}\) are real numbers. If \(H_{0}\) is taken to be large enough, \(G_{1}(H)\) is not relevant at small curvature but becomes relevant at large curvature, and hence the possibility of the curvature singularity is avoided.
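The boundedness of the example (655) is immediate to see numerically; a short sketch, assuming the illustrative values \(G_{0}=1\), \(a=2\), \(b=1\):

```python
# A small numerical illustration of the bounded inhomogeneous EoS term
# (655), assuming G0 = 1, a = 2, b = 1: G stays finite for arbitrarily
# large H, so it cannot match the divergent behavior in (654), and the
# Type I, II, III singularities are excluded.
import numpy as np

G0, a, b = 1.0, 2.0, 1.0
G = lambda H: G0*(1 + a*H**2)/(1 + b*H**2)

for H in (0.0, 1.0, 1e3, 1e6):
    print(f"H={H:9.1e}  G={G(H):.6f}")        # G -> G0*a/b = 2 as H -> infinity
```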
### Future Singularities with the Account of Quantum Effects

In this subsection we extend the analysis of singularities performed when we studied semi-classical gravity. It is well known that, when the Universe approaches the future singularity, its curvature and other geometrical invariants grow. As a result, the quantum effects may change the behavior of the future spacetime singularity. For example, one can show that quantum effects may change the structure of the future singularity, see [591, 675] (see also [882, 883, 884, 418, 885, 886, 887, 888]). We will use the simple qualitative arguments of Ref. [299] to show the role of quantum effects in conformally invariant theories near the future singularity. It is well known that the generalized conformal anomaly \(T_{A}\) has the following form: \[T_{A}=b\left(\mathcal{F}+\frac{2}{3}\Box R\right)+b^{\prime}G+b^{\prime\prime} \Box R\,, \tag{656}\] where \(G\) is the Gauss-Bonnet invariant in (141) and \(\mathcal{F}\) denotes the square of the 4D Weyl tensor, given by \[\mathcal{F}=\frac{1}{3}R^{2}-2R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu \nu\rho\sigma}\,. \tag{657}\] In the case when matter is conformally invariant and there appear \(N_{0}\) scalars, \(N_{1/2}\) spinors, \(N_{1}\) vector fields, \(N_{2}\) (\(=1\)) gravitons, and \(N_{\text{HD}}\) higher-derivative conformal scalars, the coefficients \(b\) and \(b^{\prime}\), which are obtained using adiabatic regularization, take the following forms, \[b=\frac{N_{0}+6N_{1/2}+12N_{1}+611N_{2}-8N_{\text{HD}}}{120(4\pi)^{2}}\,,\quad b ^{\prime}=-\frac{N_{0}+11N_{1/2}+62N_{1}+1411N_{2}-28N_{\text{HD}}}{360(4\pi)^ {2}}\,. \tag{658}\] As given in (658), for the usual matter, \(b\) is positive and \(b^{\prime}\) is negative; an exception is the higher-derivative conformal scalar. Note that the value of \(b^{\prime\prime}\) can always be shifted by the addition of an \(R^{2}\) term to the classical action. If one writes the energy density \(\rho_{A}\) and pressure \(p_{A}\) corresponding to the trace anomaly \(T_{A}\), then one has \(T_{A}=-\rho_{A}+3p_{A}\). Now, with the use of the energy conservation law in the FLRW Universe, one has \[0=\frac{d\rho_{A}}{dt}+3H\left(\rho_{A}+p_{A}\right)\,, \tag{659}\] which can be expressed by eliminating \(p_{A}\) as \[T_{A}=-4\rho_{A}-\frac{1}{H}\frac{d\rho_{A}}{dt}\,, \tag{660}\] and, with the help of integration, \(\rho_{A}\) can be found as [374]: \[\rho_{A}=-\frac{1}{a^{4}}\int dt\,a^{4}HT_{A}\,. \tag{661}\] Using the above expression and identifying \(\rho_{\rm eff}=\rho_{A}\), we will consider the FLRW equation later. Before considering the FLRW equation, as in [299], we first consider, for simplicity, the trace of the Einstein equation including the trace anomaly, \[R=-\frac{\kappa^{2}}{2}\left(T_{\rm matter}+T_{A}\right)\,. \tag{662}\] Here, \(T_{\rm matter}\) refers to the trace of the matter energy-momentum tensor. Now, for the FLRW Universe, \(\mathcal{F}\) and \(G\) are as follows \[\mathcal{F}=0\,,\quad G=24\left(\dot{H}H^{2}+H^{4}\right)\,. \tag{663}\] What we would like to show is that, if there is a singularity, the trace equation (662) cannot be consistent. In particular, we will show that the contribution from the conformal anomaly in the r.h.s. of Eq. (662) is more singular than the scalar curvature in the l.h.s. of Eq. (662). Note that a rigorous study of the quantum effects may be done following [428]; however, this requires extensive numerical investigations depending on the particle content of the Universe as well as on the effective dark fluid. Now we assume that \(H\) behaves as in (99) or (100), neglect the contribution from matter, and put \(T_{\rm matter}=0\). In the case of the Type I singularity, the scalar curvature, which is given by \(R=12H^{2}+6\dot{H}\), behaves as \(R\sim\left(t_{s}-t\right)^{-2\beta}\); in the case of the Type III singularity, \(R\) behaves as \(R\sim\left(t_{s}-t\right)^{-\beta-1}\). On the other hand, in the case of the Type II singularity, \(R\) behaves as \(R\sim\left(t_{s}-t\right)^{-\beta-1}\) as well. When the Type IV singularity appears, if \(H_{s}(t)\neq 0\) in (100), \(R\) is finite, but if \(H_{s}(t)=0\), \(R\sim\left(t_{s}-t\right)^{-\beta-1}\). In the case of the Type I singularity, near the singularity, \(t\sim t_{s}\), as seen from Eq. (663), the Gauss-Bonnet invariant \(G\) behaves as \(G\sim 24H^{4}\sim\left(t_{s}-t\right)^{-4\beta}\), and therefore \(G\) becomes very large and the contribution from the matter \(T_{\rm matter}\) in (662) can be neglected. On the other hand, one finds \(\Box R\sim\left(t_{s}-t\right)^{-2\beta-2}\).
Then, since \(R\sim\left(t_{s}-t\right)^{-2\beta}\), \(T_{A}\) becomes much larger than \(R\), and therefore Eq. (662) cannot be satisfied. This shows that the quantum effects coming from the conformal anomaly remove the Type I singularity. In the case of the Type II singularity, we find that \(G\) behaves as \(G\sim 24\dot{H}H^{2}\sim\left(t_{s}-t\right)^{-3\beta-1}\). Since \(R\sim\left(t_{s}-t\right)^{-\beta-1}\), the Gauss-Bonnet term in \(T_{A}\) is less singular and therefore negligible compared with \(R\) and the contribution from the matter. Therefore, the Gauss-Bonnet term in \(T_{A}\) does not help to prevent the Type II singularity. Note, however, that \(\Box R\) behaves as \(\Box R\sim\left(t_{s}-t\right)^{-\beta-3}\), which is more singular than the scalar curvature. Then, if \(2b/3+b^{\prime\prime}\neq 0\), the contribution from \(T_{A}\) becomes much larger than \(R\) near the singularity \(t\sim t_{s}\), and Eq. (662) cannot be satisfied. Therefore, if \(2b/3+b^{\prime\prime}\neq 0\), even the Type II singularity can be prevented when the quantum effects due to the conformal anomaly are included. In the case of the Type III singularity, the Gauss-Bonnet invariant behaves as \(G\sim 24\dot{H}H^{2}\sim\left(t_{s}-t\right)^{-3\beta-1}\) and \(\Box R\) behaves as \(\Box R\sim\left(t_{s}-t\right)^{-\beta-3}\). Because the scalar curvature behaves as \(R\sim\left(t_{s}-t\right)^{-\beta-1}\), both terms, \(\Box R\) and \(G\), are more singular than the scalar curvature \(R\), and the Type III singularity is also prevented. Thus, we demonstrated that quantum effects may remove the finite-time future singularities. Note that the account of quantum gravity effects in specific models is also known to remove the Big Rip singularity [139].

### FLRW equation including Trace Anomaly

Here we consider the FLRW equation including the effective energy density \(\rho_{A}\) induced by the trace anomaly in (661). By using (656), (663), and (661), we find the explicit form of \(\rho_{A}\) as follows \[\rho_{A}=-\frac{1}{a^{4}}\int dt\,a^{4}H\left\{24b^{\prime}\left(\dot{H}H^{2}+H^ {4}\right)-6\left(b+b^{\prime\prime}\right)\left(\frac{d^{2}}{dt^{2}}+3H\frac {d}{dt}\right)\left(\dot{H}+2H^{2}\right)\right\}\,. \tag{664}\] Then the FLRW equation is given by \[\frac{3}{\kappa^{2}}H^{2}=\rho+\rho_{A}\,. \tag{665}\] Here \(\rho\) is the energy density corresponding to the phantom DE, which behaves as \(\rho\sim\rho_{0}a^{-3(1+w)}\) with \(w<-1\). By combining (664) and (665) and assuming \(\rho=\rho_{0}a^{-3(1+w)}\) (we take \(a_{0}=1\)), we obtain \[\frac{3}{\kappa^{2}}H^{2}a^{4}=\rho_{0}a^{1-3w}-\int dt\,a^{4}H\left\{24b^{\prime }\left(\dot{H}H^{2}+H^{4}\right)-6\left(b+b^{\prime\prime}\right)\left(\frac{d ^{2}}{dt^{2}}+3H\frac{d}{dt}\right)\left(\dot{H}+2H^{2}\right)\right\}\,. \tag{666}\] Differentiating the above expression with respect to \(t\), we obtain \[\frac{3}{\kappa^{2}}\left(\dot{H}+4H^{2}\right)=\left(1-3w\right)\rho_{0}a^{-3 (1+w)}-\left\{24b^{\prime}\left(\dot{H}H^{2}+H^{4}\right)-6\left(b+b^{\prime \prime}\right)\left(\frac{d^{2}}{dt^{2}}+3H\frac{d}{dt}\right)\left(\dot{H}+2 H^{2}\right)\right\}\,. \tag{667}\]
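Before analyzing the asymptotics of (667), the defining relation (661) can be cross-checked symbolically: for an arbitrary scale factor \(a(t)\) and anomaly \(T_{A}(t)\), the \(\rho_{A}\) of (661) satisfies the trace relation (660). A minimal sympy sketch:

```python
# A symbolic check (sympy) that rho_A defined in (661) satisfies the trace
# relation (660), for arbitrary a(t) and T_A(t).
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)
TA = sp.Function('T_A')(t)
H = sp.diff(a, t)/a

rho_A = -sp.Integral(a**4*H*TA, t)/a**4          # Eq. (661)
lhs = -4*rho_A - sp.diff(rho_A, t)/H             # r.h.s. of Eq. (660)
print(sp.simplify(lhs - TA))                     # -> 0
```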
If we assume \[H(t)\sim\widetilde{h}_{s}\left(t_{s}-t\right)^{\widetilde{\beta}}\,, \tag{668}\] with a positive constant \(\widetilde{h}_{s}\), we find \[\dot{H}\sim\left(t_{s}-t\right)^{\widetilde{\beta}-1}\,,\quad H^{2}\sim \left(t_{s}-t\right)^{2\widetilde{\beta}}\,,\quad a^{-3(1+w)}\sim\left\{ \begin{array}{ll}\mathrm{e}^{\,\frac{3(1+w)\widetilde{h}_{s}}{\widetilde{ \beta}+1}\left(t_{s}-t\right)^{\widetilde{\beta}+1}}&\mathrm{if}\,\,\widetilde{ \beta}\neq-1\\ \left(t_{s}-t\right)^{3(1+w)\widetilde{h}_{s}}&\mathrm{if}\,\,\widetilde{ \beta}=-1\end{array}\right.\,,\] \[\dot{H}H^{2}\sim\left(t_{s}-t\right)^{3\widetilde{\beta}-1}\,,\quad H^{4}\sim \left(t_{s}-t\right)^{4\widetilde{\beta}}\,,\quad\frac{d^{2}\dot{H}}{dt^{2}} \sim\left(t_{s}-t\right)^{\widetilde{\beta}-3}\,,\quad H\frac{d\dot{H}}{dt} \sim\left(t_{s}-t\right)^{2\widetilde{\beta}-2}\,,\] \[\frac{d^{2}H^{2}}{dt^{2}}\sim\left(t_{s}-t\right)^{2\widetilde{\beta}-2}\,, \quad H\frac{dH^{2}}{dt}\sim\left(t_{s}-t\right)^{3\widetilde{\beta}-1}\,. \tag{669}\] By using the above behaviors, we investigate what kind of singularity can be realized or prohibited. First, we consider the case of the Type I singularity, where \(\widetilde{\beta}\leq-1\). If \(\widetilde{\beta}<-1\), the energy density \(\rho=\rho_{0}a^{-3(1+w)}\) of matter grows very rapidly when \(t\to t_{s}\); the other terms in (667) cannot cancel this term, and therefore Eq. (667) is not satisfied. If \(\widetilde{\beta}=-1\), the l.h.s. of (667) behaves as \[\frac{3}{\kappa^{2}}\left(\dot{H}+4H^{2}\right)\sim\frac{3}{\kappa^{2}}\left( \widetilde{h}_{s}+4\widetilde{h}_{s}^{2}\right)\left(t_{s}-t\right)^{-2}>0\,. \tag{670}\] On the other hand, the first term in the r.h.s. of (667), which comes from the matter, behaves, as found in (669), as \[\left(1-3w\right)\rho_{0}a^{-3(1+w)}\sim\left(1-3w\right)\rho_{0}\left(t_{s}- t\right)^{3(1+w)\widetilde{h}_{s}}>0\,, \tag{671}\] and the second term in the r.h.s. of (667), which comes from the trace anomaly, behaves as \[-\left\{24b^{\prime}\left(\dot{H}H^{2}+H^{4}\right)-6\left(b+b^{ \prime\prime}\right)\left(\frac{d^{2}}{dt^{2}}+3H\frac{d}{dt}\right)\left( \dot{H}+2H^{2}\right)\right\}=\left[-24b^{\prime}\left(\widetilde{h}_{s}^{3}+ \widetilde{h}_{s}^{4}\right)+6\left(b+b^{\prime\prime}\right)\left(6\widetilde {h}_{s}+18\widetilde{h}_{s}^{2}+12\widetilde{h}_{s}^{3}\right)\right]\left(t_{ s}-t\right)^{-4}\,. \tag{672}\] Because \(b>0\) and \(b^{\prime}<0\) as in (658), and \(b^{\prime\prime}\) is arbitrary, as long as \(b+b^{\prime\prime}\geq 0\) the second term in the r.h.s. of (667) is positive. Comparing (670) and (672), we find that the l.h.s. of (667) cannot balance the second term in the r.h.s. of (667). Furthermore, because both the first and second terms in the r.h.s. of (667) are positive when \(b+b^{\prime\prime}\geq 0\), they cannot cancel each other. This tells us that the behavior with \(\widetilde{\beta}=-1\) does not satisfy Eq. (667). By combining the results for \(\widetilde{\beta}<-1\) and \(\widetilde{\beta}=-1\), we find that the Type I singularity is prohibited. We now consider the Type III singularity, where \(-1<\widetilde{\beta}<0\). We should note that in Eq. (667), in the limit \(t\to t_{s}\), the first term in the r.h.s. is finite, while the l.h.s. and the second term in the r.h.s. diverge. Therefore we may neglect the first term in the r.h.s. of (667). Because now \(2\widetilde{\beta}>\widetilde{\beta}-1>2\widetilde{\beta}-1\), the l.h.s. cannot balance the second term in the r.h.s.
and therefore (667) cannot be satisfied; the Type III singularity is prohibited. In the case of Type II, where \(0<\widetilde{\beta}<1\), in the limit \(t\to t_{s}\) the first term in the r.h.s. is finite, again. On the other hand, in this limit the l.h.s. of (667) diverges as \(\left(t_{s}-t\right)^{\widetilde{\beta}-1}\), while the second term in the r.h.s. behaves as \(\left(t_{s}-t\right)^{\widetilde{\beta}-3}\) if \(b+b^{\prime\prime}\neq 0\), or as \((t_{s}-t)^{3\widetilde{\beta}-1}\) if \(b+b^{\prime\prime}=0\); the l.h.s. cannot balance the second term in the r.h.s., and therefore (667) cannot be satisfied, again; the Type II singularity is prohibited. By combining the above results, we find that the Type I, II, and III singularities are prohibited by the quantum effects coming from the trace anomaly. The Type IV singularity can be allowed in general when there is any matter. When the matter can be neglected, to see what could happen, we solve (667) by putting \(\rho_{0}=0\) and by adjusting \(b^{\prime\prime}\) so that \(b+b^{\prime\prime}=0\). Then we can rewrite (667) as follows, \[1= -\frac{\dot{H}\left(1+8\kappa^{2}b^{\prime}H^{2}\right)}{4H^{2} \left(1+2\kappa^{2}b^{\prime}H^{2}\right)}\,, \tag{673}\] which can be integrated to give \[t-t_{s}=\frac{1}{4H}+\frac{3\kappa\sqrt{-2b^{\prime}}}{8}\ln \left|\frac{1+\kappa\sqrt{-2b^{\prime}}H}{1-\kappa\sqrt{-2b^{\prime}}H}\right|\,. \tag{674}\] The solution has two branches for positive \(H\), namely the region \(0<H<\frac{1}{\kappa\sqrt{-2b^{\prime}}}\) and the region \(\frac{1}{\kappa\sqrt{-2b^{\prime}}}<H<+\infty\). When \(H\to 0\), \(t\) goes to plus infinity, and when \(H\rightarrow\frac{1}{\kappa\sqrt{-2b^{\prime}}}\), \(t\) goes to plus infinity again; that is, \(t\) has a local minimum in the region \(0<H<\frac{1}{\kappa\sqrt{-2b^{\prime}}}\). In the region \(\frac{1}{\kappa\sqrt{-2b^{\prime}}}<H<+\infty\), when \(H\rightarrow\frac{1}{\kappa\sqrt{-2b^{\prime}}}\), \(t\) goes to plus infinity, after which \(t\) monotonically decreases as a function of \(H\), and when \(H\rightarrow+\infty\), \(t-t_{s}\) vanishes as \(t-t_{s}\sim\frac{1}{H}\). Then there could be three scenarios for the cosmology. In one scenario, the Universe starts from the finite \(t\) corresponding to the local minimum in the region \(0<H<\frac{1}{\kappa\sqrt{-2b^{\prime}}}\). After that, there are two possibilities: when \(t\) increases, either \(H\to 0\), or \(H\rightarrow\frac{1}{\kappa\sqrt{-2b^{\prime}}}\). In another scenario, the Universe starts at \(t\to t_{s}\) with \(H\rightarrow+\infty\). After that, \(H\) monotonically decreases and goes to \(H\rightarrow\frac{1}{\kappa\sqrt{-2b^{\prime}}}\) when \(t\rightarrow+\infty\).
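That (674) integrates (673) can be confirmed symbolically; a minimal sympy sketch on the branch \(0<H<\frac{1}{\kappa\sqrt{-2b^{\prime}}}\), writing \(b^{\prime}=-B\) with \(B>0\) (recall \(b^{\prime}<0\) for the usual matter):

```python
# A symbolic check (sympy) that differentiating the r.h.s. of (674) with
# respect to H reproduces dt/dH implied by (673); b' = -B with B > 0, and
# the absolute value is dropped on the branch 0 < H < 1/(kappa*sqrt(2B)).
import sympy as sp

H, kappa, B = sp.symbols('H kappa B', positive=True)
bp = -B                                                      # b' < 0

t_of_H = (1/(4*H)
          + 3*kappa*sp.sqrt(2*B)/8
            * sp.log((1 + kappa*sp.sqrt(2*B)*H)/(1 - kappa*sp.sqrt(2*B)*H)))

dt_dH = sp.diff(t_of_H, H)
target = -(1 + 8*kappa**2*bp*H**2)/(4*H**2*(1 + 2*kappa**2*bp*H**2))
print(sp.simplify(dt_dH - target))                           # -> 0
```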
### Future Singularities with the Account of Thermal Effects

We may recall that a large Hubble rate \(H\) means a large temperature of the Universe. Hawking radiation should effectively be generated at the apparent horizon of the FLRW Universe [885; 886]. Eventually, it should give an important contribution to the energy density of the late-time Universe, especially right before the Rip time. In other words, at a large temperature, which may even diverge at the Rip time, thermal radiation should appear.

#### vi.7.1 Type I Singularity with Thermal Effects: Transition to Type II Singularity

Near the Type I (Big Rip) singularity, the temperature of the Universe becomes large, and we may expect the generation of thermal radiation, as in the case of the Hawking radiation. The Hawking temperature \(T\) is proportional to the inverse of the radius \(r_{\rm H}\) of the apparent horizon, and the radius \(r_{\rm H}\) is proportional to the inverse of the Hubble rate \(H\). Therefore, the temperature \(T\) is proportional to the Hubble rate \(H\). As is well known in statistical physics, the energy density \(\rho_{\sf t\_rad}\) of the thermal radiation is proportional to the fourth power of the temperature. Then, when \(H\) is large enough, we may assume that the energy density of the thermal radiation is given by \[\rho_{\sf t\_rad}=\alpha H^{4}\,, \tag{675}\] with a positive constant \(\alpha\). At late times, the FLRW equation should be modified by the account of the thermal radiation, \[\frac{3}{\kappa^{2}}H^{2}=\rho+\alpha H^{4}\,. \tag{676}\] Here \(\rho\) is the energy density corresponding to the phantom DE, which behaves as \(\rho\sim\rho_{0}a^{-3(1+w)}\) with \(w<-1\). At late times, but much before the Big Rip time, the first term in equation (676) dominates, and therefore the Universe expands towards the Big Rip singularity, where the Hubble rate \(H\) behaves as in (99) with \(\beta=1\). Then, near the Big Rip time \(t_{s}\), the second term in (676) should dominate, and we obtain \[\frac{3}{\kappa^{2}}H^{2}\sim\alpha H^{4}\,, \tag{677}\] whose non-trivial solution is given by \[H^{2}=H^{2}_{\rm crit}\equiv\frac{3}{\kappa^{2}\alpha}\,. \tag{678}\] As \(H\) goes to a constant, we might expect that the spacetime approaches an asymptotically de Sitter spacetime, but this is not true. Even in the de Sitter spacetime, the scale factor \(a\) becomes larger and larger as an exponential function of \(t\), so the first term in equation (676) should finally dominate. The Hubble rate \(H\) is, however, already larger than \(H_{\rm crit}\), and there is no solution of (676). Then the Universe should end up at a finite time with some kind of singularity. For a more quantitative analysis, we solve (676) with respect to \(H^{2}\) as follows, \[H^{2}=\frac{\frac{3}{\kappa^{2}}\pm\sqrt{\frac{9}{\kappa^{4}}-4\alpha\rho_{0}a ^{-3(1+w)}}}{2\alpha}\,. \tag{679}\] As \(H^{2}\) is a real number, we find that there is a maximum for the scale factor \(a\), \[a\leq a_{\rm max}\equiv\left(\frac{9}{4\kappa^{4}\alpha\rho_{0}}\right)^{- \frac{1}{3(1+w)}}\,. \tag{680}\] Then we consider the behavior of \(a\) or \(H\) around the maximal \(a=a_{\rm max}\) by writing the scale factor \(a\) as \[a=a_{\rm max}\mathrm{e}^{N}\,. \tag{681}\] Here, \(N\) corresponds to the \(e\)-folding number, but \(N\) should be negative because \(a<a_{\rm max}\). Furthermore, as we are interested in the region \(a\sim a_{\rm max}\), we assume \(|N|\ll 1\). Then, by using \(H=\frac{dN}{dt}\), Eq. (679) can be rewritten as \[\left(1\mp\frac{1}{2}\sqrt{3\left(1+w\right)N}\right)dN\sim dt\sqrt{\frac{3}{2 \alpha\kappa^{2}}}\,, \tag{682}\] which can be integrated as \[N\mp\frac{1}{3}\left(-N\right)^{\frac{3}{2}}\sqrt{-3\left(1+w\right)}\sim- \left(t_{\rm max}-t\right)\sqrt{\frac{3}{2\alpha\kappa^{2}}}\,. \tag{683}\] Here \(a=a_{\rm max}\) when \(t=t_{\rm max}\). Because we are assuming \(|N|\ll 1\), Eq. (683) can be rewritten as \[N\sim-\left(t_{\rm max}-t\right)\sqrt{\frac{3}{2\alpha\kappa^{2}}}\mp\frac{ \sqrt{-3\left(1+w\right)}}{3}\left(\left(t_{\rm max}-t\right)\sqrt{\frac{3}{2 \alpha\kappa^{2}}}\,\right)^{\frac{3}{2}}\,. \tag{684}\]
Because \(H=\frac{dN}{dt}\), we find \[H\sim \sqrt{\frac{3}{2\alpha\kappa^{2}}}\mp\frac{\sqrt{-3\left(1+w \right)}}{2}\left(\sqrt{\frac{3}{2\alpha\kappa^{2}}}\right)^{\frac{3}{2}} \left(t_{\rm max}-t\right)^{\frac{1}{2}}\,,\] \[\dot{H}\sim \mp\frac{\sqrt{-3\left(1+w\right)}}{4}\left(\sqrt{\frac{3}{2 \alpha\kappa^{2}}}\right)^{\frac{3}{2}}\left(t_{\rm max}-t\right)^{-\frac{1}{2 }}\,. \tag{685}\] Then, in the limit \(t\to t_{\rm max}\), although \(H\) is finite, \(\dot{H}\) diverges. Therefore, the Universe ends up with a Type II singularity at \(t=t_{\rm max}\). Thus, we demonstrated that the account of thermal effects near the Big Rip singularity changes the Universe evolution into a finite-time Type II singularity.

#### vi.2.2 Type III Singularity with the Account of Thermal Effects: Transition to Type II Singularity

The scale factor which generates the Type III singularity can be expressed as \[a(t)=a_{s}\mathrm{e}^{-\frac{h_{s}}{1-\beta}(t_{s}-t)^{1-\beta}}\,, \tag{686}\] with \(a_{s}\), \(t_{s}\), \(\beta\), and \(h_{s}\) some constants. In order to generate the Type III singularity, we restrict the value of \(\beta\) as \[0<\beta<1\,. \tag{687}\] Then the Hubble rate \(H\) is given by \[H=h_{s}\left(t_{s}-t\right)^{-\beta}\,. \tag{688}\] Hence, in the limit \(t\to t_{s}\), \(H\) diverges but the scale factor \(a\) is finite. From Eq. (97) it follows \[\rho_{\rm eff}=\frac{3h_{s}^{2}}{\kappa^{2}}\left(t_{s}-t\right)^{-2\beta}\,, \quad p_{\rm eff}=-\frac{1}{\kappa^{2}}\left(-2\beta h_{s}\left(t_{s}-t\right)^ {-\beta-1}+3h_{s}^{2}\left(t_{s}-t\right)^{-2\beta}\right)\,. \tag{689}\] By eliminating \(\left(t_{s}-t\right)\), we find the following EoS, \[p_{\rm eff}=-\rho_{\rm eff}-\frac{2h_{s}\beta}{\kappa^{2}}\left(\frac{\kappa^ {2}\rho_{\rm eff}}{3h_{s}^{2}}\right)^{\frac{\beta+1}{2\beta}}\,. \tag{690}\] Using (686) and (689), one gets \[\rho_{\rm eff}=\frac{3}{\kappa^{2}}h_{s}^{2}\Bigg{(}\frac{1-\beta}{h_{s}}\ln \left(\frac{a_{s}}{a(t)}\right)\Bigg{)}^{-\frac{2\beta}{1-\beta}}\,. \tag{691}\] With the account of the thermal radiation, instead of (676), we have \[\frac{3}{\kappa^{2}}H^{2}=A\left(\ln\left(\frac{a_{s}}{a(t)}\right)\right)^{- B}+\alpha H^{4}\,,\quad A\equiv\frac{3}{\kappa^{2}}\left(\frac{h_{s}^{2}}{(1- \beta)^{2\beta}}\right)^{\frac{1}{1-\beta}}\,,\quad B\equiv\frac{2\beta}{1- \beta}>0\,. \tag{692}\] Then, instead of (679), we obtain \[H^{2}=\frac{\frac{3}{\kappa^{2}}\pm\sqrt{\frac{9}{\kappa^{4}}-4\alpha A\left( \ln\left(\frac{a_{s}}{a(t)}\right)\right)^{-B}}}{2\alpha}\,. \tag{693}\] Then, in order for \(H^{2}\) to be real, we find that there is a maximum \(a_{\rm max}\) for \(a(t)\), \[a(t)\leq a_{\rm max}\equiv a_{s}{\rm e}^{-\left(\frac{9}{4A\alpha\kappa^{4}} \right)^{-\frac{1}{B}}}<a_{s}\,. \tag{694}\] Because \(a_{\rm max}\) is smaller than \(a_{s}\), we find that the dark Universe with the future Type III singularity transits to one with a Type II singularity due to the account of thermal effects.
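A minimal numerical sketch of (692)-(694), assuming the illustrative values \(\kappa=\alpha=h_{s}=a_{s}=1\) and \(\beta=1/2\) (chosen only to make the bound visible, not taken from the text):

```python
# A numerical sketch of the thermally corrected Type III case (692)-(694),
# assuming kappa = alpha = h_s = a_s = 1 and beta = 1/2 for illustration.
import numpy as np

kappa, alpha, h_s, a_s, beta = 1.0, 1.0, 1.0, 1.0, 0.5
A = 3/kappa**2 * (h_s**2/(1 - beta)**(2*beta))**(1/(1 - beta))
B = 2*beta/(1 - beta)

a_max = a_s*np.exp(-(9/(4*A*alpha*kappa**4))**(-1/B))        # Eq. (694)
print(a_max)                                                 # ~0.099 < a_s

def H2(a):                                                   # Eq. (693), '-' branch
    disc = 9/kappa**4 - 4*alpha*A*np.log(a_s/a)**(-B)
    return (3/kappa**2 - np.sqrt(max(disc, 0.0)))/(2*alpha)  # clamp rounding

for a in (0.02, 0.05, 0.09, a_max):
    print(f"a={a:.4f}  H^2={H2(a):.4f}")                     # real up to a = a_max
# beyond a_max the discriminant is negative: the expansion cannot proceed,
# and the evolution terminates in a Type II singularity there.
```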
#### vi.2.3 Thermal Radiation for Type II and Type IV Singularities

When we consider the Type II and Type IV singularities, \(H_{s}(t)\) in (100) is finite and positive, \(0<H=H_{s}(t_{s})<\infty\). When one considers general matter, the first FLRW equation, in which the usual matter and the thermal radiation as in (676) are included, is given by \[\frac{3}{\kappa^{2}}H^{2}=\rho+\alpha H^{4}\,. \tag{695}\] Here, \(\rho\) is the matter energy density. In the case of the Type II or Type IV singularity, if \(H_{s}(t_{s})\neq 0\), near the singularity the l.h.s. goes to a finite value, \(\frac{3}{\kappa^{2}}H^{2}\to\frac{3}{\kappa^{2}}H_{s}(t_{s})^{2}\), and the contribution from the thermal radiation in the r.h.s. also becomes finite, \(\alpha H^{4}\to\alpha H_{s}(t_{s})^{4}\). Therefore, the thermal radiation does not change the structure of the singularity. Even if \(H_{s}(t_{s})=0\), the l.h.s. behaves as \(\left(t_{s}-t\right)^{-2\beta}\) while the contribution from the thermal radiation behaves as \(\left(t_{s}-t\right)^{-4\beta}\). Because \(\beta<0\), the contribution from the thermal radiation is less dominant, and therefore the thermal radiation does not change the structure of the singularity.

### Combination of Quantum Effect and Thermal Effect

We now combine the quantum effect and the thermal effect. In Subsection IX.5 the trace part of the Einstein equation was used. As the radiation is usually conformal, the trace part of the energy-momentum tensor of the radiation should vanish, and the thermal radiation would not contribute to the trace equation. We should be, however, more careful in the present situation. The energy density of the thermal radiation is determined only by the temperature. Therefore, as the Universe expands and its volume filled with the thermal radiation increases, the total energy should also increase if the temperature is unchanged, or increases as in the case of the Type I (Big Rip) or Type III singularity. In other words, say, in the phantom Universe, there should effectively exist a negative pressure. The energy of the thermal radiation is not conserved, because the expansion produces new thermal radiation. We should note, however, that in order for the effective pressure, which includes the effect of the expansion, to be consistent with the FLRW equations, the energy density of the thermal radiation and the effective pressure must satisfy the conservation law \[0=\frac{d\rho_{\sf t\_rad}}{dt}+3H\left(\rho_{\sf t\_rad}+p_{\sf t\_rad} \right)\,. \tag{696}\] To show the conservation law, we may start from the first FLRW equation, in which the usual matter and the thermal radiation as in (676) are included, \[\frac{3}{\kappa^{2}}H^{2}=\rho+\rho_{\sf t\_rad}\,,\quad\rho_{\sf t\_rad}= \alpha H^{4}\,. \tag{697}\] Here \(\rho\) is the matter energy density. By considering the derivative of Eq. (697) with respect to the time \(t\), we obtain \[\frac{6}{\kappa^{2}}H\dot{H}=\dot{\rho}+4\alpha H^{3}\dot{H}\,. \tag{698}\] Then, by using the standard conservation law for matter, \[0=\dot{\rho}+3H\left(\rho+p\right)\,, \tag{699}\] with the matter pressure \(p\), and combining (697) and (698), we obtain \[-\frac{1}{\kappa^{2}}\left(2\dot{H}+3H^{2}\right)=p-\alpha\left(H^{4}+\frac{4 }{3}H^{2}\dot{H}\right)\,, \tag{700}\] which is nothing but the second FLRW equation, and we can identify the effective pressure of the thermal radiation as \[p_{\sf t\_rad}=-\alpha\left(H^{4}+\frac{4}{3}H^{2}\dot{H}\right)\,. \tag{701}\] Thus, effectively, the energy density and the effective pressure of the thermal radiation satisfy the conservation law (696); equivalently, one can find the exact and unique form of the effective pressure in (701) directly by using the conservation law (696) and assuming the form of the energy density of the radiation in (675). Then the trace part \(T_{\sf t\_rad}=-\rho_{\sf t\_rad}+3p_{\sf t\_rad}\) of the energy-momentum tensor of the radiation, including the effect of the expansion of the Universe, is given by \[T_{\sf t\_rad}=-4\alpha\left(H^{4}+H^{2}\dot{H}\right)\,. \tag{702}\]
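The pair (675), (701) can be checked against (696) symbolically:

```python
# A symbolic check (sympy) that the thermal-radiation density (675) and the
# effective pressure (701) satisfy the conservation law (696) for any H(t).
import sympy as sp

t, alpha = sp.symbols('t alpha')
H = sp.Function('H')(t)

rho_rad = alpha*H**4                                          # Eq. (675)
p_rad = -alpha*(H**4 + sp.Rational(4, 3)*H**2*sp.diff(H, t))  # Eq. (701)

residual = sp.diff(rho_rad, t) + 3*H*(rho_rad + p_rad)        # l.h.s. of (696)
print(sp.simplify(residual))                                  # -> 0
```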
Let us assume that the Hubble rate \(H\) behaves as in (99). Then, in the Type I (Big Rip) case (\(\beta\geq 1\)), near the singularity we find \(T_{\sf t\_rad}\sim\left(t_{s}-t\right)^{-4\beta}\), whose behavior is not so different from that of \(T_{A}\), although we need to compare \(b^{\prime}\) with \(\alpha\) to see which term is the dominant one. In the case of the Type II singularity (\(-1<\beta<0\)), we find \(T_{\sf t\_rad}\sim\left(t_{s}-t\right)^{-3\beta-1}\). As \(R\sim\left(t_{s}-t\right)^{-\beta-1}\), the contribution from \(T_{\sf t\_rad}\) is negligible. In the case \(b^{\prime\prime}\neq 0\) (\(b^{\prime\prime}\) is arbitrary and can be put to zero if we do not add the \(R^{2}\) term), the contribution from \(\Box R\) in \(T_{A}\) dominates and the Type II singularity does not occur. In the case of the Type III singularity (\(0<\beta<1\)), we find \(T_{\sf t\_rad}\sim\left(t_{s}-t\right)^{-3\beta-1}\), whose behavior is the same as that of the Gauss-Bonnet invariant \(G\) in \(T_{A}\) but weaker than that of \(\Box R\). Then, if \(b^{\prime\prime}\neq 0\), the contribution from the thermal radiation is less dominant than that of the conformal anomaly \(T_{A}\). If \(b^{\prime\prime}=0\), the contribution from \(T_{\sf t\_rad}\) is of the same order as that from \(T_{A}\), and we need to compare \(b^{\prime}\) with \(\alpha\) to see which could be dominant, again. Thus, we demonstrated that when the quantum effects dominate over the thermal effects, the future singularities are removed. However, in some cases, depending on the specific features of the theory under consideration, the dominant contribution is due to the thermal effects. In this case, the most probable future of the Universe is the Type II singularity.

### Quantum Effects May Change the Occurrence of a Finite-time Future Singularity

In this section, for the scalar-tensor theory, which often generates the Big Rip singularity, we consider the quantum correction found in Ref. [887]. The action of the scalar-tensor theory with a single scalar is given by \[L=\frac{1}{2\kappa^{2}}\left(R+\frac{\tilde{\gamma}}{2}g^{\mu\nu}\partial_{\mu} \phi\partial_{\nu}\phi-V(\phi)\right)\,, \tag{703}\] where \(\tilde{\gamma}=\pm 1\). When \(\tilde{\gamma}=1\), the scalar field is a phantom and therefore often generates the Big Rip singularity. It would be interesting to investigate the quantum properties of such scalar-tensor gravity. The calculation of the one-loop effective action in the model (703) has been performed as follows, \[W_{\rm 1-loop}= -\frac{1}{2}\ln\frac{L^{2}}{\mu^{2}}\int d^{4}x\sqrt{-g}\left\{ \frac{5}{2}V^{2}-\tilde{\gamma}\left(V^{\prime}\right)^{2}+\frac{1}{2}\left(V ^{\prime\prime}\right)^{2}+\left[\frac{\tilde{\gamma}}{2}V-2V^{\prime\prime} \right]\phi_{,\mu}\phi^{,\mu}-\left[\frac{13}{3}V+\frac{\tilde{\gamma}}{12}V^ {\prime\prime}\right]R\right. \tag{704}\] \[\left.+\frac{43}{60}R_{\alpha\beta}^{2}+\frac{1}{40}R^{2}-\frac{ \tilde{\gamma}}{6}R\phi_{,\mu}\phi^{,\mu}+\frac{5}{4}\left(\phi_{,\mu}\phi^{, \mu}\right)^{2}\right\}\,.\] The above one-loop action is found in Ref. [887]. We may regard this effective action (704) as a finite quantum correction to the classical one. In (704), the cut-off \(L\) should be identified with the corresponding physical quantity, like the GUT scale or the Planck scale. When the universe is approximated by the de Sitter spacetime, the natural choice is \(L^{2}=|R|\), because the curvature is large enough and constant [888, 889]. On the other hand, in the case that \(|V|\gg|R|\), \(L^{2}\) should be identified with \(|V|\).
The phantom terms may be induced even if \(V=0\) and \(\tilde{\gamma}=-1\), which corresponds to the canonical scalar. This happens if the universe goes through a region with negative curvature. By fine-tuning \(V\), there may appear the quantum gravity-induced phantom theory, which subsequently may change the universe's evolution. Here, as a simple example, we consider the action where \(L^{2}\) is replaced with \(|R|\), \[W_{\rm 1-loop}= -\frac{1}{2}\int d^{4}x\sqrt{-g}\ln\frac{|R|}{\mu^{2}}\left\{ \frac{5}{2}V^{2}-\tilde{\gamma}\left(V^{\prime}\right)^{2}+\frac{1}{2}\left(V ^{\prime\prime}\right)^{2}+\left[\frac{\tilde{\gamma}}{2}V-2V^{\prime\prime} \right]\phi_{,\mu}\phi^{,\mu}\right. \tag{705}\] \[\left.-\left[\frac{13}{3}V+\frac{\tilde{\gamma}}{12}V^{\prime \prime}\right]R+\frac{43}{60}R_{\alpha\beta}^{2}+\frac{1}{40}R^{2}-\frac{ \tilde{\gamma}}{6}R\phi_{,\mu}\phi^{,\mu}+\frac{5}{4}\left(\phi_{,\mu}\phi^{, \mu}\right)^{2}\right\}\,.\] The variations of this action with respect to the scalar field \(\phi\) and the metric \(g_{\mu\nu}\) are given by \[\frac{1}{\sqrt{-g}}\frac{\delta W_{\rm 1-loop}}{\delta\phi}= -\frac{1}{2}\ln\frac{|R|}{\mu^{2}}\left\{\left\{\frac{5}{2}V^{2}- \tilde{\gamma}\left(V^{\prime}\right)^{2}+\frac{1}{2}\left(V^{\prime\prime} \right)^{2}\right\}^{\prime}+\left[\frac{\tilde{\gamma}}{2}V-2V^{\prime\prime} \right]^{\prime}\phi_{,\mu}\phi^{,\mu}\right. \tag{706}\] \[\left.-2\nabla_{\mu}\left\{\left[\frac{\tilde{\gamma}}{2}V-2V^{ \prime\prime}\right]\phi^{,\mu}\right\}-\left[\frac{13}{3}V+\frac{\tilde{ \gamma}}{12}V^{\prime\prime}\right]^{\prime}R+\frac{\tilde{\gamma}}{3}\nabla _{\mu}\left\{R\phi^{,\mu}\right\}-5\nabla_{\mu}\left\{\left(\phi_{,\rho}\phi^ {,\rho}\right)\phi^{,\mu}\right\}\right\}\,,\] \[\frac{1}{\sqrt{-g}}\frac{\delta W_{\rm 1-loop}}{\delta g_{\mu\nu}}= -\frac{1}{2}\ln\frac{|R|}{\mu^{2}}\left[\frac{1}{2}g^{\mu\nu} \left\{\frac{5}{2}V^{2}-\tilde{\gamma}\left(V^{\prime}\right)^{2}+\frac{1}{2} \left(V^{\prime\prime}\right)^{2}+\left[\frac{\tilde{\gamma}}{2}V-2V^{\prime \prime}\right]\phi_{,\mu}\phi^{,\mu}\right.\] \[-\left[\frac{13}{3}V+\frac{\tilde{\gamma}}{12}V^{\prime\prime} \right]R+\frac{43}{60}R_{\alpha\beta}^{2}+\frac{1}{40}R^{2}-\frac{\tilde{ \gamma}}{6}R\phi_{,\mu}\phi^{,\mu}+\frac{5}{4}\left(\phi_{,\mu}\phi^{,\mu} \right)^{2}\right\}\] \[-\left[\frac{\tilde{\gamma}}{2}V-2V^{\prime\prime}\right]\phi^{, \mu}\phi^{,\nu}+\left[\frac{13}{3}V+\frac{\tilde{\gamma}}{12}V^{\prime\prime} \right]R^{\mu\nu}-\left(\nabla^{\mu}\nabla^{\nu}-g^{\mu\nu}\nabla^{2}\right) \left[\frac{13}{3}V+\frac{\tilde{\gamma}}{12}V^{\prime\prime}\right]\] \[-\frac{43}{30}R_{\rho}^{\mu}R^{\nu\rho}+\frac{43}{60}\left\{\left( \nabla_{\alpha}\nabla^{\nu}R^{\alpha\mu}+\nabla_{\alpha}\nabla^{\mu}R^{\alpha \nu}\right)-\nabla^{2}R^{\mu\nu}-g^{\mu\nu}\nabla_{\rho}\nabla_{\sigma}R^{\rho \sigma}\right\}+\frac{1}{20}RR^{\mu\nu}\] \[+\frac{1}{20}\left(\nabla^{\mu}\nabla^{\nu}-g^{\mu\nu}\nabla^{2} \right)R+\frac{\tilde{\gamma}}{6}R^{\mu\nu}\phi_{,\rho}\phi^{,\rho}-\frac{ \tilde{\gamma}}{6}\left(\nabla^{\mu}\nabla^{\nu}-g^{\mu\nu}\nabla^{2}\right) \left(\phi_{,\rho}\phi^{,\rho}\right)\] \[+\frac{\tilde{\gamma}}{6}R\partial^{\mu}\phi\partial^{\nu}\phi- \frac{5}{2}\phi_{,\rho}\phi^{,\rho}\phi^{,\mu}\phi^{,\nu}+\left(-R^{\mu\nu}+ \nabla^{\mu}\nabla^{\nu}-g^{\mu\nu}\nabla^{2}\right)\left[-\frac{1}{2R} \left\{\frac{5}{2}V^{2}-\tilde{\gamma}\left(V^{\prime}\right)^{2}+\frac{1}{2} \left(V^{\prime\prime}\right)^{2}\right.\] \[\left.+\left[\frac{\tilde{\gamma}}{2}V-2V^{\prime\prime}\right]
\phi_{,\rho}\phi^{,\rho}-\left[\frac{13}{3}V+\frac{\tilde{\gamma}}{12}V^{ \prime\prime}\right]R+\frac{43}{60}R_{\alpha\beta}^{2}+\frac{1}{40}R^{2}- \frac{\tilde{\gamma}}{6}R\phi_{,\rho}\phi^{,\rho}+\frac{5}{4}\left(\phi_{,\rho} \phi^{,\rho}\right)^{2}\right\}\right]\,.\] In the case when the Big Rip singularity occurs, the curvature quickly becomes very large. This means that quantum effects (e.g., quantum gravity effects) become important not only for the early universe but also for the future universe. These quantum effects may even become dominant when the universe approaches the Big Rip singularity. In fact, the quantum correction becomes dominant because \(W_{\rm 1-loop}\) contains higher-derivative terms, so we may neglect the classical terms. In order to simplify the situation further, we assume that the curvature and the scalar field \(\phi\) are constant, \(R_{\mu\nu}=\frac{3}{l^{2}}g_{\mu\nu}\), \(R=\frac{12}{l^{2}}\), and \(\phi=c\). We also choose the potential \(V(\phi)\) as an exponential function of \(\phi\), \(V(\phi)=V_{0}{\rm e}^{-2\frac{\phi}{\phi_{0}}}\). Then, by using (706) and (707), we obtain \[0 = \frac{1}{\sqrt{-g}}\frac{\delta W_{\rm 1-loop}}{\delta\phi} \tag{708}\] \[= -\frac{1}{2}\ln\frac{|R|}{\mu^{2}}\left[-\frac{4}{\phi_{0}}\left( \frac{5}{2}-\frac{4\tilde{\gamma}}{\phi_{0}^{2}}+\frac{8}{\phi_{0}^{4}}\right) V_{0}^{2}{\rm e}^{-\frac{4c}{\phi_{0}}}+\frac{2}{\phi_{0}}\left(\frac{13}{3}+ \frac{\tilde{\gamma}}{3\phi_{0}^{2}}\right)V_{0}{\rm e}^{-\frac{2c}{\phi_{0} }}\frac{12}{l^{2}}\right]\,,\] \[0 = \frac{1}{\sqrt{-g}}\frac{\delta W_{\rm 1-loop}}{\delta g_{\mu\nu}}\] (709) \[= g^{\mu\nu}\left[-\frac{1}{4}\ln\left(\frac{12}{l^{2}\mu^{2}} \right)\left\{\left(\frac{5}{2}-\frac{4\tilde{\gamma}}{\phi_{0}^{2}}+\frac{8} {\phi_{0}^{4}}\right)V_{0}^{2}{\rm e}^{-\frac{4c}{\phi_{0}}}-\left(\frac{13}{3 }+\frac{\tilde{\gamma}}{3\phi_{0}^{2}}\right)V_{0}{\rm e}^{-\frac{2c}{\phi_{0} }}\frac{12}{l^{2}}+\frac{147}{5l^{4}}\right\}\right.\] \[\left.+\frac{1}{8}\left(\frac{5}{2}-\frac{4\tilde{\gamma}}{\phi_ {0}^{2}}+\frac{8}{\phi_{0}^{4}}\right)V_{0}^{2}{\rm e}^{-\frac{4c}{\phi_{0}} }+\frac{3}{2l^{2}}\left(\frac{13}{3}+\frac{\tilde{\gamma}}{3\phi_{0}^{2}} \right)V_{0}{\rm e}^{-\frac{2c}{\phi_{0}}}-\frac{441}{40l^{4}}\right]\,.\] Eq. (708) can be solved with respect to \(l^{2}\) as follows, \[R=\frac{12}{l^{2}}=2\left(\frac{5}{2}-\frac{4\tilde{\gamma}}{\phi_{0}^{2}}+ \frac{8}{\phi_{0}^{4}}\right)\left(\frac{13}{3}+\frac{\tilde{\gamma}}{3\phi_{ 0}^{2}}\right)^{-1}V_{0}{\rm e}^{-\frac{2c}{\phi_{0}}}\,. \tag{710}\] We should note, however, that Eq. (709) is not consistent with the expression in Eq. (710) in general. Then, Eq. (709) might be regarded as an equation determining \(\mu\). We should also note that the r.h.s. in (710) is not always positive. In the case \(\tilde{\gamma}>0\), when \(\tilde{\gamma}^{2}<5\), the r.h.s. in Eq. (710) is positive, but when \(\tilde{\gamma}^{2}>5\), it is positive if \(\phi_{0}^{2}>\frac{4}{5}\left(\tilde{\gamma}+\sqrt{\tilde{\gamma}^{2}-5}\right)\) or \(\phi_{0}^{2}<\frac{4}{5}\left(\tilde{\gamma}-\sqrt{\tilde{\gamma}^{2}-5}\right)\). On the other hand, in the opposite case \(\tilde{\gamma}<0\), the r.h.s. in (710) is positive if \(\phi_{0}^{2}>-\frac{\tilde{\gamma}}{13}\). In any case, there may occur an (asymptotically) de Sitter solution. Thus, the universe becomes a quantum de Sitter space before entering the Big Rip singularity.
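The algebraic step from (708) to (710) can be verified symbolically:

```python
# A symbolic check (sympy) that solving Eq. (708) for X = R = 12/l**2
# indeed yields Eq. (710); gamma_t stands for tilde-gamma.
import sympy as sp

phi0, V0 = sp.symbols('phi0 V0', positive=True)
c, g, X = sp.symbols('c gamma_t X', real=True)

P = sp.Rational(5, 2) - 4*g/phi0**2 + 8/phi0**4
Q = sp.Rational(13, 3) + g/(3*phi0**2)

eq708 = -4/phi0*P*V0**2*sp.exp(-4*c/phi0) + 2/phi0*Q*V0*sp.exp(-2*c/phi0)*X
X_sol = sp.solve(sp.Eq(eq708, 0), X)[0]

print(sp.simplify(X_sol - 2*P/Q*V0*sp.exp(-2*c/phi0)))   # -> 0, i.e. Eq. (710)
```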
This qualitative discussion indicates that the finite-time future singularity may never occur (or, at least, may become milder) under the conjecture that quantum effects become dominant just before the Big Rip. Due to the sharp increase of the curvature invariants near the Big Rip, such a conjecture looks quite natural.

## X Summary and Conclusions

In this review we aimed to present an overview of how finite-time cosmological singularities may arise in various cosmological contexts, and to exhibit the nature of these singularities. There are various types of singularities that may develop in cosmological theories, and we focused on demonstrating how singularities of both soft and crushing type may develop. In all the cases, it is vital to understand that these cosmic singularities cannot develop in standard GR approaches unless a phantom ingredient is added to the theory. On the other hand, standard and non-standard modifications of GR naturally give rise to cosmic singularities without the need for a phantom ingredient. Thus, we presented all the distinct cases that may give rise to finite-time cosmological singularities. Our aim was to provide the theoretical frameworks that allow this singularity occurrence to happen, but our underlying motivation for this work is to further stimulate the study of finite-time cosmological singularities. The point is that cosmic singularities have their own physical significance, apart from the mathematical structures they imply. The fabric of spacetime that can accommodate such spacelike future singularities may have its own mysteries to reveal. A singularity, especially a crushing-type singularity, indicates our inability to describe physics adequately. The reason might be unknown for the moment, but uncovering it should be the aim of future scientists. The fact that GR cannot produce such singularities, while some modifications of it can, indicates that GR may be an effective theory, valid in weaker gravity regimes, while its modifications might be the more fundamental theories that, to some extent, correctly describe nature in strong-gravity limits. The fact that the modifications of GR predict cosmic singularities might, on the other hand, be an indication that these theories link classical physics to the unknown quantum nature of the Universe. Indeed, as gravitational effects become stronger, the modifications of GR come into play, so they might act as a direct link between classical GR physics and the unknown quantum nature of our Universe; the links are the singularities developed by the modifications of GR. Singularities are always mysterious, but our experience from electrodynamics indicates that they are the links between classical and quantum physics; their appearance indicates the need for a fundamental quantum theory. GR itself does not lead to finite-time singularities, unless a phantom ingredient is added to the description. On the other hand, the modifications of GR lead to singularities that, in some sense, touch the quantum theory: they lie in between the classical GR theory and the quantum theory of the Universe. The singularities themselves might be a direct indication that these theories are a correct description of nature, and that they are closer to physical reality. These theories are a step closer to the underlying fundamental quantum theory that governs our Universe at high energies and in strong-gravity regimes.
There are three remarkable things to point out. Firstly, spacelike singularities in GR occur only at the center of Schwarzschild black holes, the nature of whose interior is unknown; thus the singularity itself points to an underlying fundamental theory of quantum nature beyond GR. Secondly, string corrections to the GR Lagrangian always introduce higher curvature terms in the Lagrangian, and these corrections may lead to finite-time singularities. Thirdly, the Big Bang singularity, present in some theories, indeed points to a quantum era of our Universe. Thus singularities always point to the need for an underlying fundamental theory governing the Universe at strong gravity and high energy regimes, a theory that is yet to be found. There are many fundamental questions that future scientists should address at some point. Was spacetime itself created along with matter? What is the relation of spacetime itself with matter, from a fundamental point of view? Does spacetime evolve with matter, or does it evolve independently? What is spacetime in the end: is it another manifestation of matter, or are the two independent? Obviously, the Big Bang singularity relates to all these questions, and of course a future crushing singularity brings all these questions to the mainstream of theoretical physics. A singularity in spacetime means geodesic incompleteness, and the latter indicates our inability to describe physics there, to reach that era physically. This might indicate a change in the topology of the Universe occurring at the time instant of the finite-time future singularity. These questions are not easy to address. Thus, with this review we provided a comprehensive overview of all the distinct theories associated with finite-time singularities, which may potentially yield indications of an underlying fundamental quantum theory.

## XI Acknowledgments

The work of JdH has been supported by grant PID2021-123903NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe". This work was supported by MINECO (Spain), project PID2019-104397GB-I00 and also partially supported by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M, Spain (S.D.O.). SP acknowledges the financial support from the Department of Science and Technology (DST), Govt. of India under the Scheme "Fund for Improvement of S&T Infrastructure (FIST)" [File No. SR/FST/MS-I/2019/41].
2309.13410
Tropical neural networks and its applications to classifying phylogenetic trees
Deep neural networks show great success when input vectors are in a Euclidean space. However, those classical neural networks perform poorly when the inputs are phylogenetic trees, which can be written as vectors in the tropical projective torus. Here we propose a tropical embedding to transform a vector in the tropical projective torus to a vector in Euclidean space via the tropical metric. We introduce a tropical neural network where the first layer is a tropical embedding layer and the following layers are the same as the classical ones. We prove that this neural network with the tropical metric is a universal approximator and we derive a backpropagation rule for deep neural networks. Then we provide TensorFlow 2 code for implementing a tropical neural network in the same fashion as a classical one, where the weight initialization problem is treated according to extreme value statistics. We apply our method to empirical data, including sequences of hemagglutinin for the influenza virus from New York. Finally we show that a tropical neural network can be interpreted as a generalization of tropical logistic regression.
Ruriko Yoshida, Georgios Aliatimis, Keiji Miura
2023-09-23T15:47:35Z
http://arxiv.org/abs/2309.13410v1
# Tropical neural networks and its applications to classifying phylogenetic trees

###### Abstract

Deep neural networks show great success when input vectors are in a Euclidean space. However, those classical neural networks perform poorly when the inputs are phylogenetic trees, which can be written as vectors in the tropical projective torus. Here we propose a tropical embedding to transform a vector in the tropical projective torus to a vector in Euclidean space via the tropical metric. We introduce a tropical neural network where the first layer is a tropical embedding layer and the following layers are the same as the classical ones. We prove that this neural network with the tropical metric is a universal approximator and we derive a backpropagation rule for deep neural networks. Then we provide TensorFlow 2 code for implementing a tropical neural network in the same fashion as a classical one, where the weight initialization problem is treated according to extreme value statistics. We apply our method to empirical data, including sequences of hemagglutinin for the influenza virus from New York. Finally we show that a tropical neural network can be interpreted as a generalization of tropical logistic regression.

## 1 Introduction

A neural network is a learning method, central to deep learning, that learns from data by mimicking a brain system, i.e., by interconnecting nodes, called neurons, in a layered structure like a human brain [13, 8, 15]. In recent years, deep neural networks have shown great success in processing input data which lie in a Euclidean space [11]. However, when input data are phylogenetic trees or time series with trends, represented as vectors in the _tropical projective torus_ [26, 27, 23, 38, 39, 40, 29, 42, 35], classical neural networks perform poorly. Therefore, in this paper we propose neural networks which process input data as vectors over the tropical projective torus. The tropical projective torus, denoted by \(\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\), is the space of \(d\)-dimensional real vectors \(\mathbb{R}^{d}\) modulo the vector of all ones, i.e., \(\mathbf{1}:=(1,1,\ldots,1)\in\mathbb{R}^{d}\). Over the tropical projective torus \(\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\), we identify \(x:=(x_{1},x_{2},\ldots,x_{d})=(x_{1}+c,x_{2}+c,\ldots,x_{d}+c)\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\), where \(c\in\mathbb{R}\) [20]. Here we consider the _tropical metric_, also known as the _generalized Hilbert projective metric_, over the tropical projective torus as the activation function in a hidden layer of a neural network. It is important to preserve the invariance of the input vector under shifts along the one-vector, which is innate in the tropical projective torus [20, 17, 33, 18, 41, 30]. Our strategy is to embed an input vector in the tropical projective torus into a vector in the classical Euclidean space in the first layer. This is analogous to word embedding in the field of natural language processing [37, 28]. Then the following layers can be the same as the classical ones. Although some previous works analyzed ReLU neural networks using tropical geometry, the neural networks themselves were defined on a classical Euclidean space [43, 1, 24]. In this paper, on the other hand, we consider the tropical projective torus as the input space and keep the invariance under the one-vector. That is, our work is truly tropical. In this paper, we first introduce a tropical embedding layer; a small numerical sketch of the invariance this layer is designed to respect is given below.
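The following minimal NumPy sketch (ours, not from the paper's released code) illustrates this invariance: shifting the input along the one-vector leaves the "max - min" embedding of Eq. (1) below unchanged.

```python
import numpy as np

def tropical_embed(x, w):
    """z_j = max_i(x_i + w_ji) - min_i(x_i + w_ji); cf. Eq. (1) below."""
    s = x[None, :] + w                     # shape (units, d)
    return s.max(axis=1) - s.min(axis=1)

rng = np.random.default_rng(0)
d, units = 5, 3
x = rng.normal(size=d)
w = rng.normal(size=(units, d))
c = 10.0                                   # arbitrary shift along the one-vector
print(np.allclose(tropical_embed(x, w), tropical_embed(x + c * np.ones(d), w)))  # True
```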
We use the tropical embedding layer as the first layer of a classical neural network to preserve the invariance of the tropical projective torus. To check if this tropical neural network has enough flexibility, we next prove that it is a universal approximator. Then we derive a backpropagation rule for tropical neural networks. We provide TensorFlow 2 code for implementing a tropical neural network in the same fashion as a classical one, where the weight initialization problem is treated according to extreme value statistics. We show applications to phylogenomics, a new field in evolutionary biology which applies tools from phylogenetic trees to genome data. Applications include simulated data generated under the multi-species coalescent model, which is the most popular model for gene tree analysis on genome data [21], and an empirical influenza virus data set collected from the state of New York [27]. Finally we briefly show that a tropical neural network can be interpreted as a generalization of tropical logistic regression.

## 2 Tropical Embedding for Tropical Neural Networks

Classical neural networks only accept an input vector in a Euclidean space in its original form. Thus, for example, they cannot accept a phylogenetic tree as an input, since the space of phylogenetic trees is not Euclidean [33, 3, 6]. Therefore, we first consider a tropical embedding layer, which is analogous to word embedding in natural language processing [37, 28]. Once a phylogenetic tree is embedded in a Euclidean space, a classical neural network can be applied to analyze it.

**Definition 1** (tropical neural networks).: _A tropical neural network is a network where a tropical embedding layer as the first hidden layer is followed by a classical neural network (classical layers)._

**Definition 2** (tropical embedding layer).: _Let \(x\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\) be an input vector to the tropical embedding layer. The activity of the \(j\)-th neuron, as an output of the tropical embedding layer, is given by_

\[z_{j}=\max_{i}(x_{i}+w^{(1)}_{ji})-\min_{i}(x_{i}+w^{(1)}_{ji}). \tag{1}\]

**Remark 3**.: _Note that no separate activation function is applied to \(z\), as the "max - min" operation itself is regarded as the activation function of the neurons in the first hidden layer._

**Remark 4**.: _There is a geometric interpretation: the "max - min" operation measures the tropical distance between the points \(x\) and \(-w^{(1)}_{j}\). Therefore \(z(x)\) is invariant along the one-vector \(\mathbf{1}\)._

**Remark 5**.: _There are alternative ways to attain the invariance, such as_

\[z_{j}=\max_{i}(x_{i}+w^{(1)}_{ji})-2\text{nd}\max_{i}(x_{i}+w^{(1)}_{ji}). \tag{2}\]

_There is a geometric interpretation: the "max - 2nd max" operation measures the distance between a point \(x\) and the tropical hyperplane whose normal vector is \(w^{(1)}\) [17]. Therefore \(z(x)\) is invariant along the one-vector \(\mathbf{1}\). One could even use the \(j\)-th max in general. However, the repertoire of representable functions does not increase by using these alternatives. That is, from the viewpoint of the universal approximation theorem, using Eq. (1) suffices. In addition, Eq. (1) seems to perform better than the alternatives according to our numerical experiments (not shown). Therefore we solely use Eq. (1) as the tropical embedding layer in what follows._

**Remark 6**.: _Suppose \(A\in\mathbb{Z}_{+}^{N\times d}\). Then we consider the ReLU such that_

\[\max\{Ax+b,0\}.\]

_Assume that \(A\mathbf{1}\neq 0\)._
_Suppose \(x\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Then we have \(x:=x+c\cdot(1,\ldots,1)=x+c\cdot\mathbf{1}\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Then for \(c\ll 0\) and fixed \(x\), we have:_

\[\max\{Ax+cA\mathbf{1}+b,0\} = 0.\]

_As \(c\to-\infty\), we have_

\[\frac{1}{1+\exp(-\max\{Ax+cA\mathbf{1}+b,0\})}\to\frac{1}{1+1}=1/2\]

_for any \(x\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Also for \(c\gg 0\) and fixed \(x\), we have:_

\[\max\{Ax+cA\mathbf{1}+b,0\} = Ax+cA\mathbf{1}+b.\]

_As \(c\to\infty\), we have_

\[\frac{1}{1+\exp(-(Ax+cA\mathbf{1}+b))}\to 1\]

_for any \(x\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\). Therefore, neural networks with the ReLU cannot learn from observations in these cases. However, with the activation function defined in Eq. (1), we have_

\[\max_{i}(x_{i}+c+w^{(1)}_{ji})-\min_{i}(x_{i}+c+w^{(1)}_{ji})=\max_{i}(x_{i}+w^{(1)}_{ji})-\min_{i}(x_{i}+w^{(1)}_{ji}).\]

**Remark 7**.: _Classical neural networks are not well-defined on the tropical projective torus, since the neuron values are not invariant under transformations of the form \(x\to x+(c,\ldots,c)\). Meanwhile, the tropical embedding layer of Eq. (1) is invariant under such transformations._

## 3 Universal Approximation Theorems for Tropical Neural Networks

It is very important to check whether the tropical embedding layer of Eq. (1) followed by classical layers has enough flexibility to represent a sufficiently rich class of input-output relations [8]. In this section, we show that the tropical neural network can approximate a sufficiently wide variety of functions, so that we can safely use it.

**Definition 8**.: _The norm \(\|\cdot\|_{q}\) for \(q\geq 1\) is defined by_

\[\|f\|_{q}=\left(\int_{\mathbb{R}^{d}}|f(x)|^{q}\,dx\right)^{1/q}. \tag{3}\]

_The space \(L^{q}(\mathbb{R}^{d})\), \((1<q<\infty)\), is the set of Lebesgue measurable functions \(f\) from \(\mathbb{R}^{d}\) to \(\mathbb{R}\) for which \(\|f\|_{q}<\infty\)._

**Definition 9**.: _The space \(C^{0}(\mathbb{R}^{d})\) is the set of continuous, compactly supported functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\)._

**Remark 10**.: _Note that \(C^{0}(\mathbb{R}^{d})\subset L^{q}(\mathbb{R}^{d})\)._

For the classical case, a universal approximation theorem for ReLU feedforward neural networks has been proved in [4].

**Theorem 11** (classical universal approximation theorem [4]).: _Any function of \(x_{j}\) for \(j=1,\ldots,d\) in \(L^{q}(\mathbb{R}^{d})\), \((1<q<\infty)\), can be arbitrarily well approximated in the \(\|\cdot\|_{q}\) norm by a ReLU feedforward neural network with at most \(L=2(\lfloor\log_{2}d\rfloor+2)\) layers._

As \(d-1\) neurons in the tropical embedding layer can easily represent \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\), and Theorem 11 can be applied to the second and later layers of a tropical neural network (which are equivalent to a classical neural network), we can prove the following theorem.

**Theorem 12** (tropical universal approximation theorem).: _Any function of \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\) in \(L^{q}(\mathbb{R}^{d}/\mathbb{R}\mathbf{1})\simeq L^{q}(\mathbb{R}^{d-1})\), \((1<q<\infty)\), can be arbitrarily well approximated in the \(\|\cdot\|_{q}\) norm by a tropical neural network with at most \(L=2(\lfloor\log_{2}d\rfloor+2)+1\) layers (which include a tropical embedding layer as the first layer)._

Proof.: For any \(f\in L^{q}(\mathbb{R}^{d-1})\), there exists \(g\in C^{0}(\mathbb{R}^{d-1})\) such that \(\|f-g\|_{q}<\epsilon/2\) [8]. Let \(K\) be the support of \(g\) and let \(M\) be \(\max_{x\in K}\|x\|\).
For \(x\in K\), we can set \(w^{(1)}_{jj}=-w^{(1)}_{jd}=2M\) and \(w^{(1)}_{ji}=0\) for \(i\neq j,d\) to obtain \(z_{j}=x_{j}-x_{d}+4M\) for \(j=1,\ldots,d-1\). This means that a neuron in the first tropical embedding layer can represent \(x_{j}-x_{d}\). Then \(d-1\) neurons can represent \(z_{1}\), \(z_{2}\), ..., \(z_{d-1}\). Finally, simply apply Theorem 11 to the classical neural network \(F(z_{1},\ldots,z_{d-1})\) consisting of the second and later layers of a tropical neural network to obtain \(\|g-F\|_{q}<\epsilon/2\). Taken together, \(\|f-F\|_{q}\leq\|f-g\|_{q}+\|g-F\|_{q}<\epsilon\).

There is another type of classical universal approximation theorem.

**Definition 13**.: _The width \(d_{m}\) of a neural network is defined to be the maximal number of neurons in a layer._

**Theorem 14** (classical universal approximation theorem for width-bounded ReLU networks [19]).: _For any \(f\in L^{1}(\mathbb{R}^{d})\) and any \(\epsilon>0\), there exists a classical neural network \(F(x)\) with ReLU activation functions and width \(d_{m}\leq d+4\) that satisfies_

\[\int_{\mathbb{R}^{d}}|f(x)-F(x)|dx<\epsilon. \tag{4}\]

Again, as \(d-1\) neurons in the tropical embedding layer can easily represent \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\), and Theorem 14 can be applied to the second and later layers of a tropical neural network (which are equivalent to a classical neural network), we can prove the following theorem.

**Theorem 15** (tropical universal approximation theorem with bounded width).: _For any function \(f\) of \((x_{j}-x_{d})\) for \(j=1,\ldots,d-1\) in \(L^{1}(\mathbb{R}^{d}/\mathbb{R}\mathbf{1})\simeq L^{1}(\mathbb{R}^{d-1})\) and any \(\epsilon>0\), there exists a tropical neural network \(F(x)\) with width \(d_{m}\leq d+4\) that satisfies_

\[\int_{\mathbb{R}^{d-1}}|f(x)-F(x)|dx<\epsilon. \tag{5}\]

Proof.: For any \(f\in L^{1}(\mathbb{R}^{d-1})\), there exists \(g\in C^{0}(\mathbb{R}^{d-1})\) such that \(\|f-g\|_{1}<\epsilon/2\) [8]. Let \(K\) be the support of \(g\) and let \(M\) be \(\max_{x\in K}\|x\|\). For \(x\in K\), we can set \(w^{(1)}_{jj}=-w^{(1)}_{jd}=2M\) and \(w^{(1)}_{ji}=0\) for \(i\neq j,d\) to obtain \(z_{j}=x_{j}-x_{d}+4M\) for \(j=1,\ldots,d-1\). This means that a neuron in the first tropical embedding layer can represent \(x_{j}-x_{d}\). Then \(d-1\) neurons can represent \(z_{1}\), \(z_{2}\), ..., \(z_{d-1}\). Finally, simply apply Theorem 14 to the classical neural network \(F(z_{1},\ldots,z_{d-1})\) consisting of the second and later layers of a tropical neural network to obtain \(\|g-F\|_{1}<\epsilon/2\). Taken together, \(\|f-F\|_{1}\leq\|f-g\|_{1}+\|g-F\|_{1}<\epsilon\).

## 4 Backpropagation Rule for the Simplest Tropical Neural Networks

Here we demonstrate that the gradients of the loss function with respect to the weights exist for tropical neural networks. The gradient is computable through the chain rule for differentials, called the backpropagation rule, in a similar fashion to the classical case. The gradients obtained in this way guarantee a successful update of the weights at each iteration of learning. We consider the simplest three-layer network, whose weights in the first and the second layers are denoted by \(w^{(1)}\in\mathbb{R}^{N\times d}\) and \(w^{(2)}\in\mathbb{R}^{N\times 1}\). Suppose the activity in the first hidden layer is given by Eq. (1) and the output of the network is given as

\[y=\sum_{j=1}^{N}w_{j}^{(2)}z_{j}. \tag{6}\]
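As a quick numerical sanity check (our sketch, not from the authors' code), the following implements this forward pass and compares a finite-difference derivative of \(Q\) with the closed-form gradient derived in Theorem 16 below; all names and values are illustrative.

```python
import numpy as np

def forward(x, w1, w2):
    """Forward pass: z from Eq. (1), then y from Eq. (6)."""
    s = x[None, :] + w1                     # shape (N, d): x_i + w^(1)_ji
    z = s.max(axis=1) - s.min(axis=1)       # tropical embedding layer
    return w2 @ z                           # linear output neuron

rng = np.random.default_rng(0)
d, N = 3, 4
x, w1, w2 = rng.normal(size=d), rng.normal(size=(N, d)), rng.normal(size=N)
y_true, eps = 0.3, 1e-6

def Q(w1_):
    return 0.5 * (forward(x, w1_, w2) - y_true) ** 2

w1p = w1.copy()
w1p[0, 0] += eps
numeric = (Q(w1p) - Q(w1)) / eps            # finite-difference dQ/dw^(1)_{00}
s0 = x + w1[0]
analytic = (forward(x, w1, w2) - y_true) * w2[0] * (
    int(s0.argmax() == 0) - int(s0.argmin() == 0))   # cf. Eq. (7) below
print(numeric, analytic)                    # agree up to O(eps)
```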
Note that although here we derive a backpropagation rule for this regression setting just for simplicity, the backpropagation rule can be derived in a similar manner for the classification setting with a sigmoid function, too. Below is a summary of the parameters of the neural network.

* \(w^{(1)}\), \(w^{(2)}\): weights in the first and second layers;
* \(z_{j}\): activation of the \(j\)-th neuron in the hidden layer; and
* \(y\): activation of the output neuron.

Figure 1: Architecture of the simplest neural network that accepts a vector in \(\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\).

**Theorem 16**.: _The partial derivatives of the cost function \(Q:=\frac{1}{2}(y-y^{\text{true}})^{2}\) with respect to the weights for the above tropical neural network \(y=f(x)\) are given by_

\[\frac{\partial Q}{\partial w_{ji}^{(1)}}=(y-y^{\text{true}})w_{j}^{(2)}(\delta(i=i_{j}^{\text{max}})-\delta(i=i_{j}^{\text{min}})), \tag{7}\]

_where \(i_{j}^{\text{max}}\) (or \(i_{j}^{\text{min}}\)) is the index \(i\) for which \((x_{i}+w_{ji}^{(1)})\) takes its maximum (or minimum), and_

\[\frac{\partial Q}{\partial w_{j}^{(2)}}=(y-y^{\text{true}})z_{j}. \tag{8}\]

Proof.: Direct calculations.

**Example 17**.: _As a simplest example of Eq. (7), let us consider the three-dimensional input case \((d=3)\). Suppose the number of neurons in the middle layer is one and its activity is \(z\), for simplicity. Assume \(x_{1}=1\), \(x_{2}=2\), \(x_{3}=3\) and \(w_{1}^{(1)}=w_{2}^{(1)}=w_{3}^{(1)}=0\). Then \((x_{1}+w_{1}^{(1)})<(x_{2}+w_{2}^{(1)})<(x_{3}+w_{3}^{(1)})\), so \(i_{\text{max}}=3\) and \(i_{\text{min}}=1\). Therefore,_

\[\frac{\partial Q}{\partial w_{i}^{(1)}}=\left\{\begin{array}{ll}-(y-y^{\text{true}})w^{(2)}&(i=1)\\ 0&(i=2)\\ (y-y^{\text{true}})w^{(2)}&(i=3).\end{array}\right. \tag{9}\]

_In this case we have \(z=3-1=2\). If, furthermore, \(w^{(2)}=1\), then \(y=w^{(2)}z=2\) and_

\[\frac{\partial Q}{\partial w_{i}^{(1)}}=\left\{\begin{array}{ll}-(2-y^{\text{true}})&(i=1)\\ 0&(i=2)\\ 2-y^{\text{true}}&(i=3).\end{array}\right. \tag{10}\]

_\(w_{i}^{(1)}\) can be, for example, updated by the SGD rule \(\Delta w_{i}^{(1)}=-\eta\frac{\partial Q}{\partial w_{i}^{(1)}}\), where \(\eta>0\) is a learning rate. Then \(w_{3}^{(1)}\) decreases (and \(w_{1}^{(1)}\) increases) if \(2>y^{\text{true}}\), moving the output \(y\) toward \(y^{\text{true}}\)._

**Remark 18**.: _It is interesting that only two of the \(w_{i}^{(1)}\) are modified while the others remain unchanged. Note that \(\Delta w^{(1)}\) is orthogonal to the one-vector \(\mathbf{1}:=(1,1,\ldots,1)\in\mathbb{R}^{d}\). It would be interesting to elucidate how this learning rule works as a dynamical system._

## 5 TensorFlow 2 Code for Tropical Neural Networks

In order to boost computation with GPUs, we implement tropical neural networks in TensorFlow 2 [9]. As is the case for classical neural networks, automatic differentiation is the key to the GPU implementation of tropical neural networks. In order to guarantee fast automatic differentiation, all the calculations must be implemented only with the math functions in TensorFlow 2, such as top_k(\(v\), \(d\)), which returns all \(d\) entries of a vector \(v\) sorted in descending order, so that both the maximum and the minimum can be read off. In practice, it is essential to create a user-friendly class for the tropical embedding as the first layer of the tropical neural networks that is scalable to big data. The following code defines a hand-made class called TropEmbed(), which enables us to easily implement tropical neural networks in the Keras/TensorFlow style.
```python
import tensorflow as tf
from tensorflow.keras.backend import repeat_elements
from tensorflow.keras.layers import Layer, Dense
from tensorflow.keras.models import Sequential

class TropEmbed(Layer):
    def __init__(self, units=2, input_dim=3):
        super(TropEmbed, self).__init__()
        self.w = self.add_weight(shape=(units, input_dim),
                                 initializer="random_normal")
        self.units = units
        self.input_dim = input_dim

    def call(self, x):
        x_reshaped = tf.reshape(x, [-1, 1, self.input_dim])
        Bcast = repeat_elements(x_reshaped, self.units, 1)    # broadcast x to each unit
        val, i = tf.math.top_k(Bcast + self.w, self.input_dim)
        return val[:, :, 0] - val[:, :, -1]                   # max - min, Eq. (1)

# usage
model = Sequential([TropEmbed(10, d), Dense(1)])
```

The code for the TropEmbed() class and for reproducing all the figures in this paper is available at [https://github.com/keiji-miura/TropicalNN](https://github.com/keiji-miura/TropicalNN).

## 6 Weight Initialization Based on Extreme Value Statistics

Weight initialization is important for avoiding the divergence or vanishing of neural activities after propagation through many layers. For classical neural networks, Xavier's and He's initializations are famous [12, 16]. Here we consider a tropical analogue.

**Definition 19** (Generalized Hilbert Projective Metric).: _For any points \(v:=(v_{1},\ldots,v_{d})\), \(w:=(w_{1},\ldots,w_{d})\in\mathbb{R}^{d}/\mathbb{R}\mathbf{1}\), the tropical distance (also known as the tropical metric) \(d_{\mathrm{tr}}\) between \(v\) and \(w\) is defined as:_

\[d_{\mathrm{tr}}(v,w):=\max_{i\in\{1,\ldots,d\}}\bigl\{v_{i}-w_{i}\bigr\}-\min_{i\in\{1,\ldots,d\}}\bigl\{v_{i}-w_{i}\bigr\}.\]

**Lemma 20**.: _Suppose \(x_{i},w_{i}\sim N(0,1)\) for \(i=1,\ldots,d\). Then the expectation and variance of \(d_{\mathrm{tr}}(x,-w)\) can be approximated by \(2\sqrt{2}(a_{d}\gamma+b_{d})\) and \(\frac{\pi^{2}}{3\log d}\), respectively, where \(a_{d}=\frac{1}{\sqrt{2\log d}}\) and \(b_{d}=\sqrt{2\log d}-\frac{\log\log d+\log(4\pi)}{2\sqrt{2\log d}}\)._

Proof.: As \(x_{i}+w_{i}\sim N(0,2)\), \(Z:=\frac{\max\{x+w\}/\sqrt{2}-b_{d}}{a_{d}}\sim Gumbel(0,1)\) as \(d\rightarrow\infty\). Therefore, \(\mathrm{Ex}[d_{\mathrm{tr}}(x,-w)]=\mathrm{Ex}[2\max\{x+w\}]\xrightarrow[d\rightarrow\infty]{}2\sqrt{2}(a_{d}\mathrm{Ex}[Z]+b_{d})\). \(\mathrm{Var}[d_{\mathrm{tr}}(x,-w)]=\mathrm{Var}[\max\{x+w\}-\min\{x+w\}]=2\mathrm{Var}[\max\{x+w\}]+2\mathrm{Cov}[\max\{x+w\},-\min\{x+w\}]\xrightarrow[d\rightarrow\infty]{}2\times 2a_{d}^{2}\mathrm{Var}[Z]=2\times 2a_{d}^{2}\frac{\pi^{2}}{6}\), where \(\mathrm{Cov}[\max\{x+w\},-\min\{x+w\}]\xrightarrow[d\rightarrow\infty]{}0\) was assumed.

We confirm numerically that the above scaling actually holds. One way to obtain a better weight initialization is to choose the scale of \(w\) so that the variance of the neural activity in the embedding layer becomes \(1\).

**Lemma 21**.: _Suppose \(x_{i}\sim N(0,1)\) and \(w_{i}\sim N(0,\frac{6\log d}{\pi^{2}}-1)\) for \(i=1,\ldots,d\). Then the expectation and variance of \(d_{\mathrm{tr}}(x,-w)\) can be approximated by \(2\sqrt{\frac{6\log d}{\pi^{2}}}(a_{d}\gamma+b_{d})\) and \(1\), respectively._

Proof.: As \(x_{i}+w_{i}\sim N(0,\frac{6\log d}{\pi^{2}})\), \(Z:=\frac{\max\{x+w\}/\sqrt{\frac{6\log d}{\pi^{2}}}-b_{d}}{a_{d}}\sim Gumbel(0,1)\). Therefore, \(\mathrm{Ex}[d_{\mathrm{tr}}(x,-w)]=\mathrm{Ex}[2\max\{x+w\}]\xrightarrow[d\rightarrow\infty]{}2\sqrt{\frac{6\log d}{\pi^{2}}}(a_{d}\mathrm{Ex}[Z]+b_{d})\). \(\mathrm{Var}[d_{\mathrm{tr}}(x,-w)]\xrightarrow[d\rightarrow\infty]{}2\times\frac{6\log d}{\pi^{2}}a_{d}^{2}\mathrm{Var}[Z]=2\frac{6\log d}{\pi^{2}}a_{d}^{2}\frac{\pi^{2}}{6}=1\).
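The following Monte Carlo sketch (ours, separate from the repository referenced above) checks the approximations of Lemma 20 for \(d=10000\); the sample count matches the simulation described in Figure 2 below.

```python
import numpy as np

d, n_samples, batch = 10_000, 100_000, 1_000
rng = np.random.default_rng(1)

# x_i + w_i ~ N(0, 2), so sample the sum directly; d_tr(x, -w) = max - min
vals = []
for _ in range(n_samples // batch):
    s = rng.normal(scale=np.sqrt(2.0), size=(batch, d))
    vals.append(s.max(axis=1) - s.min(axis=1))
dtr = np.concatenate(vals)

log_d = np.log(d)
gamma = 0.5772156649015329                  # Euler-Mascheroni constant
a_d = 1.0 / np.sqrt(2.0 * log_d)
b_d = np.sqrt(2.0 * log_d) - (np.log(log_d) + np.log(4.0 * np.pi)) / (2.0 * np.sqrt(2.0 * log_d))

print(dtr.mean(), 2.0 * np.sqrt(2.0) * (a_d * gamma + b_d))   # ~10.89 vs ~10.95
print(dtr.std(), np.sqrt(np.pi**2 / (3.0 * log_d)))           # ~0.60 vs ~0.60
```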
To control the standard deviation of the weights, you can customize an initializer (instead of simply specifying initializer="random_normal") in the definition of the TropEmbed class.

```python
ini = tf.keras.initializers.RandomNormal(mean=0., stddev=1.)
self.w = self.add_weight(shape=(units, input_dim),
                         initializer=ini)
```

However, as the weight initialization should be done together with the data preprocessing, in this paper we entirely use the default value of stddev=0.05 for "random_normal" for simplicity.

```python
self.w = self.add_weight(shape=(units, input_dim),
                         initializer="random_normal")
```

Figure 2: Histogram of simulated \(d_{\mathrm{tr}}(x,-w)\) for the same situation as in Lemma 20 with \(d=10000\). For the histogram, \(100000\) samples of \(d_{\mathrm{tr}}(x,-w)\) are used. The simulated mean is \(10.893\) while the theoretical prediction is \(10.954\). The simulated std is \(0.604\) while the theoretical prediction is \(0.598\). The mean and std agree with the theoretical predictions.

## 7 Computational Experiments

In this section, we apply tropical neural networks with one hidden layer (the tropical embedding layer) to simulated data as well as empirical data. We then compare their performance against neural networks with one hidden layer with the ReLU activation.

### Small simulated data

First we illustrate our tropical neural network with one hidden layer of \(16\) neurons and one sigmoid output using a small example. We generate \(16+16\) two-dimensional random points from Gaussian distributions with means \((0.5,-0.5)\) and \((-0.5,0.5)\) and the unit covariance matrix. These points are then randomly translated by \((c,c)\), where \(c\) is a Gaussian random variable whose standard deviation is \(4\). The left and right figures in Figure 3 show the actual test labels and the predicted probabilities of the test data by the tropical neural network with one hidden layer of \(16\) neurons.

Figure 3: Predicted probabilities by the tropical neural networks on a small example.

### High-dimensional simulated data

Second we demonstrate that a tropical neural network with one hidden layer of \(8\) neurons and one sigmoid output works against the curse of dimensionality, where most of the variables in this high-dimensional data are just noise [42]. We generate \(16+16\) \(d\)-dimensional random points from Gaussian distributions with means \((0.5,-0.5,0,\ldots,0)\) and \((-0.5,0.5,0,\ldots,0)\) and the unit covariance matrix. These points are then randomly translated by \((c,c,c,\ldots,c)\), where \(c\) is a Gaussian random variable whose standard deviation is \(6\). The result demonstrates that the tropical neural networks work robustly against the curse of dimensionality.

### Simulated data generated from the multi-species coalescent model

In this subsection we apply the tropical neural networks to a sample of phylogenetic trees generated under the _multi-species coalescent model_. A phylogenetic tree is a weighted tree whose leaves are labeled with \([m]:=\{1,2,\ldots,m\}\), where \(m\) is the number of leaves, and whose internal nodes are unlabeled. A weight on each edge of a phylogenetic tree is considered as a distance from one node to another node on the tree; in evolutionary biology, a weight on an edge can be considered as a product of evolutionary time and mutation rate [32]. In this paper we consider rooted phylogenetic trees with a leaf label set \([m]\).
A phylogenetic tree with \(m\) leaves is called an _equidistant tree_ if the total weight on the unique path from its root to each leaf is the same for all leaves in \([m]\). Under the multi-species coalescent model, which is used to analyze gene trees (phylogenetic trees reconstructed from genes in a genome), all gene trees are assumed to be equidistant trees. Therefore in this paper we assume that all phylogenetic trees are equidistant trees. To conduct a statistical analysis on a set of phylogenetic trees, we consider a _space of phylogenetic trees_ with fixed \([m]\). The space of phylogenetic trees on \([m]\) is the set of all possible phylogenetic trees on \([m]\), and it is well known that it is not Euclidean [33]. It is also well known that the space of all possible equidistant trees on \([m]\) with the _tropical metric_ under the max-plus algebra is a subspace of the tropical projective space [3, 39]. In order to define the space of equidistant trees, we first define _ultrametrics_. Consider a map \(u:[m]\times[m]\rightarrow\mathbb{R}\) such that \(u(i,j)=u(j,i)\) and \(u(i,i)=0\). This map is called a _dissimilarity map_ on \([m]\). If a dissimilarity map \(u\) is such that the maximum

\[\max\{u(i,j),u(i,k),u(j,k)\}\]

is attained at least twice for all distinct \(i,j,k\in[m]\), then we call \(u\) an _ultrametric_.

Figure 4: Application of the tropical neural networks to a high-dimensional example. The test accuracy averaged over 100 trials is plotted. The tropical neural networks work robustly against the curse of dimensionality.

**Example 22**.: _Suppose \(m=3\), i.e., \([3]=\{1,2,3\}\), and suppose_

\[u(1,2)=u(2,1)=1,\quad u(1,3)=u(3,1)=1,\quad u(2,3)=u(3,2)=0.5,\]

_and \(u(i,i)=0\) for all \(i=1,2,3\). Since_

\[\max\{u(1,2),u(1,3),u(2,3)\}=1\]

_is attained twice, i.e., \(u(1,2)=u(1,3)=1\), \(u\) is an ultrametric._

Consider the dissimilarity map \(u_{T}\) of a phylogenetic tree \(T\) on \([m]\) such that \(u_{T}(i,j)\) is the total weight on the unique path from leaf \(i\) to leaf \(j\) for all \(i,j\in[m]\). Then we have the following theorem:

**Theorem 23** ([7]).: _A dissimilarity map \(u\) on \([m]\) is realized as \(u_{T}\) for an equidistant tree \(T\) on \([m]\) if and only if \(u\) is an ultrametric._

**Example 24**.: _Consider the ultrametric from Example 22. An equidistant tree whose dissimilarity map is the ultrametric in Example 22 is the rooted phylogenetic tree with leaves \([3]=\{1,2,3\}\) shown in Figure 5._

Figure 5: An equidistant tree whose dissimilarity map is the ultrametric of Example 22.

Therefore, we consider the space of all possible ultrametrics on \([m]\) as the space of equidistant trees on \([m]\). Then we have the following theorem:

**Theorem 25** ([3]).: _The space of ultrametrics on \([m]\) is the tropicalization of the linear subspace defined by the linear equations_

\[x_{ij}-x_{ik}+x_{jk}=0\]

_for \(1\leq i<j<k\leq m\), obtained by replacing sums with the max operation and multiplications with classical summation._

The space of ultrametrics on \([m]\) is a subspace of the tropical projective space \((\mathbb{R}\cup\{-\infty\})^{e}/\mathbb{R}\mathbf{1}\), where \(e=\binom{m}{2}\). Therefore, we apply our method, tropical neural networks, to simulated data generated from the multi-species coalescent model using the software Mesquite [21]. The multi-species coalescent model has two parameters: species depth and effective population size; a small computational check of the ultrametric condition from Example 22 is sketched below.
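A minimal sketch (ours) of the "maximum attained at least twice" test from the definition of an ultrametric above, applied to the dissimilarity map of Example 22; the helper name is our own.

```python
from itertools import combinations

def is_ultrametric(u, m):
    """u is a dict with u[(i, j)] = u[(j, i)]; check every triple i < j < k."""
    for i, j, k in combinations(range(1, m + 1), 3):
        vals = sorted([u[(i, j)], u[(i, k)], u[(j, k)]])
        if vals[1] != vals[2]:        # the two largest values must coincide
            return False
    return True

u = {(1, 2): 1.0, (1, 3): 1.0, (2, 3): 0.5}
u.update({(j, i): v for (i, j), v in u.items()})  # symmetrize
print(is_ultrametric(u, 3))  # True, as in Example 22
```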
Here we fix the effective population size \(N_{e}=100000\) and we vary

\[R=\frac{SD}{N_{e}},\]

where \(SD\) is the species depth. We generate species trees using the Yule process. Then we use the multi-species coalescent model to generate gene trees for a given species tree. In this experiment, for each \(R\), we generate two different sets of \(1000\) gene trees: each set is generated from a different species tree, so that each set of gene trees is distinct from the other. We conduct experiments with \(R=0.25,0.5,1,2,5,10\). Note that for a small ratio \(R\), gene trees become more like random trees, since the species tree constrains the tree topologies of the gene trees less. Thus classification is more difficult for small \(R\), while for large \(R\) it is easier, since the species tree constrains the tree topologies of the gene trees more. In this experiment, we use one hidden layer for each neural network: a neural network with ReLU activations and a neural network with the tropical embedding layer. We use the sigmoid function in the output node of both neural networks. In each neural network, we set \(1000\) neurons in the hidden layer. Figure 6 shows ROC curves for neural networks with ReLU and for tropical neural networks. In general, tropical neural networks perform better than neural networks with the ReLU activation function.

Figure 6: ROC curves for neural networks with ReLU and tropical neural networks with one hidden layer. We conduct experiments with \(R=0.25,0.5,1,2,5,10\).

### Influenza data

In this subsection we apply our method to genomic data for 1089 full-length sequences of hemagglutinin (HA) for influenza A H3N2 from 1993 to 2017, collected in the state of New York and obtained from the GISAID EpiFlu database (www.gisaid.org). These collected data were aligned using MUSCLE, developed by [10], with the default settings. Then we applied the neighbor-joining method with the p-distance [31] to reconstruct a tree from each set of sequence data. Each year corresponds to the first season. We also applied KDETrees [38] to remove outliers; the sample size for each year is about 20,000 trees. We apply tropical neural networks and neural networks with ReLU, each with one hidden layer of 10 neurons, to all pairs of different years to see if the years are significantly different from one another. Heatmaps of accuracy rates with the probability threshold 0.5 and of AUC values are shown in Figure 7. Again, tropical neural networks outperform classical neural networks.

Figure 7: Heat maps for (top) classification rates with threshold 0.5 and (bottom) AUC values for classical neural networks with ReLU (left) and tropical neural networks (right).

## 8 Tropical Neural Network as a Generalization of Tropical Logistic Regression for Classification of Gene Trees

A tropical neural network can be interpreted as a generalization of tropical logistic regression. The tropical logistic regression model [2] was developed for binary classification of gene trees, and it has been shown to have higher predictive power than classical logistic regression for classifying phylogenetic trees. It assumes that if \(X\) is the covariate (gene tree), the binary response random variable is \(Y\sim\text{Bernoulli}(p(X))\), with

\[p(x)=S\left(\lambda_{0}d_{\text{tr}}(x,\omega_{0})-\lambda_{1}d_{\text{tr}}(x,\omega_{1})+C\right),\]

where \(m\) is the number of leaves of each phylogenetic tree in the given sample, \(e:=\binom{m}{2}\), \(S\) is the sigmoid function, \(\omega_{0},\,\omega_{1}\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), and \(\lambda_{0},\lambda_{1},C\in\mathbb{R}\) with \(\lambda_{0}\lambda_{1}\geq 0\). Note that this model is a special case of Eq. (6),
with the sigmoid function as the link and two neurons in the hidden layer whose weights are \(w_{0}^{(2)}=\lambda_{0}\), \(w_{1}^{(2)}=-\lambda_{1}\) and \(w_{0}^{(1)}=-\omega_{0}\), \(w_{1}^{(1)}=-\omega_{1}\). Therefore, tropical logistic regression is almost identical to a tropical neural network consisting of one tropical embedding layer with two neurons and a classical layer, with the additional assumption that \(w_{0}^{(2)}w_{1}^{(2)}\leq 0\). The one-species model described in [2] can be considered to be a neural network with \(e\) neurons in the input layer, no hidden layers and a unique output neuron. The activation function is the logistic function, and the inner product used is tropical, defined as

\[\langle x,-\omega\rangle:=d_{\text{tr}}(x,\omega)-C, \tag{11}\]

where \(C\) can be considered to be a bias variable, similarly to the intercept variable in classical models. Tropical logistic regression returns the sigmoid of the tropical inner product. We define the tropical generalized linear model as an extension of tropical logistic regression, where instead of the sigmoid function we may use a different link/activation function. If there are multiple outputs (a multivariate generalized linear model (GLM)) and if we treat the output layer as the new input layer and iterate this \(L\) times, then we have an \(L\)-layer neural network. In the same way that classical neural networks are a stack/recursive application of classical multivariate GLMs, tropical neural networks can be a stack of tropical multivariate GLMs. Effectively, everything is identical to classical networks, but instead of applying classical inner products, we apply tropical inner products as defined in Eq. (11). The \(i\)-th neuron of the \(l\)-th layer is defined as \(x_{i}^{(l)}\) and computed through the recursive formula

\[x_{i}^{(l)}=d_{\mathrm{tr}}\left(x^{(l-1)},\omega_{i}^{(l)}\right)-C_{i}^{(l)}, \tag{12}\]

where \(\Omega^{(l)}=(\omega_{1}^{(l)},\omega_{2}^{(l)},\ldots,\omega_{N_{l}}^{(l)})\in\mathbb{R}^{N_{l-1}\times N_{l}}\) is the weight matrix between layers \((l-1)\) and \(l\), \(N_{s}\) denotes the number of neurons in layer \(s\), and \(C^{(l)}\in\mathbb{R}^{N_{l}}\). By assuming that all neurons share the same bias variable \(c=C_{i}^{(l)}\) for all \(i\in[N_{l}]\), Eq. (12) reduces to Eq. (1), since vectors are defined up to an additive constant vector \((c,\ldots,c)\) in the tropical projective torus. When the last tropical embedding layer connects to the first classical layer, the constant bias vector is incorporated into the bias term of the classical layer. Hence, tropical bias terms are redundant and are not considered in the development of tropical neural networks. Thus, the tropical neural network which we propose in this paper follows naturally as an extension of the tropical logistic regression model.

## 9 Summary and Discussion

In this paper, we first developed a tropical embedding layer. We used the tropical embedding layer as the first layer of a classical neural network to preserve the invariance of the tropical projective torus. To check if this tropical neural network has enough flexibility, we next proved that it is a universal approximator.
After deriving a backpropagation rule for tropical neural networks, we provided TensorFlow 2 code for implementing a tropical neural network in the same fashion as a classical one, where the weight initialization problem is treated according to extreme value statistics. Finally we showed some applications as examples. The tropical neural networks with the tropical metric worked better than the classical neural networks when the input data are phylogenetic trees, which live in the tropical projective torus. This is partly because only the tropical neural network preserves the invariance of the input vector under shifts along the one-vector, which is innate in the tropical projective torus. One of the nice properties of tropical neural networks is their tractability and interpretability in analysis. The tropical embedding can be interpreted as taking the tropical distance to a point in the tropical projective torus. The activities of neurons in tropical neural networks with randomized weights and inputs can be analyzed using extreme value statistics. The backpropagation rule of the tropical neural networks can be derived and interpreted rather easily. The TensorFlow 2 code for the Python tropical embedding class was provided in the paper. This makes it possible to implement a tropical neural network in the same familiar fashion as a classical one. This facilitates, for example, comparing tropical and classical neural networks on the same data using common code. Recent work shows that neural networks are vulnerable to adversarial attacks (for example, [5, 25, 34, 22]). However, our initial computational experiments on image data from computer vision show that tropical neural networks are robust against gradient-based methods, such as the Fast Gradient Sign Method [14] and Ensemble Adversarial Training [36]. It is interesting to investigate why tropical neural networks are robust against such attacks. In addition, it is interesting to develop adversarial attacks targeted at tropical neural networks.
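As a closing illustration (ours, not from the authors' repository), the tropical logistic regression model recalled in Section 8 (a two-neuron tropical embedding followed by a sigmoid) fits in a few lines of NumPy; all parameter values are illustrative.

```python
import numpy as np

def d_tr(v, w):
    """Tropical (generalized Hilbert projective) metric on R^e / R1."""
    diff = v - w
    return diff.max() - diff.min()

def tropical_logistic(x, omega0, omega1, lam0, lam1, C):
    """p(x) = S(lam0 * d_tr(x, omega0) - lam1 * d_tr(x, omega1) + C)."""
    t = lam0 * d_tr(x, omega0) - lam1 * d_tr(x, omega1) + C
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(2)
e = 6  # e = C(4, 2) coordinates for trees with m = 4 leaves
x, om0, om1 = (rng.normal(size=e) for _ in range(3))
print(tropical_logistic(x, om0, om1, lam0=1.0, lam1=1.0, C=0.0))
```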
2309.05272
Minuteman: Machine and Human Joining Forces in Meeting Summarization
Many meetings require creating a meeting summary to keep everyone up to date. Creating minutes of sufficient quality is however very cognitively demanding. Although we currently possess capable models for both automatic speech recognition (ASR) and summarization, their fully automatic use is still problematic. ASR models frequently commit errors when transcribing named entities, while summarization models tend to hallucinate and misinterpret the transcript. We propose a novel tool -- Minuteman -- to enable efficient semi-automatic meeting minuting. The tool provides a live transcript and a live meeting summary to the users, who can edit them in a collaborative manner, enabling correction of ASR errors and imperfect summary points in real time. The resulting application eases the cognitive load of the note-takers and allows them to easily catch up if they missed a part of the meeting due to absence or a lack of focus. We conduct several tests of the application in varied settings, exploring the worthiness of the concept and possible user strategies.
František Kmječ, Ondřej Bojar
2023-09-11T07:10:47Z
http://arxiv.org/abs/2309.05272v1
# Minuteman: Machine and Human Joining Forces in Meeting Summarization

###### Abstract

Many meetings require creating a meeting summary to keep everyone up to date. Creating minutes of sufficient quality is however very cognitively demanding. Although we currently possess capable models for both automatic speech recognition (ASR) and summarization, their fully automatic use is still problematic. ASR models frequently commit errors when transcribing named entities, while summarization models tend to hallucinate and misinterpret the transcript. We propose a novel tool - Minuteman - to enable efficient semi-automatic meeting minuting. The tool provides a live transcript and a live meeting summary to the users, who can edit them in a collaborative manner, enabling correction of ASR errors and imperfect summary points in real time. The resulting application eases the cognitive load of the note-takers and allows them to easily catch up if they missed a part of the meeting due to absence or a lack of focus. We conduct several tests of the application in varied settings, exploring the worthiness of the concept and possible user strategies.

## 1 Introduction

When holding meetings, in addition to communicating in real time, it is often necessary to produce an accurate summary of whatever was discussed, what the major points for and against were, and what was agreed on. We call the outcome of such a process _a meeting summary_ or _meeting minutes_. Such a summary can then go on to be used in subsequent meetings or be sent to the participants or to those who could not attend but still need to know what happened.

Meeting summarization is cognitively difficult. This is firstly due to the sheer amount of information the author has to process in real time to be able to write a result of sufficient quality. Secondly, many meetings in non-professional settings do not have a dedicated notetaker, and the author has to multitask, on one hand partaking actively in the meeting and on the other hand writing things down. Since the coronavirus pandemic began, many meetings have moved to online platforms like Google Meet, Jitsi or Zoom. With state-of-the-art technology for ASR and text summarization, it is becoming possible to automate the task. As with most language processing tasks, pre-trained transformer (Vaswani et al., 2017) language models show the most promise, as shown by Zhang et al. (2022) for summarization in general and Shinde et al. (2021) for meetings specifically. Transformer-based language models have, as of now, a few issues. First of all, they have a limited input size due to the quadratic complexity of the self-attention mechanism, thus reducing the available context.1 Meeting transcripts are often long and will not fit inside a single input window, requiring workarounds. Secondly, current language models are prone to hallucination and can be extremely inaccurate at times, as explored by Ji et al. (2023). But when summarizing meetings, relevance and factuality are key, as many people rely on the output for their work and coordination; thus, mistakes can be costly. Fully automatic solutions already exist;2 however, they do not offer interactivity for the users to control the generated transcript and summary while the meeting is running.

Footnote 1: Although there are experiments with modifying the attention mechanism to accommodate a longer input, see Beltagy et al. (2020).
Footnote 2: see for example sembly.ai or meetgeek.ai

To circumvent the challenges of insufficient summary factuality and coverage and of hallucination, we introduce a novel tool, _Minuteman_, to enable effective cooperation between the model and the participants of the meeting. The meeting is recorded and transcribed. The transcript is provided live to the users in an online editor and summarized in real time. The users can edit the transcript; these edits trigger a new automatic summarization of the respective section, updating the live summary. The users can also indicate that a particular segment in the transcript is important and trigger its additional automatic summarization. The live summary is also editable by the users, allowing them to correct or complement the output of the summarization model. The tool is designed in a modular manner, allowing for easy replacement of the summarization and transcription models.

## 2 Minuteman Tool

Minuteman is an online application that helps users with meeting minuting. The demo version for testing is available at minuteman.kmjec.cz. A screenshot of the application user interface is shown in Figure 1.

Figure 1: A screenshot of Minuteman during a meeting. Debug mode is enabled, so sequence numbers of transcriptions and summaries are shown. The left editor contains the transcript, the right editor contains the summary.

Upon entering a Jitsi3 room name, Minuteman connects to the meeting as an additional participant to record all participants' audio tracks. A live transcript is then generated from the conversation of the meeting participants, relying on speech recognition (Section 2.3 below) and summarization (Section 2.5).

Footnote 3: meet.jit.si

The summary creation works automatically in an iterative manner: when enough new utterances have been appended, a new summary point is created to represent them. The density of the summary can be controlled by selecting the chunk length in words in the top bar. This selection only affects newly generated summary points. It is also possible to select a portion of the transcript and trigger its summarization manually by pressing Ctrl + Alt + S. This additional summary point is appended to the end of the minutes at the moment it is created; see Figure 2 for an example.

Figure 2: Segment selection and on-demand summarization.

The transcription and summarization models can produce errors. Minuteman thus allows the meeting participants to correct them using two shared editors. Both the transcript and the generated summaries are editable by anyone, with changes in the transcript being reflected in the generated summaries. However, if a summary point is edited, it is then frozen and never updated by Minuteman again, to prevent the model from overwriting users' corrections.

### Implementation and Architecture

Minuteman consists of four main components: the frontend user interface and sound recording, the transcription module, the backend of the editor, and the summarization module. These components communicate together over a RabbitMQ message queue, allowing for easy interchangeability. The components are containerized using Docker and built and run using docker-compose.

### Frontend and Sound Processing

The frontend is responsible for providing the user with two Etherpad editors and the control bar, and for recording the audio. When Minuteman connects to the meeting, it sets up audio recording for each user track separately; therefore, it does not need diarization to distinguish different users.
The audio is converted to 16KHz and sent to the back-end in one-second long chunks for transcription. The audio recording itself happens in in a separate Javascript thread, meaning it does not block the UI processing code. The recorded chunks are bound to an identifier of the track and session and sent to a Python Flask5 API, which appends them to a processing queue provided by RabbitMQ, ensuring good ordering. The chunks are then picked up by the transcription module and processed. Footnote 5: flask.palletsprojects.com/en/2.3.x/ ### Transcription Module The transcription module collects recorded audio chunks from a queue and processes them, producing transcript utterances. It is a Python script connected to RabbitMQ via the pika6 library. It uses a Whisper ASR model by Radford et al. (2022) for transcription.7 As Whisper is not purposefully built for live transcription but for transcribing already recorded audio files, the live ASR requires a mechanism for splitting the audio tracks into utterances and transcribing those as a whole. Footnote 6: github.com/pika/pika Footnote 7: namely the faster-whisper implementation from github.com/guillaumekn/faster-whisper We keep a buffer of audio data for each track, and we ensure that the all chunks in the buffer always contain speech. When a new audio chunk comes, it is first checked for speech presence using Silero voice activity detector (VAD) by Silero Team (2021). If speech is detected, the chunk is appended to the buffer. If no speech is detected, we assume that the utterance in the buffer is finished and we send the buffer contents to Whisper for transcription. We then flush the buffer contents. The transcribed utterance is sent to the editor backend over RabbitMQ. This setup implicitly means that utterances are sorted by their end times. We chose this approach due to simplicity, however, there may be a need for more complex utterance ordering in the future. A known limitation is that we currently do not support more speakers in the same audio channel. ### Editor Backend To allow for collaboration of multiple users as well as interaction with the summarization model, we use the Etherpad editor. We implement the tool backend as an Etherpad plugin written in Javascript with dependencies managed through npm.8 Footnote 8: npmjs.com The first responsibility of the backend is handling the appending of utterances from the transcription module to the transcript editor pad. Each utterance is given a sequence number, which is bound to the utterance line contents using Etherpad attributes. This is done to be able to refer to transcript sections even when they are being edited by the users. Secondly, the backend handles the summarization of the transcript. As utterances are received from the transcription module, the plugin keeps track of how many unsummarized words are present. When the number of unsummarized words reaches a threshold, the section of transcript to be processed is extracted and sent to the summarization module. The starting and ending utterance sequence numbers are saved. An imprompt summarization message is placed in the summary pad and once a response comes back from the summary module, it is replaced by the generated summary. The same process is repeated when a user requests a summary of a certain selected segment. An important part of the editor backend code is the extraction of transcript segments. To ensure robustness with respect to user edits, we work at the level of single utterances (lines). 
The extraction mechanism is given the sequence numbers of the starting and ending utterances; it then iterates over the whole transcript. When it finds the starting utterance or an utterance with a higher sequence number, it starts recording the transcript segment. When it reaches the ending utterance or an utterance with a higher sequence number, it stops the recording and returns the recorded segment for summarization. That way, the process behaves robustly and predictably even in a collaborative environment. Lastly, the backend ensures that when the transcript changes (perhaps due to a user edit that corrects a badly transcribed utterance), the summaries generated from the affected segments are updated. On every edit, the already-summarized segments are extracted and compared to their past form; if we find a difference, the segment is summarized again and the corresponding summary is updated. However, if the summary point was already edited by a user, it is not overridden, to preserve the user's inputs.

Figure 3: Application architecture. The blue markers represent data being sent over RabbitMQ.

### Summarization Module

The summarization module is a Python program listening on RabbitMQ for incoming transcript chunks. It requests the summaries from a BART model by Lewis et al. (2020), finetuned on the XSum (Narayan et al., 2018) and SAMSum (Gliwa et al., 2019) datasets. The model is available from HuggingFace.9 We elected to use BART because it provided one of the best performances at the AutoMin 2021 competition (Ghosal et al., 2021), with a successful team (Shinde et al., 2021) using it together with several preprocessing steps. We adopt the same preprocessing, including the removal of stopwords and unnecessary filler words. To enable simple interchange of models for newer ones, we run the model in a TorchServe10 backend.

Footnote 9: huggingface.co/lidiya/bart-large-xsum-samsum

Footnote 10: pytorch.org/serve/

## 3 User Testing

We conducted several tests of the tool between the authors and together with a group of network administrators from a local high school, using their work meeting as a testing ground for meetings with multiple active participants. We exploited the fact that their meetings contain a lot of named entities and technical wording, allowing us to test the ASR model to its limits. Based on the results, we formed a qualitative assessment of the tool's usability and possible workflows. All the participants of our experiments were briefed and consented to their recordings being used in the evaluation.

### Suggested Workflow

An efficient workflow relies on having multiple participants available in the meeting to supervise the transcript and summary points; we found it difficult to keep track of what was happening in the transcript and in the summary with only two people, as constant activity is required of both participants. However, the summary was of high quality, capturing the contents of the meeting well, and if a group of two users needs to produce a summary anyway, the tool definitely helps. In a larger group of users, usually only several of them are vocally active. The rest can then contribute by correcting the transcript and the generated summary. Upon testing with the administrator group, we found that the workflow of transcript correction is effective, allowing everyone to correct named entities misidentified by the model. At the same time, we observed that the overall summary quality was perceptibly lower.
This could be due to a number of causes, including lower microphone quality, different non-native accents, a less well-arranged transcript due to more participants, or the fact that the summary model was finetuned on short non-technical conversations. We give a detailed overview of the errors encountered below, as well as suggestions for future work to counteract them.

### Error Analysis

We found that most errors were committed by the ASR model when transcribing named entities. This was expected; many of the topics discussed in the test meetings required sufficient domain knowledge or were in different languages. Examples of transcription errors are listed in Table 1. These could probably be largely counteracted by using a more powerful version of the Whisper model; while testing, we resorted to the small.en variant due to speed and hardware constraints. Also, many of the errors originate in the non-native English of the meeting participants, with imperfect pronunciation, and in the poor quality of the participants' microphones. Upon inspecting the transcript, we updated the model to medium.en, increasing the transcript quality without sacrificing much speed.

As for summarization model errors, from our experiments we conclude that the quality of the generated output is highly dependent on the quality and coherence of the provided transcript. We divide the committed errors into three main categories. Examples are provided in the list below:

* **Overgeneralization:** "PARTICIPANT1 and PARTICIPANT2 discuss the implementation of a text editor." is a true statement for our test meeting, but it does not convey any important information that would be worth writing down, since the entirety of the meeting was devoted to improving the editor.
* **Swapping or misinterpreting the actors of an action:** "PARTICIPANT1 wants PARTICIPANT2 to finish the machinery before the end of this month so that if she switches the cables, she can just note it down and some scripts will fix it for him." It is noted that PARTICIPANT1 wants PARTICIPANT2 to do something, but this is never mentioned in the transcript.
* **Errors due to lacking context:** "PARTICIPANT1 needs to refer to some parts of the transcript for the minutes to get summarized. PARTICIPANT2 will double check the deadline for the bachelor thesis." In the transcript, the checked date was supposed to be the deadline for paper submissions, not for the thesis, but it was discussed in the same context as the bachelor thesis. With a longer context window, the model could have deduced and avoided the error.

Overall, it can be stated that the generated summary is good at capturing the main point of contention in a summary segment, but it very often fails at determining who is the subject of an action and who is the object; much user cooperation is needed in that regard. The generated summary also does not necessarily correspond to a predetermined meeting agenda; it can therefore be difficult for the users to steer the model to focus on the content that is important to them. This is, however, natural, as the model cannot know the agenda in advance.

### Feedback From Users

The testers reported that they appreciated the possibility of catching up with the meeting even after taking a quick pause. They did not yet feel comfortable trusting the tool to summarize the whole meeting, noting the difference in style between a normal summary, which mostly focuses on agreed-upon conclusions stemming from a previous agenda, and the generated summary.
Overall, they found the tool helpful for catching up with the meeting when they were interested to know what each participant had to say about a given point.

| Example | Error explanation |
| --- | --- |
| Vojta: a different DHCP server named **care** so we can try it, I've never used it. | The discussed DHCP server is called Kea, not 'care'. |
| Fanda: like, adapt this towards **check** summarization, like, you just, like, one thing is swapping for **check** whisper, that's easy, and one thing is just, like, uploading a new model to... | The Whisper model misinterpreted bad pronunciation and did not recognize the word 'Czech' in context. |

Table 1: Examples of errors committed by the ASR model

## 4 Conclusion

We demonstrated a novel modular tool for interactive summarization, implementing a promising user interface concept. We conducted several qualitative evaluations of the tool outputs and collected feedback from the users. From the user feedback, we conclude the tool demonstrates a worthy concept, although a lot of improvement is needed for the summaries to be reliable and trusted by the users completely.

### Future work

We divide possible improvements into three categories:

**Summarization Models.** We presume using larger models like Llama, introduced by Touvron et al. (2023), would be helpful for getting more relevant summaries. A possible improvement would be finetuning these for summarization and replacing BART. Also, the summarization module could be modified to request summaries from the ChatGPT API,11 allowing efficient cooperation with a powerful model that is currently very popular with users. Prompt engineering would then be required to support summarization.

**User Interface.** Currently, it can be difficult for the user to find where the summary points refer to in the transcript. Adding color coding to the summaries and the transcript segments would greatly improve orientation in the transcript-summary correspondence. Inspiration could be drawn from the ALIGNMEET tool for meeting evaluation by Polak et al. (2022), which also uses color coding for this purpose.

**Underlying Meeting Platform.** Currently, a large limitation is the restriction to the Jitsi Meet platform. A possible improvement is to rewrite the interface to allow the user to connect to Google Meet, Zoom, etc. It would also be worthwhile to prepend the transcription pipeline with diarization and record in-person meetings, allowing the tool to be used in offline settings.

## 5 Limitations

We note that our assessment of the result quality is only qualitative and of a limited sample size, as we did not have the means to conduct a larger quantitative testing effort. Testing was carried out in English by non-native English speakers; therefore, the quality of the results can be influenced by non-natural word orders and phrases taken over from their mother tongue (Czech).
2309.12068
Superfluidity of Total Angular Momentum
Spontaneous symmetry breaking of a U(1) symmetry in interacting systems leads to superfluidity of a corresponding conserved charge. We generalize the superfluidity to systems with U(1) symmetries acting on both matter fields and 2D spatial coordinates. Such systems can be effectively realized in easy-plane ferromagnetic systems with spin-orbit coupling where the conserved charge is a total angular momentum. We clarify that under a steady injection of spin angular momentum, the superfluid of the total angular momentum shows spacetime oscillations of the spin density and geometry-dependent spin hydrodynamics. We also demonstrate that the steady spin injection destabilizes the superfluid of total angular momentum, causing a dissipation effect in its spin hydrodynamic properties. Although a stability analysis shows that the superfluid under the spin injection is nonideal, the unique spin-transport features persist with weak dissipation of the spin angular momentum. Our study broadens the comprehension of superfluidity and sheds new light on the interplay between symmetries and phases of matter.
Yeyang Zhang, Ryuichi Shindou
2023-09-21T13:37:45Z
http://arxiv.org/abs/2309.12068v3
# Superfluidity of Total Angular Momentum ###### Abstract Spontaneous symmetry breaking of a U(1) symmetry leads to superfluidity of a corresponding conserved charge. We generalize the superfluidity to systems with U(1) symmetries acting on both matter fields and two-dimensional spatial coordinates. Such systems can be effectively realized in easy-plane ferromagnetic systems with spin-orbit coupling where the conserved charge is a total angular momentum. We clarify that under a steady injection of spin angular momentum, the superfluid of the total angular momentum shows spacetime oscillations of the spin density and geometry-dependent spin hydrodynamics. Though a stability analysis shows that the superfluid under the spin injection is nonideal, the proposed spin transport persists with weak dissipation of the spin angular momentum. Our study broadens the comprehension of superfluidity and sheds new light on the interplay between symmetries and phases of matter. _Introduction._--The discovery of superfluidity [1; 2; 3] is a milestone in the history of physics. Exotic macroscopic quantum phenomena in superfluids are explained by the condensation of bosonic atoms [4; 5] or neutral Cooper pairs [6]. Spontaneous symmetry breaking (SSB) of a U(1) global gauge symmetry leads to Goldstone modes with gapless and linear dispersions [7; 8; 9], which enables dissipationless mass currents. By alternative U(1) symmetries, the superfluidity can be generalized to spin [10; 11; 12; 13; 14; 15; 16; 17] and excitonic [18; 19; 20; 21; 22; 23] currents. General relations between Goldstone modes and SSB of continuous symmetries are derived in the literature [24; 25; 26; 27], while they mostly considered continuous _internal symmetries_ that transform only field operators locally. _Spacetime symmetries_ act on both field operators and spacetime coordinates [28], and the symmetries bring about fundamental physical consequences such as the relativistic spin-orbit coupling (SOC). The continuous spacetime symmetries can be spontaneously broken in spinful superfluids in cold-atom systems [29; 30; 31; 32; 33; 34]. Nonetheless, it remains largely unexplored how the SSB of the continuous spacetime symmetries affects the hydrodynamic transport of "charges" associated with the broken spacetime symmetries. In this Letter, we generalize the concept of superfluidity to the SSB of continuous spacetime symmetries. As a physical example, we consider the superfluidity of total angular momentum, where a joint U(1) rotational symmetry of the in-plane spin vector and two-dimensional spatial coordinates is spontaneously broken. The superfluid of total angular momentum is nothing but a spin superfluid [10; 11; 12; 13; 14; 15; 16; 17] in the presence of the SOC. It can be realized in spin-triplet exciton condensation [35; 36; 37] and easy-plane ferromagnetic spin models when discreteness of lattice rotational symmetries within the two-dimensional plane becomes effectively negligible. We derive an effective field theory of a Goldstone mode in the total-angular-momentum superfluid and solve its classical equation of motion in the presence of a steady injection of spin. We find that the total-angular-momentum superfluid shows spacetime oscillations of spin density and current under the spin injection, which contrasts with conventional spin superfluid without SOC [10; 11; 12; 13; 14; 15; 16; 17]. 
We also uncover unique geometry dependence and non-reciprocity in its hydrodynamic spin transport, which are absent in systems only with discrete internal rotational symmetry [13; 19; 38]. Especially when the system is in a circular geometry with finite curvature, the spin hydrodynamics depends on the direction of the spin flow as well as the curvature of the system. The proposed spatial and temporal spin textures can be experimentally detected by magnetic force microscopy [39; 40] and X-ray pump-probe microscopy [41], respectively. Landau argued that uniform superfluids moving slower than a critical velocity realize states at local minima of energy, so the superfluidity is protected from any dissipative perturbation [42; 43; 44; 13]. Following the same argument as the Landau criterion, we demonstrate that the total-angular-momentum superfluid is _not_ an energy-local-minimum state in the presence of the spin injection, and dissipation by local variation is possible even at lower spin supercurrent. Nonetheless, we also show that the qualitative behavior of the hydrodynamic spin transport remains unchanged and distinct from a non-superfluid [13].

_Model._--Consider a two-dimensional complex field \(\phi\equiv\phi_{x}+i\phi_{y}\), where the two-dimensional real and time-reversally-odd vector field \((\phi_{x},\phi_{y})\) and two-dimensional spatial coordinate \((x,y)\) transform under a joint U(1) rotation around \(z\)-direction,

\[\phi\rightarrow\phi e^{i\epsilon},\quad\left(\begin{array}{c}x\\ y\end{array}\right)\rightarrow\left(\begin{array}{cc}\cos\epsilon&-\sin\epsilon\\ \sin\epsilon&\cos\epsilon\end{array}\right)\left(\begin{array}{c}x\\ y\end{array}\right). \tag{1}\]

SSB of the joint U(1) symmetry is characterized by a real-time field theory of \(\phi\),

\[\mathcal{L}_{\phi}=i\eta_{0}\phi^{\dagger}\partial_{t}\phi+\frac{\eta_{1}^{2}}{2}(\partial_{t}\phi^{\dagger})(\partial_{t}\phi)\\ -\frac{\eta_{1}^{2}c^{2}}{2}(\partial_{j}\phi^{\dagger})(\partial_{j}\phi)-\frac{\eta_{1}^{2}c^{2}}{4}[\alpha(\partial_{-}\phi)^{2}+\alpha^{*}(\partial_{+}\phi^{\dagger})^{2}]\\ -\frac{1}{2}[u(\partial_{+}\phi)(\phi^{\dagger})^{2}+u^{*}(\partial_{-}\phi^{\dagger})\phi^{2}]-\frac{U}{2}(\phi^{\dagger}\phi-\rho_{0})^{2}, \tag{2}\]

where \(\partial_{\pm}\equiv\partial_{x}\pm i\partial_{y}\), \(j=x,y\). In this Letter, we impose the time-reversal symmetry onto the Lagrangian \(\mathcal{L}_{\phi}\),

\[\phi\rightarrow-\phi^{\dagger},\quad t\rightarrow-t,\quad i\rightarrow-i. \tag{3}\]

This leads to \(\eta_{0}=u=0\), and the theory is simplified,

\[\mathcal{L}_{\phi}=\frac{\eta_{1}^{2}}{2}(\partial_{t}\phi^{\dagger})(\partial_{t}\phi)-\frac{\eta_{1}^{2}c^{2}}{2}(\partial_{j}\phi^{\dagger})(\partial_{j}\phi)-\frac{\alpha\eta_{1}^{2}c^{2}}{4}[(\partial_{-}\phi)^{2}+(\partial_{+}\phi^{\dagger})^{2}]-\frac{U}{2}(\phi^{\dagger}\phi-\rho_{0})^{2}. \tag{4}\]

A global phase of \(\phi\) is chosen so that \(\alpha\) is real and positive. We assume \(0<\alpha<1\) for the stability of the theory. Ground states for \(\rho_{0}>0\) break the U(1) symmetry by uniform field configurations \(\phi=\sqrt{\rho_{0}}e^{i\theta}\).
Taking \(\phi=\sqrt{\rho_{0}+\delta\rho}e^{i\theta}\) and integrating out a gapped amplitude mode \(\delta\rho\), we obtain an effective field theory for a Goldstone mode \(\theta\) in the SSB phase,

\[\mathcal{L}=\frac{1}{2}(\partial_{t}\theta)^{2}-\frac{1}{2}(\partial_{x}\theta)^{2}[1-\alpha\cos(2\theta)]\\ -\frac{1}{2}(\partial_{y}\theta)^{2}[1+\alpha\cos(2\theta)]+\alpha(\partial_{x}\theta)(\partial_{y}\theta)\sin(2\theta). \tag{5}\]

We set \(\eta_{1}=c=\rho_{0}=1\) without loss of generality. For a given ground state \(\phi=\sqrt{\rho_{0}}e^{i\theta_{0}}\), the dispersion of a phase fluctuation \(\delta\theta=\theta-\theta_{0}\) is gapless with a linear dispersion, where the velocities are anisotropic and depend on \(\theta_{0}\). Note that the joint U(1) symmetry generally allows higher-order terms in derivatives or fields in the effective theory, while they do not affect the hydrodynamic transport of low-energy excitations near the ground states.

Easy-plane ferromagnetic systems with SOC may effectively have the continuous spacetime U(1) symmetry, where an in-plane spin vector \((\phi_{x},\phi_{y})\) and two-dimensional spatial coordinate \((x,y)\) transform under the joint rotational symmetry due to the SOC. Such systems are realized in semiconductors with spin-\(\frac{1}{2}\) conduction-band electrons (\(\mathbf{a}\)) and valence-band holes (\(\mathbf{b}^{\dagger}\)), which are described by the following continuum model (\(\hbar=1\)),

\[H_{\mathrm{ex}}=\int d^{2}\mathbf{r}\{\mathbf{a}^{\dagger}[(-\frac{\partial_{i}^{2}}{2m_{0}}+\epsilon_{g0})\mathbf{\sigma}_{0}+\xi_{R0}(-i\partial_{y}\mathbf{\sigma}_{x}+i\partial_{x}\mathbf{\sigma}_{y})]\mathbf{a}+\mathbf{b}^{\dagger}[(\frac{\partial_{i}^{2}}{2m_{0}^{\prime}}-\epsilon_{g0})\mathbf{\sigma}_{0}+\xi_{R0}^{\prime}(i\partial_{y}\mathbf{\sigma}_{x}-i\partial_{x}\mathbf{\sigma}_{y})]\mathbf{b}+(\Delta_{t}\mathbf{a}^{\dagger}\mathbf{\sigma}_{0}\mathbf{b}+\Delta_{t}^{*}\mathbf{b}^{\dagger}\mathbf{\sigma}_{0}\mathbf{a})+\frac{g_{s0}}{2}\sum_{\sigma,\sigma^{\prime}=\uparrow,\downarrow}(a_{\sigma}^{\dagger}a_{\sigma^{\prime}}^{\dagger}a_{\sigma^{\prime}}a_{\sigma}+b_{\sigma}^{\dagger}b_{\sigma^{\prime}}^{\dagger}b_{\sigma^{\prime}}b_{\sigma}+2\xi_{1}a_{\sigma}^{\dagger}b_{\sigma^{\prime}}^{\dagger}b_{\sigma^{\prime}}a_{\sigma})\}. \tag{6}\]

Attraction between the electrons and holes (\(g_{s0}\)) results in excitonic collective modes inside a band gap (\(\epsilon_{g0}\)), which have singlet (\(\mu=0\)) and triplet (\(\mu=x,y,z\)) components, \(O_{\mu}=\mathbf{a}^{\dagger}\mathbf{\sigma}_{\mu}\mathbf{b}\). In the case of \(m_{0}\xi_{R0}\simeq m_{0}^{\prime}\xi_{R0}^{\prime}\) and \(0<\xi_{1}<1\), interband hopping (\(\Delta_{t}\)) and Rashba SOC (\(\xi_{R0}\), \(\xi_{R0}^{\prime}\)) [45; 46] lead to a uniform condensate of in-plane triplet components of the exciton modes \(O_{\mu}\) (\(\mu=x,y\)), where the condensation is well described by Eq. (4) [47]. Another relevant physical system is an easy-plane (\(xy\) plane) ferromagnetic spin model with bond-dependent exchange interaction. Due to the bond-dependent interaction, the spin model has a discrete joint rotational symmetry around \(z\) [47]. The joint rotational symmetry can be spontaneously broken by the ferromagnetic order of the \(xy\) moment.
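Before turning to the effects of lattice discreteness, a quick numerical consistency check of the anisotropic Goldstone dispersion implied by Eq. (5) can be instructive. Expanding around a uniform ground state \(\theta_{0}\) gives \(\omega^{2}=\mathbf{k}^{\top}M(\theta_{0})\mathbf{k}\) for a plane-wave fluctuation; the short sketch below is our own illustration (not code from the Letter) and confirms that the two sound velocities are \(\sqrt{1\pm\alpha}\) for every \(\theta_{0}\), with only the principal axes rotating with \(\theta_{0}\).

```python
import numpy as np

def dispersion_matrix(alpha, theta0):
    """Quadratic form M(theta0) in omega^2 = k^T M k, obtained by
    linearizing Eq. (5) around a uniform ground state theta0."""
    c2, s2 = np.cos(2 * theta0), np.sin(2 * theta0)
    return np.array([[1 - alpha * c2, -alpha * s2],
                     [-alpha * s2,    1 + alpha * c2]])

alpha = 0.5
for theta0 in (0.0, np.pi / 6, np.pi / 3):
    evals = np.linalg.eigvalsh(dispersion_matrix(alpha, theta0))
    print(theta0, np.sqrt(evals))  # always [sqrt(1-alpha), sqrt(1+alpha)]
```

The trace and determinant of \(M\) are \(2\) and \(1-\alpha^{2}\) independently of \(\theta_{0}\), which is why the eigenvalues are always \(1\pm\alpha\); the \(\theta_{0}\)-dependence of the dispersion resides entirely in the orientation of the fast and slow axes.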
When the discreteness of the lattice rotational symmetry is characterized only by higher powers of the field operator such as \(c_{6}(\phi^{6}+\mathrm{c.c.})\), the ordered phase near the critical point has an intermediate length scale whose spin hydrodynamics are effectively well described by Eq. (4) [47; 48; 49]. In this Letter, we focus on classical motion of Eq. (5). According to Noether's theorem [28; 50], the U(1) continuous spacetime symmetry endows the classical motion with a conserved current of total angular momentum, which can be divided into a spin part (\(j_{\mu}^{s}\)) and an orbital part (\(j_{\mu}^{l}\)),

\[j_{\mu}^{s}=\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\theta)}\Delta\theta,\qquad j_{\mu}^{l}=[\delta_{\mu\nu}\mathcal{L}-\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\theta)}(\partial_{\nu}\theta)]\Delta x_{\nu}, \tag{7}\]

with \(\mu,\nu\in\{t,x,y\}\), \(\Delta x_{\nu}\in\{\Delta t,\Delta x,\Delta y\}\), \(\Delta\theta=1\), and \((\Delta t,\Delta x,\Delta y)=(0,-y,x)\). The two parts are not conserved by themselves, \(\partial_{\mu}j_{\mu}^{s}=-\partial_{\mu}j_{\mu}^{l}=G\), where a spin torque \(G\) can be defined by the divergence of the spin current. The spin torque (\(G\)), spin currents (\(j_{x}^{s}\), \(j_{y}^{s}\)), and a spin angular momentum along \(z\)-direction (\(j_{t}^{s}\)) are given by the following equations [47],

\[G=-\alpha[(\partial_{x}\theta)^{2}-(\partial_{y}\theta)^{2}]\sin(2\theta)+2\alpha(\partial_{x}\theta)(\partial_{y}\theta)\cos(2\theta),\]
\[j_{x}^{s}=-(\partial_{x}\theta)[1-\alpha\cos(2\theta)]+\alpha(\partial_{y}\theta)\sin(2\theta),\]
\[j_{y}^{s}=-(\partial_{y}\theta)[1+\alpha\cos(2\theta)]+\alpha(\partial_{x}\theta)\sin(2\theta),\]
\[s\equiv j_{t}^{s}=\partial_{t}\theta. \tag{8}\]

The conversion between the spin and orbital angular momentum through the torque results in the magneto-mechanical effect [13; 51; 52].

_Spin injection and transport._--To illustrate observables of a total-angular-momentum superfluid, consider a uniform spin current \(j_{0}\) (\(j_{0}>0\)) injected into one end (\(x=0\)) of the superfluid (\(0<x<L\)). The spin current passes through the superfluid and flows into a spin non-superfluid at the other end \(x=L\) (see Fig. 1(a)) [11; 13]. The non-superfluid "lead" has diffusive spin transport. Hydrodynamic spin transport in the superfluid is determined by a one-dimensional (1D) equation of motion (EOM) of the Goldstone mode \(\theta(x,t)\) in Eq. (5) with \(\partial_{y}\theta=0\),

\[\partial_{t}^{2}\theta-(\partial_{x}^{2}\theta)[1-\alpha\cos(2\theta)]-\alpha(\partial_{x}\theta)^{2}\sin(2\theta)=0. \tag{9}\]

The EOM Eq. (9) will be solved together with proper boundary conditions. To determine the boundary conditions, note that spin transport in the non-superfluid (\(x>L\)) is described by diffusion equations [11; 13],

\[\frac{\partial s}{\partial t}+\frac{\partial j_{x}^{s}}{\partial x}=-\frac{s}{T_{1}^{\prime}},\;j_{x}^{s}=-D_{s}\frac{\partial s}{\partial x}, \tag{10}\]

with relaxation time \(T_{1}^{\prime}\) and a diffusion coefficient \(D_{s}\). The diffusive spin current is caused by the gradient of the spin density. Due to the relaxation time, the density and current decay exponentially in space,

\[s(x,t)=\sum_{c\in\mathbb{R}}a_{c}e^{ict}e^{-\sqrt{D_{s}^{-1}\omega_{c}}\,x},\qquad j_{x}^{s}(x,t)=\sum_{c\in\mathbb{R}}\sqrt{D_{s}\omega_{c}}\,a_{c}e^{ict}e^{-\sqrt{D_{s}^{-1}\omega_{c}}\,x}. \tag{11}\]
Here \(\omega_{c}=ic+\frac{1}{T_{1}^{\prime}}\), the \(a_{c}\) are complex coefficients, and the square roots of \(D_{s}^{-1}\omega_{c}\) take positive real parts. The spin current is assumed to be continuous at the junction between the superfluid and non-superfluid, and it is proportional to the gradient of an effective local magnetic field felt by the spin density [11],

\[j_{x}^{s}(x=L-,t)=j_{x}^{s}(x=L+,t)=-\beta_{t}[\frac{1}{\chi^{\prime}}s(x=L+,t)-\frac{1}{\chi}s(x=L-,t)]. \tag{12}\]

Here \(\chi\), \(\chi^{\prime}\) are magnetic susceptibilities at \(x=L-\) and \(x=L+\) respectively, \(\beta_{t}\) is a response coefficient of the junction, and they are all positive. Eq. (12) imposes a boundary condition (BC) on the spin density and current at \(x=L-\),

\[s_{c}(x=L-,t)=k_{c}j_{x,c}^{s}(x=L-,t), \tag{13}\]

with \(k_{c}\equiv\frac{\chi}{\chi^{\prime}}\big{[}D_{s}(\frac{1}{T_{1}^{\prime}}+ic)\big{]}^{-\frac{1}{2}}+\frac{\chi}{\beta_{t}}\), \(k_{-c}=k_{c}^{*}\), and \({\rm Re}(k_{c})>0\). The steady injection of spin imposes another boundary condition at \(x=0+\), \(j_{x}^{s}(x=0+,t)=j_{0}\) [11]. In the following, the EOM Eq. (9) is solved for \(\theta(x,t)\) such that \(s(x,t)\) and \(j_{x}^{s}(x,t)\) satisfy the BCs. An analytical solution of \(\theta(x,t)\) can be obtained perturbatively in the SOC. The solution at the first order consists of three parts,

\[\theta(x,t)=\theta_{0}(x,t)+\theta_{1}(x,t)+\theta_{2}(x,t)+\mathcal{O}(\alpha^{2}). \tag{14}\]

\(\theta_{0}\) is the zeroth order solution satisfying the EOM and BCs [11; 13],

\[\theta_{0}(x,t)=k_{0}j_{0}t-j_{0}x, \tag{15}\]

with \(k_{0}=\frac{\chi}{\chi^{\prime}}\sqrt{\frac{T_{1}^{\prime}}{D_{s}}}+\frac{\chi}{\beta_{t}}\). An oscillation is absent at the zeroth order due to the BCs with \({\rm Re}(k_{c})>0\). \(\theta_{1}\) and \(\theta_{2}\) are at the first order in \(\alpha\). \(\theta_{1}\) is a special solution of an inhomogeneous linear differential equation,

\[\partial_{t}^{2}\theta_{1}-\partial_{x}^{2}\theta_{1}=-\alpha(\partial_{x}^{2}\theta_{0})\cos(2\theta_{0})+\alpha(\partial_{x}\theta_{0})^{2}\sin(2\theta_{0}). \tag{16}\]

\(\theta_{2}\) is a solution of a homogeneous linear differential equation such that \(\theta\) satisfies the BCs at the first order in \(\alpha\),

\[\partial_{t}^{2}\theta_{2}-\partial_{x}^{2}\theta_{2}=0. \tag{17}\]

The solution at the first order oscillates with two spatial wavenumbers, \(2j_{0}\) and \(2k_{0}j_{0}\), and one temporal frequency \(2k_{0}j_{0}\) [47],

\[\theta(x,t)=j_{0}(k_{0}t-x)-\frac{\alpha}{4(k_{0}^{2}-1)}\sin[2j_{0}(k_{0}t-x)]\\ -\frac{\alpha(2k_{0}^{2}-1)}{4(k_{0}^{2}-1)}\cos(2k_{0}j_{0}t)\sin(2k_{0}j_{0}x)\\ +\alpha{\rm Im}(\eta)\cos(2k_{0}j_{0}t)\cos(2k_{0}j_{0}x)\\ +\alpha{\rm Re}(\eta)\sin(2k_{0}j_{0}t)\cos(2k_{0}j_{0}x)+\mathcal{O}(\alpha^{2}). \tag{18}\]

\(\eta\) is a constant depending on \(k_{0}\), \(k_{c=2k_{0}j_{0}}\), and \(2j_{0}L\). Note that the perturbative solution is divergent and fails near a "resonant" point \(k_{0}=1\) [47; 53]. Higher-order solutions can be systematically obtained by the perturbative iteration, where the spin density and current have the same periodicity in time as the first-order solution, \(\pi(k_{0}j_{0})^{-1}\). The time periodicity can be detected by a time-resolved measurement of the spin density in the non-superfluid "lead", which depends on the injected spin current (\(j_{0}\)) and properties of the junction (\(k_{0}\)).

Figure 1: The spin-injection model. A steady spin current \(j_{0}\) is injected from a spin injector (red) to the total-angular-momentum superfluid (blue). The spin current passes through the superfluid (blue) and flows into a spin non-superfluid (yellow). The direction of the dc component of the current is indicated by black arrows. (a) A straight geometry. (b) A circular geometry with positive current. (c) A circular geometry with negative current.
The higher-order solution has no spatial periodicity in general, while its Fourier transform in space has two major peaks at \(2j_{0}\) and \(2k_{0}j_{0}\) as in the first-order solution. The two major wavenumbers can be observed by a local measurement of the spin density in the superfluid.

The spin hydrodynamics under the spin current has a unique geometric effect in a geometry with a finite curvature (Figs. 1(b), 1(c)). To see this, suppose that the width of the spin-injection model in the circular geometry is small enough that the radius of the curvature is taken as a constant \(r\) and the field depends only on time and a 1D angular coordinate \(\vartheta\). With \((x,y)=r(\cos\vartheta,\sin\vartheta)\), Eq. (5) leads to a 1D Lagrangian [47],

\[\mathcal{L}=\frac{1}{2}(\partial_{t}\theta)^{2}-\frac{1}{2}(\partial_{\ell}\theta)^{2}[1+\alpha\cos(2\theta-\frac{2}{r}\ell)], \tag{19}\]

where \(\ell\equiv r\vartheta\). The corresponding EOM under the injected spin current \(j_{0}\) together with the junction parameter \(k_{0}\) has a zeroth-order solution, \(\theta_{0}(\ell,t)=k_{0}j_{0}t-j_{0}\ell\), and a first-order solution, \(\theta_{0}(\ell,t)+\theta_{1}(\ell,t)+\theta_{2}(\ell,t)\). Here \(\theta_{1}\) is a special solution of an inhomogeneous differential equation,

\[\partial_{t}^{2}\theta_{1}-\partial_{\ell}^{2}\theta_{1}=-\alpha j_{0}(j_{0}+\frac{2}{r})\sin[2k_{0}j_{0}t-2(j_{0}+\frac{1}{r})\ell]. \tag{20}\]

\(\theta_{1}\) and \(\theta_{2}\) introduce two wavenumbers, \(2j_{0}+\frac{2}{r}\) and \(2k_{0}j_{0}\), in the observables respectively, where the wavenumber of \(\theta_{1}\) acquires a curvature (\(r\)) dependence. Due to the curvature dependence, two opposite injected spin currents (\(j_{0}\) from Fig. 1(b) and \(-j_{0}\) from Fig. 1(c)) lead to different spatial distributions of the observables (non-reciprocal spin hydrodynamics). The non-reciprocity in the curved geometry contradicts neither the time-reversal symmetry nor an inversion at the origin (\(r=0\)), as a uniform circular spin current is even under those symmetries. When the discreteness of the lattice rotational symmetry becomes relevant, the Lagrangian and EOM acquire \(\mathbb{Z}_{n}\) terms in the EOM Eq. (9), where the U(1) spacetime symmetry reduces to the \(\mathbb{Z}_{n}\) spacetime symmetry [47]. The \(\mathbb{Z}_{n}\) theory leads to a gapped ground state at equilibrium, whose low-energy spin transport is characterized by dynamics of domain walls [13; 11; 20]. A \(\mathbb{Z}_{n}\) term \(\tilde{c}_{n}\sin(n\theta)\) also gives rise to similar spacetime oscillations in the observables under the spin injection [13], while they have no geometric dependence. On the contrary, as described above, the spacetime oscillations induced by the SOC (\(\alpha\)) have non-reciprocal and curvature-dependent hydrodynamics in the curved geometry.
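As a rough numerical illustration of these spacetime oscillations, Eq. (9) can be integrated directly with finite differences. The sketch below is our own simplification, not the paper's computation: it imposes the steady current \(j_{0}\) as a plain Neumann condition at both ends rather than the full junction condition Eq. (13), so it reproduces the qualitative oscillatory behavior at order \(\alpha\) but not the exact boundary physics.

```python
import numpy as np

alpha, j0, k0 = 0.1, 0.5, 0.5        # SOC strength, injected current, junction scale
L, nx = 20.0, 400
dx = L / nx
dt = 0.4 * dx                        # stable: wave speeds never exceed sqrt(1+alpha)

x = np.linspace(0.0, L, nx + 1)
theta = -j0 * x                      # zeroth-order profile at t = 0, cf. Eq. (15)
theta_prev = theta - dt * k0 * j0    # encodes the initial velocity theta_t = k0*j0

def accel(th):
    """Right-hand side of Eq. (9) rewritten as theta_tt = F[theta]."""
    thx = np.gradient(th, dx)
    thxx = np.gradient(thx, dx)
    return (1 - alpha * np.cos(2 * th)) * thxx + alpha * thx**2 * np.sin(2 * th)

for _ in range(2000):                # leapfrog time stepping
    theta_next = 2 * theta - theta_prev + dt**2 * accel(theta)
    theta_next[0] = theta_next[1] + j0 * dx    # d(theta)/dx = -j0 at x = 0
    theta_next[-1] = theta_next[-2] - j0 * dx  # simplified outflow at x = L
    theta_prev, theta = theta, theta_next

s = (theta - theta_prev) / dt        # spin density s = theta_t, cf. Eq. (8)
print(s.min(), s.max())              # stays near k0*j0 up to O(alpha) oscillations
```

With `alpha = 0` the code preserves the exact zeroth-order solution \(\theta_{0}=k_{0}j_{0}t-j_{0}x\); turning the SOC on superposes the small oscillating components analogous to \(\theta_{1}\) and \(\theta_{2}\) in Eq. (18).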
_Dissipation effects._--In the presence of the Galilean covariance, a uniform superfluid moving slower than the velocity of its Goldstone mode achieves a local energy minimum, so that it is stable against dissipation by local perturbations (e.g. elastic scattering by disorder) [47; 13]. To see the stability of a supercurrent state with the broken U(1) spacetime symmetry, we compare the classical energies of the 1D solution \(\theta(x,t)\) and its local deformation \(\theta(x,t)+\delta\theta(x,t)\). The deformation \(\delta\theta(x,t)\) is induced by local perturbations, so the spacetime derivatives of \(\delta\theta\) do not contain any uniform component in spacetime. \(\theta(x,t)+\delta\theta(x,t)\) as well as \(\theta(x,t)\) is a classical solution of Eq. (9), while they do not necessarily share the same boundary conditions. If \(\theta\) has lower classical energy than \(\theta+\delta\theta\) for any \(\delta\theta\), the supercurrent state is (locally) stable. The classical energy in the 1D model can be evaluated from a Hamiltonian,

\[H[\theta]=\int dx\Big{\{}\frac{1}{2}(\partial_{t}\theta)^{2}+\frac{1}{2}(\partial_{x}\theta)^{2}[1-\alpha\cos(2\theta)]\Big{\}}. \tag{21}\]

As the classical energies are independent of time, for simplicity, we compare time averages of the energies (with \(k_{0}\neq 1\)) over a large period of time \(T\) [47],

\[\Delta J\equiv\lim_{T\rightarrow\infty}\frac{1}{T}\Big{(}\int_{0}^{T}dt\,H[\theta+\delta\theta]-\int_{0}^{T}dt\,H[\theta]\Big{)}\\ =\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}dt\int dx\{(\partial_{t}\theta)(\partial_{t}\delta\theta)+(\partial_{x}\theta)(\partial_{x}\delta\theta)\\ \quad\times[1-\alpha\cos(2\theta)]+\alpha(\partial_{x}\theta)^{2}\sin(2\theta)(\delta\theta)\}+\mathcal{O}((\delta\theta)^{2})\\ =2\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}dt\int dx(\partial_{x}\theta_{2})(\partial_{x}\delta\theta_{0})+\mathcal{O}(\alpha^{2}\delta\theta,(\delta\theta)^{2}), \tag{22}\]

with \(\theta+\delta\theta=\theta_{0}+\delta\theta_{0}+\mathcal{O}(\alpha)\) [47]. Terms oscillating in space or time vanish after the spacetime integrals. \(\delta\theta_{0}\), as well as \(\theta_{2}\), is a solution of Eq. (17), and both are given by linear superpositions of \(e^{iq(t-x)}\) and \(e^{iq(x+t)}\) over \(q\). Thus, for a given \(\theta_{2}\neq 0\), one can always choose \(\delta\theta_{0}\) such that the spacetime integral of the right-hand side of Eq. (22) remains non-zero and negative, i.e. \(\Delta J<0\). This means that the supercurrent state is classically unstable toward other states, and energy always dissipates by the local perturbations. Note that in the spin-injection model, \(\theta_{2}\) effectively vanishes in a limit \(k_{0}\to 0\) [47], while the supercurrent state with the finite SOC is still classically unstable toward other states due to higher-order terms in \(\alpha\). To see effects of the energy dissipation, one can include a spin relaxation term \(-T_{1}^{-1}\partial_{t}\theta\) into the classical EOM, i.e. the right-hand side of Eq. (9). Due to finite \(T_{1}\), the zeroth-order solution \(\theta_{0}\) shows linear decay of the spin current in space [13; 11], which contrasts with the exponential decay in the non-superfluid [13]. When a phase accumulation \(\gamma\) due to the spatially dependent current is small, a double expansion in \(\alpha\) and \(\gamma\) enables a perturbative solution of \(\theta\) systematically [47].
The solution suggests that the spin density and current remain periodic in time, and they show non-reciprocal and curvature-dependent spin transport in the curved geometry. _Summary._-- In this Letter, we generalize the U(1) internal symmetry in conventional superfluid theories into the U(1) spacetime symmetry. Due to the joint symmetry, the supercurrent state shows geometry-dependent spacetime oscillations, and it is unstable against the dissipation effect. Our study paves the way for further exploration of multiple spacetime symmetries and their coupling with internal symmetries. ###### Acknowledgements. We are grateful to Lingxian Kong, Zhenyu Xiao, and Xuesong Hu for their helpful discussions. The work was supported by the National Basic Research Programs of China (No. 2019YFA0308401) and the National Natural Science Foundation of China (No. 11674011 and No. 12074008).
2309.03165
A Semiparametric Generalized Exponential Regression Model with a Principled Distance-based Prior for Analyzing Trends in Rainfall
The Western Ghats mountain range holds critical importance in regulating monsoon rainfall across Southern India, with a profound impact on regional agriculture. Here, we analyze daily wet-day rainfall data for the monsoon months between 1901-2022 for the Northern, Middle, and Southern Western Ghats regions. Motivated by an exploratory data analysis, we introduce a semiparametric Bayesian generalized exponential (GE) regression model; despite the underlying GE distribution assumption being well-known in the literature, including in the context of rainfall analysis, to the best of our knowledge, no research has explored it in a regression setting. Our proposed approach involves modeling the GE rate parameter within a generalized additive model framework. An important feature is the integration of a principled distance-based prior for the GE shape parameter; this allows the model to shrink to an exponential regression model that retains the advantages of the exponential family. We draw inferences using the Markov chain Monte Carlo algorithm. Extensive simulations demonstrate that the proposed model outperforms simpler alternatives. Applying the model to analyze the rainfall data over 122 years provides insights into model parameters, temporal patterns, and the impact of climate change. We observe a significant decreasing trend in wet-day rainfall for the Southern Western Ghats region.
Arijit Dey, Arnab Hazra
2023-09-06T17:07:21Z
http://arxiv.org/abs/2309.03165v1
# A Semiparametric Generalized Exponential Regression Model with a Principled Distance-based Prior for Analyzing Trends in Rainfall

###### Abstract

The Western Ghats mountain range holds critical importance in regulating monsoon rainfall across Southern India, with a profound impact on regional agriculture. Here, we analyze daily wet-day rainfall data for the monsoon months between 1901-2022 for the Northern, Middle, and Southern Western Ghats regions. Motivated by an exploratory data analysis, we introduce a semiparametric Bayesian generalized exponential (GE) regression model; despite the underlying GE distribution assumption being well-known in the literature, including in the context of rainfall analysis, to the best of our knowledge, no research has explored it in a regression setting. Our proposed approach involves modeling the GE rate parameter within a generalized additive model framework. An important feature is the integration of a principled distance-based prior for the GE shape parameter; this allows the model to shrink to an exponential regression model that retains the advantages of the exponential family. We draw inferences using the Markov chain Monte Carlo algorithm. Extensive simulations demonstrate that the proposed model outperforms simpler alternatives. Applying the model to analyze the rainfall data over 122 years provides insights into model parameters, temporal patterns, and the impact of climate change. We observe a significant decreasing trend in wet-day rainfall for the Southern Western Ghats region.

Keywords: Climate change; Generalized exponential distribution; Markov chain Monte Carlo; Penalized complexity prior; Semiparametric Bayesian regression; Western Ghats; Wet-day precipitation modeling

## 1 Introduction

The Western Ghats, a prominent mountain range along the western coast of India, plays a crucial role in shaping the climatic patterns and hydrological dynamics of Southern India. Known for its exceptional biodiversity, lush forests, and vital water resources, the Western Ghats has long captured the attention of researchers and environmentalists [32, 50]. Among the various climatic parameters that influence this ecologically significant region, rainfall is a crucial driver of its diverse ecosystems, water availability, and overall environmental health. The Western Ghats, characterized by its rugged terrain and proximity to the Arabian Sea, experiences a unique and intricate rainfall pattern heavily influenced by monsoon dynamics [48]. Over the last century, this region has experienced notable climatic shifts due to global climate change and local human activities [1; 49]. Analyzing wet-day rainfall in this region during monsoon months over an extended period of 122 years using a flexible statistical model offers a valuable opportunity to gain insights into long-term trends, variability, and potential shifts in the monsoonal regime.

Researchers have widely employed the exponential distribution to model rainfall data [16; 46] in the literature; its simplicity integrates seamlessly into hydrological and climatological frameworks. However, contemporary research increasingly recognizes the need for innovative probability distribution models to better encompass complex real-world data patterns. This realization has prompted the introduction of novel probability classes with far-reaching implications across diverse research domains. [2] provides an excellent overview of newly developed distributions.
A notable collection of models is that of generalized distributions, which are gaining attention from both practical and theoretical statisticians for their adaptability to various datasets. Some examples include the Marshall-Olkin generalized exponential distribution [31], the generalized inverse Gaussian distribution [21], the generalized Rayleigh distribution [26], etc. A more comprehensive examination of these distributions is available in [45]. [10] introduced another crucial generalized distribution called the generalized exponential (GE) distribution, which emerges as a specific case within the three-parameter exponentiated-Weibull model. The GE distribution has two parameters: a shape and a rate (or scale, defined as the inverse of rate) parameter. This distribution boils down to an exponential distribution when the shape parameter is one. Thus, with an additional shape parameter, it expands the capabilities of the exponential distribution, making it more adaptable to various datasets. Since its introduction, many researchers have made substantial advancements in exploring different properties, estimation strategies, extensions, and applications of this distribution. For instance, [11] established the efficacy of the GE distribution compared to the gamma or Weibull distributions, whereas [12] discussed different methods of estimating the parameters of the GE distribution. [20], [38], and [25] explored Bayesian estimation and prediction methods in this context. [13] reviewed the existing results and discussed some new estimation methods and their characteristics. Numerous researchers have modeled experimental data using the GE distribution across several disciplines, like meteorological studies [15; 29]; flood frequency analysis [30]; reliability analysis [3]; lifetime analysis [4]; risk analysis [42]. But, to the best of our knowledge, no GE regression-type model has ever been proposed in the literature. In this study, taking a step beyond exponential regression, we employ the GE regression framework to explore rainfall patterns.

Rainfall data collected over a century are inherently nonstationary. Here, modeling the temporal trend using traditional parametric regression would struggle to capture the intricate and evolving short-term temporal patterns. In this context, a semiparametric regression setup emerges as a promising approach. In the existing literature, many researchers have delved into applying semiparametric regression techniques for analyzing rainfall patterns [34; 53]. While a generalized linear model (GLM) assumes the link function to be a linear combination of the covariates, the more flexible generalized additive models [GAM, 14] allow the link function to be a sum of nonlinear smooth functional forms of the underlying covariates. We generally model each smooth function in GAMs as a linear combination of basis functions like cubic B-splines. Instead of estimating the entirely unknown function, we draw inferences based on basis function coefficients [8]. Henceforth, instead of GAM, we use the term 'semiparametric regression', which is common in Bayesian nonparametrics. The rate parameter of the GE distribution is always positive, and hence, it would be reasonable to model the log-rate in a semiparametric regression framework. Within the Bayesian methodology, priors hold a pivotal role in inference, and the literature provides a diverse spectrum of prior distributions utilized for regression coefficients in semiparametric regression frameworks.
For instance, a Gaussian prior was employed by [7], while [27] opted for a Laplace prior. [28] utilized Zellner's \(g\)-prior, while [6] considered flat priors, and [23] used the Normal-Gamma prior. On the other hand, the gamma distribution has consistently been considered the most natural prior choice for the shape parameter of the GE distribution; the authors who introduced the GE distribution chose a gamma prior for the shape parameter in [25] as well. Besides, [38] and [22] also employed a gamma prior for the shape parameter. However, the literature demonstrates that a handful of alternative prior choices have also been utilized. For example, [33] employed Jeffreys' prior, indicating their preference for an objective prior, and [5] opted for a non-informative prior in their study. The Penalized Complexity (PC) prior, introduced by [43], which mitigates model complexity through penalization, has emerged in the recent literature. In cases where a model extends from a simpler foundational model by incorporating an additional parameter, this type of prior becomes applicable; it penalizes the escalation in model complexity that arises when favoring the extended model over its more straightforward counterpart. The existing literature encompasses instances of this approach across various models [47]. [51] developed PC priors for estimating the effective degrees of freedom in Bayesian penalized splines (P-splines), while [35] discussed a PC prior for the skewness parameter of the power links family, and [44] proposed interpretable and comprehensive PC priors for the coefficients of a stationary autoregressive process.

In this paper, along with modeling the wet-day rainfall of the Western Ghats region for the last century by a semiparametric Bayesian GE regression model, we employ the PC prior for the GE shape parameter, which allows the GE regression to shrink towards an exponential regression. Thus, the exponential distribution is considered the base model for the GE distribution. In several practical examples [16; 17], the exponential distribution is already a reasonable model and enjoys several benefits of being a member of the exponential family, so shrinking the GE distribution to its base model by shrinking the shape parameter to one is justified. On the other hand, we opt for independent Gaussian priors for the regression coefficients. We draw inferences using the Markov chain Monte Carlo (MCMC) algorithm; here, conjugate priors are not available for the model parameters, and thus we update them using Metropolis-Hastings steps. We conduct a thorough simulation study by simulating 1000 datasets from each combination of the model generating and model fitting scenarios, and we compare the performances of parametric and semiparametric Bayesian GE regression models under the conventional gamma prior choices for the GE shape parameter along with our proposed one. We study the coverage probabilities for the shape parameter and the rate functions and compare these two models using the widely applicable information criterion [WAIC, 52]. We apply the proposed methodology to the daily wet-day precipitation spanning from 1901 to 2022 in different regions of the Western Ghats mountain range, using the year as a covariate and wet-day precipitation as a response. We study the convergence and mixing of the MCMC chains and compare different model fits in terms of WAIC.

The paper is structured as follows: Section [2] delves into the GE distribution, thoroughly examining its properties.
Section [3] discusses an exploratory data analysis that justifies our semiparametric GE model assumption for the wet-day precipitation data. In Section [4], we introduce the GE regression model. Proceeding to Section [5], we concentrate on delineating the prior specifications for the regression model, including introducing a principled distance-based prior for the shape parameter of the GE distribution. Bayesian parameter inference is addressed in Section [6]. Section [7] presents the outcomes of the simulation study, while Section [8] discusses the results obtained from our proposed model and some simpler alternatives. Finally, Section [9] summarizes our findings and contributions. ## 2 Background: Generalized Exponential (GE) Distribution We say a random variable \(X\) follows GE distribution if its cumulative distribution function (CDF) is given by \[F(x;\alpha,\lambda)=\left(1-e^{-\lambda x}\right)^{\alpha};\quad x,\alpha, \lambda>0,\] where \(\alpha\) is the shape parameter and \(\lambda\) is the rate parameter. The corresponding probability density function (PDF) is given by \[f(x;\alpha,\lambda)=\alpha\lambda\left(1-e^{-\lambda x}\right)^{\alpha-1}e^{- \lambda x};\quad x,\alpha,\lambda>0. \tag{1}\] The GE distribution is a more complex model than the exponential distribution, as it incorporates an extra shape parameter. Both models coincide when \(\alpha\) = 1. ### Properties of GE The hazard function of the GE distribution is given by \[h(x;\alpha,\lambda)=\frac{f(x;\alpha,\lambda)}{1-F(x;\alpha,\lambda)}=\frac{ \alpha\lambda\left(1-e^{-\lambda x}\right)^{\alpha-1}e^{-\lambda x}}{1-\left( 1-e^{-\lambda x}\right)^{\alpha}};\quad x>0.\] The GE distribution has an increasing or decreasing hazard rate depending on the value of the shape parameter. The hazard function is decreasing for \(\alpha<1\), constant for \(\alpha=1\), and increasing for \(\alpha>1\). The moment generating function (MGF) of the GE distribution is given by \[M_{X}(t)=\frac{\Gamma(\alpha+1)\Gamma(1-\frac{t}{\lambda})}{\Gamma(1+\alpha- \frac{t}{\lambda})};\ 0\leq t<\lambda,\] and differentiating the log of the MGF with respect to \(t\) repeatedly and then setting \(t=0\), we get the expectation, variance, and skewness of GE distribution as \[\mathrm{E}(X) = \lambda^{-1}\left[\psi(\alpha+1)-\psi(1)\right],\] \[\mathrm{V}(X) = \lambda^{-2}\left[\psi^{(1)}(1)-\psi^{(1)}(\alpha+1)\right],\] \[\mathrm{Skewness}(X) = \left[\psi^{(2)}(\alpha+1)-\psi^{(2)}(1)\right]\bigg{/}\left[ \psi^{(1)}(1)-\psi^{(1)}(\alpha+1)\right]^{\frac{3}{2}},\] where \(\psi^{(m)}(z)=\dfrac{\partial^{m}}{\partial z^{m}}\psi(z)=\dfrac{\partial^{m+1 }}{\partial z^{m+1}}\ln\Gamma(z)\) is the polygamma function of order \(m\); for \(m=0\), it denotes the digamma function. Figure 1 sheds light on different aspects of the GE distribution, e.g., PDF, hazard function, mean, variance, and skewness. The top-left panel of Figure 1 shows that for \(\alpha<1\), the curve depicting the PDF of the GE distribution has an asymptote at the Y-axis and then decreases exponentially and monotonically as we move across the positive real line. With \(\alpha=1\), GE coincides with the exponential distribution, thus having mode at zero (with value \(=\lambda\)) and gradually decreasing similarly as the previous case. When \(\alpha>1\), the curve initiates at zero, then increases over a range of values, and eventually decreases monotonically, having a unique mode at \(\log(\alpha)/\lambda\). 
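These closed-form expressions are straightforward to verify numerically. Below is a short, self-contained Python sketch (the function names are ours); the sampler uses plain inversion of the CDF, \(F^{-1}(u)=-\log(1-u^{1/\alpha})/\lambda\).

```python
import numpy as np
from scipy.special import digamma, polygamma

def ge_pdf(x, alpha, lam):
    """PDF of the GE distribution, Eq. (1)."""
    return alpha * lam * (1 - np.exp(-lam * x))**(alpha - 1) * np.exp(-lam * x)

def ge_mean_var(alpha, lam):
    """Mean and variance from the polygamma identities above."""
    mean = (digamma(alpha + 1) - digamma(1)) / lam
    var = (polygamma(1, 1) - polygamma(1, alpha + 1)) / lam**2
    return mean, var

def ge_sample(alpha, lam, size, seed=1):
    """Inversion sampling: F^{-1}(u) = -log(1 - u**(1/alpha)) / lam."""
    u = np.random.default_rng(seed).uniform(size=size)
    return -np.log(1.0 - u**(1.0 / alpha)) / lam

x = ge_sample(2.0, 1.5, 200_000)
print(x.mean(), x.var())   # close to ge_mean_var(2.0, 1.5) ≈ (1.00, 0.56)
```

For \(\alpha=2\), \(\lambda=1.5\), the formulas give \(\mathrm{E}(X)=(\psi(3)-\psi(1))/1.5=1\) exactly, which the Monte Carlo estimates reproduce to sampling error.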
As mentioned earlier, the top-right panel of Figure 1 shows that the hazard function is monotonically decreasing when \(\alpha<1\), monotonically increasing when \(\alpha>1\), and constant (the value being \(\lambda\), set to 1 in the figure) when \(\alpha=1\). The mean and variance of GE behave somewhat in a similar manner. From the bottom-left and the bottom-middle panels of Figure 1, we see that for a fixed value of \(\alpha\), both mean and variance decrease with increasing \(\lambda\), and for a fixed value of \(\lambda\), both increase as \(\alpha\) increases. On the other hand, the skewness of the GE distribution depends only on the shape parameter and decreases exponentially with increasing \(\alpha\) (bottom-right panel of Figure 1).

Figure 1: Generalized Exponential probability density function (top-left), hazard function (top-right), mean (bottom-left), variance (bottom-middle), and skewness (bottom-right) functions. Both top panels share the same legend.

## 3 Data Preprocessing and Exploratory Data Analysis

In this section, we describe the preprocessing steps involved in obtaining the dataset comprising average daily precipitation data for rainy days in the Northern, Middle, and Southern Western Ghats regions during the monsoon months between 1901-2022. Besides, we discuss pieces of evidence based on exploratory data analysis that confirm the suitability of a GE distribution for fitting the data and that help determine whether a semiparametric mean structure is necessary or a linear trend structure would suffice.

We obtain daily gridded rainfall (in mm) data over the Western Ghats region with a spatial resolution of \(1.0^{\circ}\times 1.0^{\circ}\), covering the period from 1901-2022. The data was sourced from the official website of the Indian Meteorological Department, Pune ([https://www.imdpune.gov.in/cmpg/Griddata/Rainfall_1_NetCDF.html](https://www.imdpune.gov.in/cmpg/Griddata/Rainfall_1_NetCDF.html)). The gridded data product was obtained through spatial interpolation of ground-station data following the procedure described in [36]. We extract the daily rainfall information for June, July, August, and September (JJAS) throughout all the years. Additionally, we exclude days within the JJAS months where recorded rainfall amounts were zero. We group the pixels representing the Western Ghats area into three distinct significant regions: the Northern, Middle, and Southern regions (the regions are shown in the supplementary material). We compute the daily rainfall values for each region by calculating the average of the corresponding pixel values within that region. Afterward, we conduct further analysis based on these regions.

Given that our dataset (after preprocessing) spans over a century, our initial focus involves performing the necessary analyses to address any potential trends within the data. In the top panels of Figure 2, we present a bar diagram depicting the average rainfall for each year. No clear long-term linear trend is observable for any of the three regions. However, several short-term upward and downward patterns are noticeable. We use a basis spline regression approach to explore such short-term trends, which treats daily rainfall values as response variables and corresponding years as covariates. Considering the residuals from this regression, we can effectively eliminate any potential trends embedded in the data. We overlap the estimated means with the bar diagrams in the top panels of Figure 2. Firstly, the estimated mean curve aligns well with the visualized bar diagram.
Moreover, both components highlight the presence of a nonstationary rainfall pattern. This pattern, in turn, underscores the suitability of employing a semiparametric regression model, which can effectively accommodate and incorporate these nonstationary patterns within the data.

Figure 2: Bar diagrams of the annual average wet-day rainfall during June through September, along with fitted mean curves based on twelve cubic B-splines with equidistant knots, for the Northern, Middle, and Southern Western Ghats regions (top panels). Histograms of the detrended residuals from the daily rainfall overlapped with the fitted GE densities (bottom panels).

Subsequently, after conducting a fundamental analysis to identify and remove outliers via the popular adjusted-boxplot method developed by [19], we present two important visualizations in the bottom panels of Figure 2, where the panels correspond to the three regions of interest. Firstly, we showcase a histogram illustrating the distribution of the detrended residuals, obtained by exponentiating the residuals from a semiparametric regression curve fitted to the log-transformed wet-day rainfall observations, which aligns with the standard link function formulations for generalized additive models. Additionally, a red line denotes the fitted density of the GE distribution, with parameters estimated from the detrended residuals. We observe a strong alignment between the estimated density and the associated histograms, indicating a favorable fit. This visual representation significantly supports the rationale behind the GE regression model proposed in this paper. Additionally, the plots highlight a marked convergence of the GE distributions towards their foundational model, the exponential distribution. This convergence also reinforces our second consideration: using a novel distance-based prior for the shape parameter of the GE distribution.

## 4 Generalized Exponential (GE) Regression

The GE regression model is a statistical model that we can use for modeling continuous positive-valued response variables. It can be considered an extension of the standard linear regression model that allows for non-Gaussian and asymmetric distributions, accommodating heteroscedasticity and skewness in the data. The regression framework assumes that the response variable \(Y\) follows a GE distribution, and it models the relationship between \(Y\) and the covariates \(\mathbf{X}=(X_{1},\ldots,X_{P})^{\prime}\) through a linear predictor \(\eta\). The linear predictor is a linear combination of the covariates with associated regression coefficients, given as

\[\eta(\mathbf{X})=\beta_{0}+\beta_{1}X_{1}+\beta_{2}X_{2}+\cdots+\beta_{P}X_{P}, \tag{2}\]

where \(\mathbf{\beta}=(\beta_{0},\beta_{1},\ldots,\beta_{P})^{\prime}\) is the vector of regression coefficients. In the GE regression model, the shape parameter \(\alpha\) is considered an inherent property of the distribution that characterizes the shape and asymmetry of the distribution, allowing for a more flexible modeling approach compared to a standard linear regression with a Gaussian error component. On the other hand, the rate parameter \(\lambda\) is the parameter of interest in the regression model, which captures the association between the covariates and the response variable. By incorporating the covariates through the rate parameter, the model captures how changes in the covariate values influence the rate or dispersion of the response variable.
Moreover, given that the rate parameter of the GE distribution is positive, we relate it to the linear predictor from (2) using a link function, which is designed to ensure the rate parameter always stays positive. Thus, we can conceptualize the GE regression model as

\[Y_{i}|\mathbf{X_{i}}=\mathbf{x_{i}}\sim GE(\alpha,\lambda_{i}),\]

where \(g(\lambda_{i})=\eta(\mathbf{x_{i}})\), with \(g(\cdot)\) representing the appropriate link function and \(\mathbf{x_{i}}=(x_{i1},x_{i2},\ldots,x_{iP})^{\prime}\). The parameters in the generalized exponential regression model, including the regression coefficients and the shape parameter, are typically estimated using maximum likelihood estimation or other suitable estimation methods.

The theory explained above introduces a parametric framework for the GE regression model, aiming to capture the relationship between the response variables and the covariates by utilizing rate parameters. However, this paper adopts a semiparametric approach to achieve the same objective, which allows us to incorporate both parametric and nonparametric components into the model, providing flexibility and accommodating potential complexities in the relationship between the covariates and the response variables. We introduce the specific form of the predictor function and other essential formulations following a general introduction to semiparametric regression models.

### Semiparametric Regression

Parametric regression assumes a known distribution for the response variable and estimates a finite-dimensional set of parameters. It is straightforward, provides easy estimation, and offers interpretability. However, it may struggle to capture nuanced aspects of the data, such as non-linearity or variable interactions. In contrast, non-parametric regression assumes no specific form for the relationship between the response and explanatory variables, allowing flexibility based on data-derived information. While this approach allows for more flexible modeling, it is computationally intensive, less interpretable, and can be affected by the _curse of dimensionality_. Semiparametric regression integrates the above two approaches, allowing us to have the best of both regimes. It incorporates the interpretability of the parametric setup and the flexibility of the nonparametric setup. In linear or generalized linear models (GLM) that fall under the parametric setup, we assume the conditional mean of the distribution of the response variable is linked with the linear predictor through a linear combination of the covariates or their functions. The semiparametric setup extends this domain of regression models by introducing nonparametric components in the linear predictors. In this setup, the most general formulation of the linear predictor can be given as

\[\eta(\mathbf{x})=\sum_{p=1}^{P}f_{p}(x_{p}), \tag{3}\]

where the \(f_{p}\)'s are smoothing functions of continuous covariates and \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{P})^{\prime}\). Regarding smoothing functions, most semiparametric methods assume they can be expressed as a linear combination of finite basis functions, often denoted as

\[f_{p}(z)=\sum_{k=1}^{K_{p}}\beta_{p,k}B_{p,k}(z);\ \ p=1,2,\ldots,P, \tag{4}\]

where the \(B_{p,k}(\cdot)\)'s are known basis functions and the \(\beta_{p,k}\)'s are unknown basis function coefficients that determine the shape of the smoothing function \(f_{p}(z)\). A basis expansion of \(M\) terms can match the true curve \(f_{p}(\cdot)\) at any \(M\) points \(X_{1},\ldots,X_{M}\) in the range of covariates.
Hence, increasing \(M\) gives us an arbitrarily flexible model. In this study, we employ a semiparametric model akin to (3) for the rate parameter of the GE distribution. With a covariate vector comprising \(P\) components and the appropriate logarithmic link function, the regression model takes the form:

\[Y_{i}|\mathbf{X_{i}}=\mathbf{x_{i}}\sim GE\big{(}\alpha,\lambda(\mathbf{x_{i}})\big{)}\text{ with }\log\big{\{}\lambda(\mathbf{x_{i}})\big{\}}=\sum_{p=1}^{P}\sum_{k=1}^{K_{p}}\beta_{p,k}B_{p,k}(x_{ip}), \tag{5}\]

where the \(B_{p,k}(\cdot)\)'s are cubic B-splines and the \(\beta_{p,k}\)'s are the spline coefficients representing the weights assigned to the corresponding spline functions. To provide a brief overview, a cubic B-spline is a piecewise-defined cubic polynomial function that is defined on a set of knots or control points, taking the form \(B_{s}(x)=(x-v_{s})_{+}^{3}\), where the \(v_{s}\)'s are fixed knots that span the range of \(x\) and '+' denotes the positive part.

## 5 Prior Specification

Selecting an appropriate prior is one of the most crucial parts of a Bayesian analysis. While there is no universal rule for constructing an optimal prior distribution, choosing an appropriate prior can significantly enhance the quality of the study. A well-chosen proper prior can stabilize the posterior distribution and yield better results compared to an improper prior [Chapter 4, 39]. This section defines the prior distributions for the parameters of the regression model presented in (5). In semiparametric Bayesian regression, instead of formulating an explicit prior distribution for \(\lambda\), independent prior distributions are explicitly specified for the spline coefficients \(\mathbf{\beta}=(\beta_{1},\beta_{2},\ldots,\beta_{K})^{\prime}\). This paper considers independent weakly-informative Gaussian priors for the \(\beta_{k}\)'s. As for the shape parameter of the GE distribution, we employ a newly developed class of priors. In cases where a model is constructed based on a simpler base model, the chosen prior should accurately reflect the characteristics of the model considered and capture its departure from the base model. This type of prior construction is founded upon the work of [43], who introduced the Penalized Complexity (PC) prior. The PC prior is a model-based prior that imposes a penalty on the deviation of the model under consideration from its simpler base version at a logarithmic constant rate. The following subsections discuss the PC prior for \(\alpha\).

### Penalized Complexity (PC) prior

The PC prior is an informative proper prior that exhibits high-quality robustness properties and invariance under reparameterization. It aims to penalize the complexity that arises when we move from a simpler base model to a more complex one, thereby preventing overfitting and adhering to _Occam's razor principle_ [43]. By using the PC prior, we uphold the _principle of parsimony_, which suggests a preference for simpler models until sufficient evidence supports more complex alternatives. The PC prior is established based on the statistical difference between the proposed complex model and its base model. We quantify this distance using the Kullback-Leibler divergence (KLD) [24]. KLD is an information-based measure that essentially measures how much information we lose when we substitute a complex model having PDF \(f\) with its simpler version having PDF \(g\). For the GE distribution, the exponential distribution is commonly selected as the appropriate base model.
Hence, for our purposes, we take \(f\) to be the GE PDF and \(g\) the exponential PDF. For two continuous distributions with PDFs \(f\) and \(g\) defined over the same support, the KLD is defined as \[\text{KLD}(f\parallel g)=\int_{-\infty}^{\infty}f(y)\log\left(\frac{f(y)}{g(y)}\right)\,dy\,. \tag{6}\] We define the distance between the two models by the 'unidirectional' distance function \(d(f\parallel g)=\sqrt{2\text{KLD}(f\parallel g)}\) [43]. The absence of symmetry in the KLD is not a concern in this context: our focus is on quantifying the additional complexity incurred by employing the intricate model, not the other way around. The main idea of the PC prior is to assign a prior to the distance between the two models, \(d(\alpha)\), rather than directly to the model parameters; a change of variables then yields a prior distribution for the parameter of interest. While constructing the PC prior for the shape parameter \(\alpha\), we regard this distance as a function of \(\alpha\), i.e., \(d(\alpha)=\sqrt{2\text{KLD}(\alpha)}=\sqrt{2\text{KLD}(f\parallel g)}\), with \(f\) and \(g\) being the GE and exponential PDFs, respectively. To incorporate the requirement that the prior decays as a function of the distance between the two models, we adopt the constant-rate penalization assumption and construct the PC prior by assigning an exponential prior to the distance, i.e., \(d(\alpha)\sim\text{Exp}(\theta)\) with \(\theta>0\). This gives the PC prior for \(\alpha\) as \[\pi(\alpha)=\theta e^{-\theta d(\alpha)}\left|\frac{\partial d(\alpha)}{\partial\alpha}\right|, \tag{7}\] where \(\theta>0\) is a user-defined quantity that controls the prior mass in the tail, and hence how informative the PC prior is. This is achieved by imposing the condition \(\Pr[d(\alpha)>U]=\xi\), where \(U\) is the upper bound of the tail event and \(\xi\) is the weight of the event [43].

### PC prior for the shape parameter of GE distribution

This section presents the KLD between our complex model, the GE model (\(f\)), and its natural base model, the exponential (\(g\)), together with the PC prior for the shape parameter \(\alpha\).

**Theorem 5.1**.: _The KLD, with \(f\) being the PDF of the GE distribution given in (1) and \(g\) being the PDF of the exponential distribution with rate \(\lambda\), is given by_ \[\text{KLD}(\alpha)=\log(\alpha)+1/\alpha-1.\]

**Theorem 5.2**.: _The PC prior for the shape parameter (\(\alpha\)) of the Generalized Exponential (GE) distribution is supported over the positive real line and given as_ \[\pi(\alpha)=\frac{\theta}{2}\cdot\exp\left(-\theta\,\sqrt{2\log(\alpha)+\frac{2(1-\alpha)}{\alpha}}\right)\cdot\left(2\log(\alpha)+\frac{2(1-\alpha)}{\alpha}\right)^{-\frac{1}{2}}\cdot\left|\frac{1}{\alpha}-\frac{1}{\alpha^{2}}\right|.\]

Proof.: The proof of Theorem 5.1 is given in Appendix A, and Theorem 5.2 follows directly from (7) and the expression \(d(\alpha)=\sqrt{2\text{KLD}(\alpha)}\). Note that Theorem 5.2 includes a scaling factor which ensures that \(\pi(\cdot)\) integrates to one.

In Figure 3 we illustrate \(\pi(\alpha)\) for different hyperparameter specifications \(\theta\).

Figure 3: The PC prior for the shape parameter of the GE distribution for different choices of the hyperparameter \(\theta\).

We notice a proportional relationship between the value of \(\theta\) and the extent of contraction to the base model. As the value of \(\theta\) decreases, the tails become heavier, resulting in reduced contraction towards the base model.
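As a numerical companion to Theorems 5.1 and 5.2, here is a minimal Python sketch (assuming NumPy and SciPy) that evaluates the PC prior density and checks that it integrates to approximately one; the values of \(\theta\) are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def pc_prior(alpha, theta):
    """PC prior density of Theorem 5.2 for the GE shape parameter alpha > 0."""
    kld = np.log(alpha) + 1.0 / alpha - 1.0      # Theorem 5.1
    d = np.sqrt(2.0 * kld)                       # distance to the exponential base model
    jac = abs(1.0 / alpha - 1.0 / alpha**2)      # |d KLD / d alpha|
    return 0.5 * theta * np.exp(-theta * d) * jac / d

# The density has a removable singularity at alpha = 1, so we integrate on each side.
for theta in (1.0, 2.5, 5.0):
    total = sum(quad(pc_prior, a, b, args=(theta,))[0]
                for a, b in [(1e-10, 1.0), (1.0, np.inf)])
    print(f"theta = {theta}: integral ~ {total:.4f}")
```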
We also observe that for \(\theta<4/3\) the mode of the PDF occurs at a value of \(\alpha\) less than one, while for \(\theta\geq 4/3\) the mode is at \(\alpha=1\) (Figure 3). One might expect the prior to have its mode at \(\alpha=1\) irrespective of the value of \(\theta\). However, we do not require the mode to be at \(\alpha=1\); rather, we rely on a prior that is consistent with the principles of the PC prior.

## 6 Inference

This paper employs Bayesian estimation methods to infer and quantify the uncertainty surrounding the parameters of interest. Fundamentally, Bayesian inference combines prior knowledge or beliefs (the prior probability) regarding a particular event or hypothesis with evidence or data (the likelihood) via Bayes' theorem, and derives a revised or updated belief (the posterior probability), which provides a more comprehensive understanding of the values of the parameters in question and their inherent uncertainty. In our context, the likelihood function based on \(n\) observations from the GE distribution under the regression setting from (5) is given by \[L(\alpha,\boldsymbol{\beta}|\boldsymbol{y})=\prod_{i=1}^{n}f\big{(}y_{i};\alpha,\lambda(\boldsymbol{x_{i}})\big{)}, \tag{8}\] where \(\boldsymbol{y}=(y_{1},y_{2},\ldots,y_{n})^{\prime}\) is the observed data, \(f(y;\alpha,\lambda)\) is the PDF of the GE distribution from (1), and \(\lambda(\boldsymbol{x_{i}})\) takes the form given in (5). Also, let \(\pi(\alpha)\) and \(\pi(\boldsymbol{\beta})\) denote the specified mutually independent priors for the parameters \(\alpha\) and \(\boldsymbol{\beta}\). Combining the priors for the shape parameter \(\alpha\) and the regression coefficients \(\boldsymbol{\beta}\) with the likelihood function in (8), we obtain the joint posterior distribution \(\pi(\alpha,\boldsymbol{\beta}|\boldsymbol{y})\propto L(\alpha,\boldsymbol{\beta}|\boldsymbol{y})\cdot\pi(\boldsymbol{\beta})\cdot\pi(\alpha)\), from which Bayesian inference is facilitated. However, the marginal posterior densities of the parameters are not analytically tractable, leading us to employ simulation-based techniques such as Markov chain Monte Carlo (MCMC) methods or numerical approximation methods like Integrated Nested Laplace Approximations (INLA), introduced by [41]. In this paper, we employ MCMC techniques for parameter inference, specifically utilizing the adaptive Metropolis-Hastings algorithm within Gibbs sampling. We iteratively adjust the variance of the proposal distribution within the chain so that the acceptance rate remains between 0.3 and 0.5. We initiate the MCMC chains with an initial value of 1 for \(\alpha\) and the maximum likelihood estimate for \(\mathbf{\beta}\) as calculated under \(\alpha=1\). Our approach employs an algorithm that updates one parameter at a time. Regarding computational resources, conducting the simulations outlined in Section 7, involving 144 different configurations, each with 1000 datasets, and employing an MCMC output of 10000 iterations, required nearly 8 hours to finalize. This computational task was carried out on a desktop system equipped with a Ryzen 9 5900x 12-core 24-threaded processor and 64GB of RAM.
On average, the computation time for a single MCMC chain was around 1 second for the parametric setup (assuming \(\log(\lambda)\) to be a simple linear function of year) using a simple linear regression model, and approximately 3 seconds for our proposed semiparametric model. Furthermore, it is feasible to describe the limiting distribution of the posterior estimates of the parameters \(\alpha\) and \(\mathbf{\beta}\). In this setting, the _Bernstein-von Mises_ theorem outlines the shape of this asymptotic distribution, delineating how the parameters \(\alpha\) and \(\mathbf{\beta}\) behave as the sample size tends to infinity. To gauge the uncertainty linked to these parameter estimates, we investigate the asymptotic variance of the parameters, which is encapsulated by the inverse of the information matrix. Additional elaboration can be found in the supplementary materials.

## 7 Simulation Study

We conducted an extensive simulation study to demonstrate the effectiveness of the PC prior and of the proposed semiparametric model over a parametric model, where the GE rate parameter (in log scale) is modeled as a simple linear function of the covariate(s). We designed two separate simulation setups for this purpose. In the first setup, we compared the PC prior to the conventional gamma prior for the shape parameter of the GE distribution under four different scenarios: Setting 1, generating data from a linear setup and fitting a parametric model; Setting 2, generating data from a nonlinear setup and fitting a parametric model; Setting 3, generating data from a linear setup and fitting a semiparametric model; and Setting 4, generating data from a nonlinear setup and fitting a semiparametric model. For each setting, we used four different prior specifications: the PC prior with parameters 2.5 and 5, and the gamma prior with parameters \((0.01,0.01)\) and \((1,1)\). We compared their effectiveness in estimation. Additionally, we considered two cases for the number of samples, namely \(n=24\) and \(n=99\), to gain insights into parameter estimation in scenarios with small and large sample sizes, respectively. In each case of this simulation setup, we calculated the coverage probability of \(\alpha\) based on 95% credible intervals and the absolute bias in estimating \(\alpha\) to facilitate the comparison. In the second simulation setup, we compared the efficiency of the semiparametric model with that of the parametric model. We considered four settings for this comparison: Setting 5, generating data from a linear setup and using a gamma prior for \(\alpha\); Setting 6, generating data from a linear setup and employing a PC prior for \(\alpha\); Setting 7, generating data from a nonlinear setup and using a gamma prior for \(\alpha\); and finally, Setting 8, generating data from a nonlinear setup and employing a PC prior for \(\alpha\). We fitted the parametric and semiparametric models for each setting and compared their goodness of fit. We also examined variations in hyperparameters for each case. Specifically, in Settings 5 and 7, we considered the gamma prior with parameters \((0.01,0.01)\) and \((1,1)\), respectively. In Settings 6 and 8, we employed the PC prior with parameters \(2.5\) and \(5\), respectively. Additionally, we explored two cases for the number of samples, namely \(n=24\) and \(n=99\), with the same objective as in the previous setup.
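For completeness, the data-generation step can be reproduced by inverting the GE CDF \(F(y)=(1-e^{-\lambda y})^{\alpha}\). Below is a minimal Python sketch for the linear setup with \(n=24\); the regression coefficients are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

def rge(alpha, lam, rng):
    """Inverse-CDF sampling from GE(alpha, lambda): solve (1 - exp(-lam*y))^alpha = u."""
    u = rng.uniform(size=np.shape(lam))
    return -np.log1p(-u ** (1.0 / alpha)) / lam

rng = np.random.default_rng(1)
x = np.arange(1, 25) / 25.0              # covariate sequence (0.04, 0.08, ..., 0.96)
beta0, beta1, alpha = 0.5, -1.0, 2.0     # hypothetical coefficients and shape
lam = np.exp(beta0 + beta1 * x)          # log-link applied to the linear predictor
y = rge(alpha, lam, rng)                 # one simulated dataset of size n = 24
```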
In every case within this simulation setup, we computed the absolute fitting error and the WAIC of the model fit with the estimated parameters as part of the comparison. All the simulations were conducted using three different values of the shape parameter \(\alpha\): \(0.5\), \(1\), and \(2\). The choice of these values allowed us to explore various scenarios: \(\alpha=1\) represents the base scenario, in which the GE distribution reduces to an exponential distribution, while \(\alpha=0.5\) and \(\alpha=2\) indicate deviations from exponential-like behavior. We utilized two different covariate sequences depending on the sample size. In (5), we assume \(P=1\) under all the simulation settings. When the sample size is \(n=24\), the covariate sequence we considered was \(\mathbf{X}=(0.04,0.08,\ldots,0.96)^{\prime}\). For a larger sample size of \(n=99\), the covariate sequence was \(\mathbf{X}=(0.01,0.02,\ldots,0.99)^{\prime}\). To generate data from the linear model, we constructed a design matrix with two columns: one for the intercept and the other for \(\mathbf{X}\). For nonlinear data generation, however, we modified the second column of the design matrix to include \(\sin(2\pi\mathbf{X})\) instead of \(\mathbf{X}\). When fitting the parametric model, we used the same design matrix as the one employed in the linear data generation process, whereas for the semiparametric setup we introduced basis splines with ten basis functions. In total, we generated 1,000 datasets from the GE distribution. We employed MCMC techniques to infer the parameters, utilizing 4,000 MCMC samples in total. The initial 2,000 samples were considered burn-in samples and excluded from the analysis, while a thinning interval of 5 was applied.

Figure 4: Coverage probabilities (top illustration) and absolute bias values (bottom illustration) computed based on imposing a PC prior. For each illustration, from the left-most to the right-most columns: (i) Setting 1 with \(n=24,99\), (ii) Setting 2 with \(n=24,99\), (iii) Setting 3 with \(n=24,99\), (iv) Setting 4 with \(n=24,99\).

Figure 4 corresponds to the first simulation setup. The coverage probabilities are the proportion of times the true value of \(\alpha\) falls within the 95% credible intervals of \(\alpha\) obtained from MCMC. The top illustration of Figure 4 presents the coverage probabilities of the different simulation setups. Four columns represent Settings 1, 2, 3, and 4, with two rows representing sample sizes of 24 and 99, respectively. Each panel showcases four prior specifications represented by different lines, while the X-axis represents the different \(\alpha\) values considered. The bottom illustration of Figure 4 depicts the identical simulation setups, but focuses on the absolute bias in the estimation of \(\alpha\) instead of the coverage probabilities. Figure 5 focuses on the second simulation setup. In the top illustration of Figure 5, we compare the goodness of fit between the parametric and semiparametric models using the absolute fitting error. Similarly to the previous figure, there are four columns representing Settings 5, 6, 7, and 8, with two rows representing sample sizes of 24 and 99.
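Since WAIC features throughout this comparison, we note a standard way of computing it from MCMC output (a generic sketch, not the authors' exact implementation): given an \(S\times n\) matrix of pointwise log-likelihoods evaluated at \(S\) posterior draws,

```python
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    """WAIC from an (S x n) matrix of pointwise log-likelihoods at S posterior draws."""
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))   # log pointwise predictive density
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))        # effective number of parameters
    return -2.0 * (lppd - p_waic)                          # lower values indicate better fit
```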
Figure 5: Absolute fitting error (top illustration) and WAIC values (bottom illustration) based on fitting a semiparametric GE regression model. For each illustration, from the left-most to the right-most columns: (i) Setting 5 with \(n=24,99\), (ii) Setting 6 with \(n=24,99\), (iii) Setting 7 with \(n=24,99\), (iv) Setting 8 with \(n=24,99\). (For the bottom illustration, the lines representing the gamma prior with parameters \((0.01,0.01)\) and the PC prior with \(\theta=5\) are excluded since they closely resemble the demonstrated prior specifications.)

Each panel displays the different values of \(\alpha\) on the X-axis, and the lines correspond to either the parametric setup or the semiparametric setup, with varying hyperparameter specifications. Moving to the bottom illustration of Figure 5, it provides a similar comparison but focuses on the WAIC of the model fit instead of the absolute fitting error. The two aforementioned simulation setups offer substantial validation of the efficacy of the semiparametric regression setup and the utilization of the PC prior. The top illustration in Figure 4 clearly shows that when the true value of \(\alpha\) is close to one, the PC prior exhibits superior coverage probability compared to the conventional gamma prior. This pattern holds across all four configurations in the first simulation setup, except Setting 2, where both the gamma and PC priors yield undesirable outcomes due to attempts to fit a parametric linear model to highly non-linear data. Furthermore, the bottom illustration in Figure 4 highlights a reduction in estimation bias when \(\alpha\) equals one, aligning with the inherent characteristic of the PC prior to shrink the estimate towards the base model. Additionally, the lower row within the same figure empirically confirms the well-established hypothesis that as the sample size increases, the influence of the prior gradually diminishes. This pattern is evident as the lines representing absolute bias nearly overlap, regardless of the chosen prior. On the other hand, the top illustration in Figure 5 highlights a substantial increase in the absolute fitting error as we transition from a dataset demonstrating a linear trend to one illustrating a nonlinear trend. However, the plots associated with the settings involving linear data generation (Settings 5 and 6) exhibit minimal differences between the two models, with the parametric model occasionally yielding superior results. When examining the settings where the data originate from a nonlinear context (Settings 7 and 8), a distinct and considerable gap emerges between the lines representing the parametric and semiparametric models, with the semiparametric model consistently outperforming the parametric one and occupying a lower position (smaller fitting error) on the graph. The lower illustration in Figure 5 conveys a similar message. In Settings 5 and 6, which pertain to data generated from a linear setup, the parametric model consistently exhibits lower WAIC scores, as one would expect. In Settings 7 and 8, this ordering of WAIC scores is entirely reversed, with the lines representing the semiparametric model falling consistently below those of the parametric model. This reversal underscores the clear advantage of employing semiparametric modeling, particularly in scenarios characterized by high nonlinearity.

## 8 Data Application

We utilized the dataset introduced in Section 3 for the data analysis driving our objective. The dataset comprises the daily average rainfall on the wet days of the monsoon months during 1901-2022, focusing on the Northern, Middle, and Southern Western Ghats regions.
As outlined in Section 3, to account for the nonstationary nature of the dataset, we employed semiparametric regression techniques, using rainfall as the response variable and the corresponding year as the covariate. This analysis was performed separately for all three regions. With the validation from Section 3, we assumed that the daily rainfall follows the GE distribution. Within a regression framework, we examined how the rate parameter of the GE distribution depends on the covariate.

### Model Description

We formulate our model such that the rate parameter of the GE distribution in the regression is influenced by the covariate 'Year' (\(T\)). Then, if \(Y_{t}\) represents our response variable of daily rainfall, our model is given as \[Y_{t}|T=t\sim GE(\alpha,\lambda(t)),\text{ where }\lambda(t)\text{ depends on the covariate }T=t.\] We conducted the analysis using two distinct models. The first model employs a parametric approach to model the rate parameter, while the second model is our proposed semiparametric formulation. We employed a simple linear regression model for the rate parameter in the parametric setting, given as \(\lambda_{L}(t)\) in (9). On the other hand, for the semiparametric regression, we adopted the basis spline regression form presented in (4). While Section 4.1 outlines the theory in a broader context, it is important to note that in this practical application we do not have multiple covariates: we have only one covariate (Year), and consequently, in this section, we take \(P=1\). With this, our model takes the form \(\lambda_{NL}(t)\) in (9), where \[\lambda_{L}(t)=\exp(\beta_{0}+\beta_{1}\ t),\ \lambda_{NL}(t)=\exp\left[\sum_{k=1}^{K}\beta_{k}B_{k}(t)\right]. \tag{9}\] We employed Bayesian methods to estimate the model parameters. As the exploratory analysis shows a strong contraction of the considered GE distribution towards its base model, the exponential one, we used the proposed PC prior for the shape parameter \(\alpha\) and chose independent flat Gaussian priors for the regression parameters. As discussed in Section 6, we resorted to MCMC techniques to draw inferences about the model parameters. We chose to utilize \(K=12\) basis functions for our analysis. With data spanning 122 years, adopting 12 splines enabled us to effectively capture decadal patterns with each spline. To optimize the hyperparameter \(\theta\) of the PC prior and achieve the most precise fit in our semiparametric regression, we computed WAIC values across the grid \(\{0.5,1,\ldots,5\}\) of \(\theta\) values. After examining the Northern, Middle, and Southern regions individually, we identified the values of \(\theta\) that yielded the lowest WAIC values. Specifically, for the Northern Western Ghats region, \(\theta=4.5\) demonstrated the most favorable outcome. Similarly, for the Middle Western Ghats region, \(\theta=3.5\) was identified as optimal, while for the Southern Western Ghats region, \(\theta=1.5\) exhibited superior performance. As a result, these chosen \(\theta\) values were employed for their respective regions during the final model-fitting stage. We fitted six models, corresponding to the two models and the three regions. For each of the model fits, we generated 10,000 MCMC samples for each model parameter. The initial 3,000 samples were removed as burn-in and subsequently excluded from the analysis. Additionally, we employed a thinning interval of 5.
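To make the fitting procedure concrete, the following is a simplified Python sketch of the log-posterior and a random-walk Metropolis update for \(\alpha\) under model (9); it assumes NumPy and a precomputed B-spline design matrix \(B\), uses illustrative prior settings, and is not the authors' exact implementation.

```python
import numpy as np

def ge_loglik(y, alpha, loglam):
    """Log-likelihood of GE(alpha, lambda) observations, cf. (1) and (8)."""
    lam = np.exp(loglam)
    return np.sum(np.log(alpha) + loglam - lam * y
                  + (alpha - 1.0) * np.log1p(-np.exp(-lam * y)))

def log_post(alpha, beta, y, B, theta=2.5, sd_beta=10.0):
    """Unnormalized log-posterior: GE likelihood + PC prior + flat Gaussian priors."""
    if alpha <= 0.0:
        return -np.inf
    d = np.sqrt(2.0 * (np.log(alpha) + 1.0 / alpha - 1.0))   # distance to base model
    # log PC prior, constants dropped; crude guard at the removable point alpha = 1
    log_pc = (-theta * d
              + np.log(abs(1.0 / alpha - 1.0 / alpha**2) + 1e-300)
              - np.log(d + 1e-12))
    log_pbeta = -0.5 * np.sum((beta / sd_beta) ** 2)         # weakly-informative Gaussians
    return ge_loglik(y, alpha, B @ beta) + log_pc + log_pbeta

def update_alpha(alpha, beta, y, B, rng, step=0.1):
    """One random-walk Metropolis step for alpha, with beta held fixed."""
    prop = alpha + step * rng.normal()
    if np.log(rng.uniform()) < log_post(prop, beta, y, B) - log_post(alpha, beta, y, B):
        return prop
    return alpha
```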
To evaluate convergence, mixing, and the behavior of the chains derived from the MCMC process, we visualize the trace plots of the parameters associated with both model fits in the supplementary materials. Specifically, we present the trace plots of the shape parameter \(\alpha\) for each region. The regression parameters also exhibited similarly satisfactory mixing behavior.

### Model Comparison

In this section, we carry out a detailed comparison between the two model fits specified in (9). These model fits were obtained using a Bayesian approach through MCMC simulations, as elaborated upon in Section 8.1. The central focus of this comparison is visually represented in Figure 6. In this figure, we present the estimated mean daily rainfall on wet days for each year spanning the period from 1901 to 2022. These estimates are provided for the three regions separately and for both model fits: the semiparametric one (depicted in red) and the parametric one (depicted in blue). Furthermore, distinct panels are employed to illustrate the various regions (the top panel represents the Northern region, the middle panel corresponds to the Middle region, and the bottom panel depicts the Southern region). In each panel, we display the estimated trajectory alongside a bar diagram, which offers a clear view of the annual average of daily rainfall on wet days for each year.

Figure 6: Estimated mean of daily wet-day rainfall (in mm) with Semiparametric (red line) and Parametric (blue line) models given by (9), along with corresponding point-wise 95% credible intervals (ribbons). The top, middle, and bottom panels show the results for the Northern, Middle, and Southern Western Ghats regions, respectively.

Across all three regions, a noticeable trend emerges: the semiparametric models exhibit a notably superior fit. This distinction becomes evident as we observe multiple abrupt fluctuations in the bars representing the annual averages of wet-day precipitation, and the semiparametric model effectively captures these fluctuations. Particularly noteworthy is the ability of the semiparametric model to accurately capture the nonstationarity present in the precipitation patterns. This heightened ability to encapsulate the dynamic variations in precipitation is a notable strength of the semiparametric model fitting. In Figure 6, we also provide a visual representation of a 95% credible interval for the trajectory, estimated from the MCMC samples. This interval allows us to understand the uncertainty associated with the estimation process.

### Inferences about Western Ghats Rainfall

The fitted model offers valuable insights into the intricate short-term trends and features of rainfall in the Western Ghats region. In line with the numerous instances in the literature where rainfall data have been modeled using an exponential distribution [16; 46], our study also echoes this trend, aptly captured by the PC prior. The estimated GE shape parameters (posterior means) for the Northern, Middle, and Southern Western Ghats regions are 0.859, 0.949, and 0.873, respectively. The corresponding posterior standard deviations for these regions are 0.096, 0.100, and 0.097, respectively. These shape parameter values indicate a pronounced alignment with an exponential distribution of wet-day rainfall (for which the shape parameter equals one). Consistent with the fluctuating pattern in the annual average of daily wet-day rainfall, the fitted mean lines for each region also demonstrate periodic crests and troughs.
Besides this, a consistent and stable mean rainfall trend is noticeable across the Northern and Middle Western Ghats regions. In the Southern Western Ghats region, the fitted parametric and semiparametric models distinctly reveal a decaying pattern in the annual averages of daily wet-day rainfall. We present two significant insights into the rainfall patterns within these regions: the overarching decade-long shifts and individual region-specific probability rainfall plots. The calculation of the decadal change involves determining the overall rainfall shift and dividing it by the number of decades, resulting in \(\{\mu(2022)-\mu(1901)\}/12.1\), where \(\mu(t)=\lambda(t)^{-1}\left[\psi(\alpha+1)-\psi(1)\right]\) and \(t\) represents the corresponding year. In this equation, \(\alpha\) represents the estimated value of the shape parameter specific to the region, while \(\lambda(t)\) denotes the fitted rate parameter value for the given year \(t\). The calculated decadal shifts in rainfall amount to 0.458 mm, 0.078 mm, and -0.367 mm for the Northern, Middle, and Southern regions, respectively. In Figure 7, the probability rainfall graphs are displayed for three distinct probabilities: 0.3 (red line), 0.5 (blue line), and 0.7 (green line). In agrometeorology, 100\(p\)% probability rainfall means the \((1-p)^{th}\) quantile of the probability distribution of rainfall. These plots hold significant implications in agriculture, as they empower farmers to formulate their harvesting strategies based on the anticipated likelihood of rainfall, allowing them to align their plans with the rainfall patterns to fulfill their specific requirements. Figure 7 showcases the estimated probability rainfall graphs, along with pointwise 95% credible intervals for the estimated rainfall. We derived these intervals from the MCMC samples; they illustrate the uncertainty associated with the estimation process. As a crucial component of our comprehensive analysis, we further discuss the dynamic nature of the annual average rainfall for the Western Ghats region over the past century by exploring the plot of its rate of change. Interpreting this quantity unveils insights into trends, variations, and shifts in mean values over time, offering glimpses into rainfall behavior. A higher magnitude implies swift changes, while a lower one indicates gradual shifts. A positive rate denotes an increasing fitted mean rainfall over time, potentially signaling rising annual average rainfall. Conversely, a negative rate signifies a decreasing fitted mean, indicating a declining pattern and drier conditions. A rate of change near zero indicates a stable fitted mean rainfall, and fluctuations around zero imply short-term variations within a steady range. We compute this quantity by taking the derivative of the fitted mean from our semiparametric model with respect to the time component \(t\). This involves differentiating the cubic B-splines in \(\lambda_{NL}(t)\) from (9), yielding \[\frac{\partial\mu(t)}{\partial t}=-\frac{\psi(\alpha+1)-\psi(1)}{\lambda(t)}\sum_{k=1}^{K}\beta_{k}\frac{\partial B_{k}(t)}{\partial t}, \tag{10}\] where we computed the derivatives of the cubic B-splines using the fda package [37] in R. Figure 8 illustrates the plots depicting the rate of change over the years for each of the three regions (the left panel for the Northern region, the middle panel for the Middle region, and the right panel for the Southern region).

Figure 8: Rate of change in the annual average of daily wet-day rainfall in the monsoon months across the years 1920-2022, given by \(\frac{\partial\mu(t)}{\partial t}\) in (10). The black line represents the zero value.

Figure 7: 30% (red line), 50% (blue line), and 70% (green line) probability rainfall (in mm) with corresponding point-wise 95% credible intervals (ribbons).
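To illustrate how \(\mu(t)\), the decadal shift, and the derivative in (10) can be computed in practice, here is a minimal Python sketch (assuming NumPy and SciPy, with hypothetical knots, coefficients, and shape value in place of the fitted ones; the paper itself used the fda package in R):

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.special import digamma

# Hypothetical fitted quantities: clamped cubic B-spline with K = 12 basis functions.
degree = 3
knots = np.r_[[1901.0] * 4, np.linspace(1910.0, 2012.0, 8), [2022.0] * 4]
K = len(knots) - degree - 1                     # = 12, as in the analysis
beta_hat, alpha_hat = np.full(K, 0.05), 0.87    # illustrative placeholders

log_lam = BSpline(knots, beta_hat, degree)      # log lambda(t), cf. lambda_NL in (9)
mu = lambda t: (digamma(alpha_hat + 1) - digamma(1)) / np.exp(log_lam(t))

decadal_shift = (mu(2022.0) - mu(1901.0)) / 12.1            # cf. Section 8.3

# Rate of change via the chain rule: d mu / d t = -mu(t) * d log(lambda)/dt, cf. (10).
dlog_lam = log_lam.derivative()
dmu = lambda t: -mu(t) * dlog_lam(t)
```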
For the different regions of the Western Ghats mountain range, the data uncover varying trends in rainfall patterns. First, the Northern and Middle regions exhibit more pronounced fluctuations in the rate-of-change graphs than the Southern region. This pattern suggests more rapid variations in rainfall trends in the Northern and Middle regions, while a more stable rainfall pattern is visible for the Southern area. Moreover, the small-scale positive and negative rate-of-change instances are well balanced for the Northern and Middle regions. This pattern implies that, over the past century, changes in rainfall have been relatively symmetric in terms of increase and decrease, with no significant alterations in long-term patterns. In contrast, the Southern region displays a substantial portion of years with graphs below the zero line, signifying a prevalent decreasing trend in rainfall. The rate of change in the mean for the last 30 years shows consistently negative values in the Southern sector, indicating a declining rainfall trend, while the graphs for the other two regions consistently exhibit positive values, indicating an increasing trend in rainfall over the past three decades in those areas. The pointwise 95% credible intervals include the zero line for the Northern and Middle regions; hence, the positive values for recent years are not significant. On the other hand, while the posterior mean rate of change remains negative for the Southern region in general, the credible intervals indicate that the negative values of the rate of change are significant at several timestamps, whereas the positive values are generally insignificant.

## 9 Discussions and Conclusions

The Western Ghats, a formidable mountain range running parallel to the western coast of the Indian subcontinent, have a significant role in shaping precipitation patterns in Southern India. This impact is especially notable during the monsoon season, which is responsible for a substantial portion of the yearly rainfall of the region that is essential for ecosystem vitality and agricultural sustenance. The Western Ghats can be divided into Northern, Middle, and Southern regions. The proposed semiparametric generalized exponential (GE) regression model provides a reasonable fit for the wet-day rainfall data for all three regions. The model allows the marginal distributions to be the popular GE distribution. With its shape and rate parameters, the GE distribution accommodates a wider range of skewness than several other distributions, making it a better choice as a flexible model to incorporate the high positive skewness in the data. Additionally, depending on the shape parameter, the varying nature of the hazard function gives the GE distribution greater flexibility in fitting complex data structures. In the regression arena, semiparametric regression is a powerful statistical method that combines the flexibility of nonparametric models with the interpretability and efficiency of parametric models. The superiority of our proposed model in capturing nonlinearity compared to the corresponding parametric model is depicted in Sections 7 and 8.
On the other hand, the PC prior is a principled distance-based prior that penalizes departure from a base model and is used for specifying priors on parameters for which it is difficult to elicit information directly from expert knowledge. This paper introduces a PC prior for the shape parameter of the GE distribution, with the motivation of driving the GE distribution closer to the characteristics of the exponential distribution, a well-known probability model for classical rainfall modeling. There are several directions for extending this research. In addition to modeling the rate parameter, we can consider treating the shape parameter as a time-dependent variable. Instead of utilizing splines for the rate parameter, an alternative approach could involve employing a Gaussian process prior. Moreover, to ensure the applicability of our comparisons to large datasets, we may explore various approximation techniques like Gaussian Markov random fields [40]. While this paper has primarily focused on the temporal analysis of rainfall data, further enhancements can be made by incorporating spatial components [9]. This extension involves investigating the variability in rainfall patterns across diverse geographical regions or watersheds [54]. Additionally, there is potential for developing a real-time rainfall prediction system, offering timely information for tasks such as flood forecasting, reservoir management, and emergency response, based on the foundation provided by this model. For high-dimensional spatial problems, our model can be implemented as a two-stage model, where the GE parameters are estimated at each spatial location ignoring the spatial structure, and those estimates are then smoothed using a Gaussian process [18]. From the application perspective, we observed a consistent overall trend with periodic fluctuations in the Northern and Middle Western Ghats regions. However, a clear declining trend was evident in the Southern Western Ghats region. This observation is further supported by the decadal analysis of rainfall changes in these three regions, where only the Southern region exhibited a clear and significant negative value, indicating the effects of climate change. This research not only enhances our comprehension of the intricate climatic dynamics within the Western Ghats but also emphasizes the critical role of precise predictive models in anticipating seasonal rainfall variations.

## Data availability statement

The dataset used in this paper can be downloaded (in a gridded data format) from [https://www.imdpune.gov.in/cmpg/Griddata/Rainfall_1_NetCDF.html](https://www.imdpune.gov.in/cmpg/Griddata/Rainfall_1_NetCDF.html).

## Disclosure statement

No potential conflict of interest was reported by the authors.
2309.14419
On the expressivity of embedding quantum kernels
One of the most natural connections between quantum and classical machine learning has been established in the context of kernel methods. Kernel methods rely on kernels, which are inner products of feature vectors living in large feature spaces. Quantum kernels are typically evaluated by explicitly constructing quantum feature states and then taking their inner product, here called embedding quantum kernels. Since classical kernels are usually evaluated without using the feature vectors explicitly, we wonder how expressive embedding quantum kernels are. In this work, we raise the fundamental question: can all quantum kernels be expressed as the inner product of quantum feature states? Our first result is positive: Invoking computational universality, we find that for any kernel function there always exists a corresponding quantum feature map and an embedding quantum kernel. The more operational reading of the question is concerned with efficient constructions, however. In a second part, we formalize the question of universality of efficient embedding quantum kernels. For shift-invariant kernels, we use the technique of random Fourier features to show that they are universal within the broad class of all kernels which allow a variant of efficient Fourier sampling. We then extend this result to a new class of so-called composition kernels, which we show also contains projected quantum kernels introduced in recent works. After proving the universality of embedding quantum kernels for both shift-invariant and composition kernels, we identify the directions towards new, more exotic, and unexplored quantum kernel families, for which it still remains open whether they correspond to efficient embedding quantum kernels.
Elies Gil-Fuster, Jens Eisert, Vedran Dunjko
2023-09-25T18:00:01Z
http://arxiv.org/abs/2309.14419v2
# On the expressivity of embedding quantum kernels

###### Abstract

One of the most natural connections between quantum and classical machine learning has been established in the context of kernel methods. Kernel methods rely on kernels, which are inner products of feature vectors living in large feature spaces. Quantum kernels are typically evaluated by explicitly constructing quantum feature states and then taking their inner product, here called embedding quantum kernels. Since classical kernels are usually evaluated without using the feature vectors explicitly, we wonder how expressive embedding quantum kernels are. In this work, we raise the fundamental question: can all quantum kernels be expressed as the inner product of quantum feature states? Our first result is positive: Invoking computational universality, we find that for any kernel function there always exists a corresponding quantum feature map and an embedding quantum kernel. The more operational reading of the question is concerned with efficient constructions, however. In a second part, we formalize the question of universality of efficient embedding quantum kernels. For shift-invariant kernels, we use the technique of random Fourier features to show that they are universal within the broad class of all kernels which allow a variant of efficient Fourier sampling. We then extend this result to a new class of so-called composition kernels, which we show also contains projected quantum kernels introduced in recent works. After proving the universality of embedding quantum kernels for both shift-invariant and composition kernels, we identify the directions towards new, more exotic, and unexplored quantum kernel families, for which it still remains open whether they correspond to efficient embedding quantum kernels.

## I Introduction

Quantum devices carry the promise of surpassing classical computers in certain computational tasks [1; 2; 3; 4; 5; 6]. With machine learning playing a crucial role in predictive tasks based on training data, it is natural to investigate to what extent quantum computers may assist in tackling _machine learning_ (ML) tasks. Indeed, such tasks are among the potential applications foreseen for near-term and intermediate-term quantum devices [7; 8; 9; 10; 11; 12]. In the evolving field of _quantum machine learning_ (QML), researchers have explored the integration of quantum devices to enhance learning algorithms [13; 14; 15; 16; 17; 18; 19]. The most-studied approach to QML relies on learning models based on _parametrized quantum circuits_ (PQCs) [20], sometimes referred to as quantum neural networks. When considering learning tasks with classical input data, PQCs must embed data into quantum states. This way, PQCs are built from encoding and trainable parts, and real-valued outputs are extracted from measuring certain observables. Since the inception of the field, a strong parallelism has been drawn between PQC-based QML models and _kernel methods_ [14; 15; 21]. Kernel methods, like neural networks, have been used in ML for solving complex learning tasks. Yet, unlike neural networks, kernel methods reach the solution by solving a linear optimization task on a larger feature space, onto which input data is mapped. Consequently, the kernel approach is very well served by our ability to map classical data onto the Hilbert space of quantum states. These maps are called _quantum feature maps_, and they lead to _quantum kernel methods_.
Although kernel methods are more costly to implement than neural networks, they are guaranteed to produce optimal solutions. Much the same way, with quantum kernel methods, we are guaranteed to find better solutions than with other PQC-based models (where "better" means solutions which perform better on the training set; see Ref. [22] for a discussion on when this guarantee is not enough to ensure a learning advantage). For quantum kernel methods, plenty of knowledge is inherited from classical ML, including kernel selection tools [19], optimal solution guarantees [21; 22], generalization bounds [23], and approximation protocols [24; 25; 26]. Nevertheless, there is one large difference between quantum and classical kernel methods, namely one that affects the cornerstone of these techniques: the _kernel function_ (or just _kernel_). Formally, all kernels correspond to the inner product of a pair of feature vectors. Yet, first constructing the feature vector and second evaluating the inner product is often inefficient. Fortunately, many cases are known in which the inner product can be evaluated efficiently, by means other than constructing the feature map explicitly. The Gaussian kernel is a prominent example of this case. This is sometimes called "the kernel trick", and as a result it is often the case that practitioners do not even specify the feature vectors when using kernel methods. In contrast to this, it is fair to say that quantum kernels hardly ever use this trick, with some exceptions [27; 28]. Almost all quantum kernels conceived in the literature are constructed explicitly from a quantum feature map, or _quantum embedding_ [21; 29], as discussed below. Specifically, one considers as quantum kernel \(\kappa\) the inner product \(\kappa(x,x^{\prime})\coloneqq\langle\psi(x),\psi(x^{\prime})\rangle\), where \(x\mapsto\psi(x)\) is a representation of a quantum state, either a state vector or a density operator, and \(\langle\cdot,\cdot\rangle\) is the appropriate inner product. In particular, quantum embeddings map classical data onto the Hilbert space of quantum states, or, said otherwise, onto the Hilbert space of quantum computations. We call _embedding quantum kernels_ (EQKs) the kernels which come from quantum embeddings. This difference between quantum and classical kernels raises some interesting questions, for example: _Are EQKs the whole story for quantum kernel methods? Can all quantum kernels be expressed as EQKs?_ In this manuscript we analyze what families of kernels are already covered by EQKs, see Fig. 1. Our contributions are the following:

1. We show that all kernels can be seen as EQKs, thus proving their universality, when no considerations on efficiency are made.
2. We formalize the question of expressivity of _efficient_ EQKs. We immediately provide a partial answer restricted to _shift-invariant kernels_. We show that efficient EQKs are universal within the class of shift-invariant kernels, and we provide sufficient conditions for an EQK approximation to be produced efficiently in time.
3. We introduce a new class of kernels, called composition kernels, containing also non-shift-invariant kernels. We prove that efficient EQKs are universal in the class of efficient composition kernels, from where we can show that the _projected quantum kernel_ from Ref. [27] can in fact also be realized as an EQK efficiently.

In all, we unveil the universality of EQKs in two important function domains. The rest of this work is organized as follows.
The mathematical background and relevant definitions appear in Section II. Related and prior work is elucidated in Section III. Next, we prove the universality of embedding quantum kernels and formally state Question 1 in Section IV. Our results on shift-invariant kernels are in Section V, and the extension to composition kernels and the projected quantum kernel in Section VI. Finally, Section VII contains a collection of questions left open. A summary of the manuscript and closing remarks constitute Section VIII.

Figure 1: Illustration of the main question of this paper. Embedding Quantum Kernels (EQKs) have the form of an explicit inner product on the Hilbert space of quantum density matrices, which is evaluated using a quantum circuit. The box “Kernel functions” indicates that EQKs correspond to an inner product of feature vectors on a Hilbert space. The box “Efficient Quantum functions” restricts EQKs to functions that can be evaluated using a quantum computer in polynomial time; for instance, these would include preparing a data-dependent state \(\tilde{\rho}(x,x^{\prime})\) and then measuring the expectation value of an observable \(\mathcal{M}\) on the data-dependent state. The box “Efficient Embedding Quantum Kernels” then clearly lives in the intersection of the two other boxes. The question we address here is then whether EQKs do cover the whole intersection. Said otherwise, can every efficient quantum kernel function be expressed as an efficient EQK? Or, on the contrary, do there exist efficient quantum kernels which are not expressible as efficient EQKs?

## II Preliminaries

In this section we fix notation and introduce the necessary bits of mathematics on quantum kernel methods.

### Notation

For a vector \(v=(v_{i})_{i}\in\mathbb{R}^{m}\), we call the \(1\)-norm (or \(\ell_{1}\)-norm) \(\|v\|_{1}=\sum_{i=1}^{m}\lvert v_{i}\rvert\) and the \(2\)-norm \(\|v\|_{2}=\sqrt{\sum_{i=1}^{m}v_{i}^{2}}\). We denote as \(\ell_{1}^{m}\) the set of \(m\)-dimensional unit vectors with respect to the \(1\)-norm, and similarly \(\ell_{2}^{m}\) the set of vectors that are normalized to have a unit \(2\)-norm. For a Hilbert space \(\mathcal{H}\), we use \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\) to denote the inner product on that space. In the case of Euclidean spaces, we drop the subscript. Let \(A,B\in\mathbb{C}^{m\times m}\) be square complex matrices. Then, we call the _Hilbert-Schmidt inner product_ of \(A\) and \(B\) \[\langle A,B\rangle_{\text{HS}}=\operatorname{tr}\big{\{}A^{\dagger}B\big{\}}=\sum_{i,j=1}^{m}A_{i,j}^{*}B_{i,j}. \tag{1}\] For Hermitian matrices, we have \(A=A^{\dagger}\), and so the HS inner product becomes just \[\langle A,B\rangle_{\text{HS}}=\operatorname{tr}\left\{AB\right\}=\sum_{i,j=1}^{m}A_{j,i}B_{i,j}. \tag{2}\] The _Frobenius norm_ \(\|\cdot\|_{F}\) of a matrix \(A\) is the square root of the sum of the squared magnitudes of all its entries, which is also equal to the root of the Hilbert-Schmidt inner product of the matrix with itself, as \[\|A\|_{F}^{2}=\langle A,A\rangle_{\text{HS}}=\sum_{i,j}\lvert A_{i,j}\rvert^{2}. \tag{3}\] In what follows, we call \(\mathcal{X}\subseteq\mathbb{R}^{d}\) a \(d\)-dimensional compact subset of the reals. We reserve \(n\) to denote qubit numbers. As we explain below, we use \(k\) to refer to arbitrary kernel functions, while \(\kappa\) is used exclusively for embedding quantum kernels. When talking about efficient and inefficient approximations, we consider sequences of functions \(\{k_{s}\}_{s\in\mathbb{N}}\).
We refer to \(s\) as the _scale parameter_. Scale parameters can correspond to different qualities of the sequence, as for example the dimension of the input data, or the number of qubits involved in evaluating a function. Efficiency then means at most polynomial scaling in \(s\), and inefficiency means at least exponential scaling in \(s\), hence the name scale parameter.

### Kernel methods

Kernel methods solve ML tasks as linear optimization problems on large feature spaces, sometimes implicitly. The connection between the input data and the feature space comes from the use of a kernel function. In this work we do not busy ourselves with how the solution is found, but rather we focus on our ability to evaluate kernel functions, which are defined as follows.

**Definition 1** (Kernel function).: _A kernel function \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) is a map from pairs of inputs on the reals fulfilling two properties:_

1. _Symmetry under exchange:_ \(k(x,x^{\prime})=k(x^{\prime},x)\) _for every_ \(x,x^{\prime}\in\mathcal{X}\)_._
2. _Positive semi-definite (PSD): for any integer_ \(m\in\mathbb{N}\)_, for any sets_ \(\{x_{1},\ldots,x_{m}\}\subseteq\mathcal{X}\) _and_ \(\{a_{1},\ldots,a_{m}\}\subseteq\mathbb{R}\) _of size_ \(m\)_, it holds_ \[\sum_{i,j=1}^{m}a_{i}a_{j}k(x_{i},x_{j})\geq 0.\] (4) _This is equivalent to saying that the_ Gram matrix _\(K\coloneqq[k(x_{i},x_{j})]_{i,j=1}^{m}\) is positive semi-definite for any_ \(m\) _and any_ \(\{x_{i}\}_{i=1}^{m}\)_._

Other standard definitions exclude the PSD property. In this work we do not study indefinite, non-PSD kernels, although the topic is certainly of interest. The common optimization algorithms used in kernel methods (SVM, KRR) require kernels to be PSD. Even though we said we do not deal with the optimization part in this manuscript, we study only the kernels that would be used in a (Q)ML context. Symmetry and PSD are properties usually linked to inner products, which partly justifies our definition of a Gram matrix as the one built from evaluating the kernel function on pairs of inputs. Indeed, by Mercer's Theorem (detailed in Appendix A), for any kernel function \(k\), there exists a Hilbert space \(\mathcal{H}\) and a _feature map_ \(\phi\colon\mathcal{X}\to\mathcal{H}\) such that, for every pair of inputs \(x,x^{\prime}\in\mathcal{X}\), evaluating the kernel is equivalent to computing the \(\mathcal{H}\)-inner product of pairs of _feature vectors_: \(k(x,x^{\prime})=\langle\phi(x),\phi(x^{\prime})\rangle_{\mathcal{H}}\). We say every kernel \(k\) has an associated feature map \(\phi\), which turns each datum \(x\) into a feature vector \(\phi(x)\), living in a _feature space_ \(\mathcal{H}\). One notable remark is that different kernel functions have different feature maps and feature spaces associated with them. Each learning task comes with a specific data distribution. The ultimate goal is to, given the data distribution, find a map onto a feature space where the problem becomes solvable by a linear model. The _model selection_ challenge for kernel methods is to find a kernel function whose associated feature map and feature space fulfill this linear separation condition. Intuitively, in a classification task, we would want data from the same class to be mapped to the same corner of Hilbert space, and data from different classes to be mapped far away from one another.
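As a small numerical illustration of Definition 1 (a sketch assuming NumPy, not taken from the paper), one can check symmetry and positive semi-definiteness of the Gram matrix of a Gaussian kernel on random inputs:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(20, 3))                      # 20 inputs in R^3

def gaussian_kernel(x, xp, sigma=1.0):
    return np.exp(-np.sum((x - xp) ** 2) / (2.0 * sigma**2))

# Gram matrix K_ij = k(x_i, x_j); Definition 1 requires it to be PSD.
K = np.array([[gaussian_kernel(x, xp) for xp in X] for x in X])
assert np.allclose(K, K.T)                        # symmetry under exchange
print(np.linalg.eigvalsh(K).min())                # all eigenvalues (numerically) >= 0
```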
Our project can then be framed within the _quantum kernel selection_ problem: we want to identify previously unexplored classes of quantum kernels.

### Embedding quantum kernels

In the following, we introduce the relevant concepts in quantum kernel methods. Our presentation largely follows the lines of Ref. [21]. We use the name "embedding quantum kernels" following Ref. [19]; many other works use simply "quantum kernels" when referring to the same concept. Hopefully our motives are clear by now. Learning models based on PQCs are the workhorses in much of today's approaches to QML. With some notable exceptions, these PQC-based models comprise a two-step process:

1. Prepare a data-dependent quantum state. For every \(x\in\mathcal{X}\), produce \(|\phi(x)\rangle=U(x)|0\rangle\) with some data-embedding unitary \(U(x)\).
2. Measure the expected value of a variationally-tunable observable on the data-dependent state. Given a parametrized observable \(\mathcal{M}(\vartheta)\), evaluate \[\langle\mathcal{M}(\vartheta)\rangle_{\phi(x)}=\operatorname{tr}\left\{|\phi(x)\rangle\!\langle\phi(x)|\mathcal{M}(\vartheta)\right\}.\] (5)

For instance, for binary classification, one would then consider labeling functions like \(h(x)=\operatorname{sign}\left(\langle\mathcal{M}(\vartheta)\rangle_{\phi(x)}+b\right)\) and optimize the variational parameters \(\vartheta\) and \(b\). Examples of what \(U(x)\) can look like in practice include _amplitude encoding_ [21], different _data re-uploading_ encoding strategies [17; 30; 31], or the _IQP ansatz_ [14]. In turn, variationally-tunable observables are usually realized as a fixed "easy" observable (like a single Pauli operator), preceded by a few layers of brickwork-like \(2\)-local trainable gates. In our current presentation, we do not restrict to a particular form for \(U(x)\) or \(\mathcal{M}(\vartheta)\), but rather allow for any general form possible, as long as it can be implemented on a quantum computer. This approach can also be understood through the lens of kernel methods. In this case, the feature space is explicitly chosen to be the Hilbert space of Hermitian matrices (of which density matrices are a subset, so also quantum states). Given a data-dependent state preparation \(x\mapsto|\phi(x)\rangle\), one could choose to call it a _quantum feature map_ \(\rho\) and promote it to quantum density operators, or quantum states, as \[\rho\colon\mathcal{X} \to\mathrm{Herm}, \tag{6}\] \[x\mapsto \rho(x)=|\phi(x)\rangle\!\langle\phi(x)|. \tag{7}\] Another fitting name would be _quantum feature state_. In a slightly more general view, we call _data embedding_ any map from classical data onto quantum density matrices of fixed dimension \(\rho\colon\mathcal{X}\to\mathrm{Herm}(2^{n})\) for \(n\)-qubit systems. With this we abandon the need for a unitary gate applied to the \(|0\rangle\) state vector. Together with the Hilbert-Schmidt inner product, quantum feature maps give rise to an important family of kernel functions:

**Definition 2** (Embedding quantum kernel (EQK)).: _Given a data embedding \(x\mapsto\rho(x)\) used to encode classical data \(x\) in a quantum density operator \(\rho(x)\), we call embedding quantum kernel (EQK) \(\kappa_{\rho}\) the Hilbert-Schmidt inner product of pairs of quantum feature vectors:_ \[\kappa_{\rho}\colon\mathcal{X}\times\mathcal{X} \to \mathbb{R}, \tag{8}\] \[(x,x^{\prime}) \mapsto \kappa_{\rho}(x,x^{\prime})\coloneqq\mathrm{tr}\left\{\rho(x)\rho(x^{\prime})\right\}. \tag{9}\] _Fig. 2 illustrates this construction._

Figure 2: Schematic of the different ingredients that form embedding quantum kernels. The data input is mapped onto the “quantum feature space” of _quantum density operators_ via a _quantum embedding_. There the _Embedding Quantum Kernel_ is defined as the _Hilbert-Schmidt inner product_ of pairs of quantum features.
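As a toy illustration of Definition 2 (a sketch assuming NumPy, with a hypothetical single-qubit embedding, not a construction from this paper), note that for pure states \(\operatorname{tr}\{\rho(x)\rho(x^{\prime})\}=|\langle\phi(x)|\phi(x^{\prime})\rangle|^{2}\):

```python
import numpy as np

def feature_state(x):
    """Toy single-qubit quantum feature map |phi(x)> = RY(x)|0>."""
    return np.array([np.cos(x / 2.0), np.sin(x / 2.0)])

def eqk(x, xp):
    """EQK of Definition 2; for pure states tr{rho(x) rho(x')} = |<phi(x)|phi(x')>|^2."""
    return abs(feature_state(x) @ feature_state(xp)) ** 2

# For this embedding the EQK equals cos^2((x - x') / 2), a shift-invariant kernel.
print(eqk(0.3, 1.2), np.cos((0.3 - 1.2) / 2.0) ** 2)
```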
Since \(\kappa_{\rho}\) is defined explicitly as an inner product for any \(\rho\), it follows that it is a PSD, symmetric function. For ease of notation, we will write just \(\kappa\) whenever \(\rho\) is unimportant or clear from context. There are, from the outset, a few good reasons to consider EQKs as QML models:

1. It is possible to construct EQK families that are not classically estimable unless \(\mathsf{BQP}=\mathsf{BPP}\), thus opening the door to quantum advantages [32].
2. A core necessity for successful kernel methods is to map complex data to high-dimensional spaces where feature vectors become linearly separable. The Hilbert space of quantum states thus becomes a prime candidate due to its exponential dimension, together with our ability to estimate inner products efficiently1. Footnote 1: However, high dimensions are related to overfitting, so as usual there is a trade-off.
3. Since EQKs are explicitly defined from the data embeddings, we are free to design embeddings with specific desired properties. And, a priori, other than the shot noise2, EQKs do not add drawbacks to the list of general issues for kernel methods. Footnote 2: It is known that the finite approximation to the expectation values can break the PSD property.

So, EQKs are a well-founded family of quantum kernel functions. Nevertheless, their reliance on a specific data embedding could be a limiting factor _a priori_. In the case of classical computations we are not accustomed to thinking in terms of the Hilbert space of the computation (even though it does exist). There are numerous examples of kernels which have been designed without focusing on the feature map. The most prominent example of a kernel function used in ML is the _radial basis function (RBF) Gaussian kernel_ (or just Gaussian kernel, for short), given by \[k_{\sigma}(x,x^{\prime})=\exp\left(-\frac{\|x-x^{\prime}\|^{2}}{2\sigma^{2}}\right). \tag{10}\] On the one hand, we know the Gaussian kernel is a PSD function by virtue of Bochner's theorem (stated in Section V as Theorem 4), which explains it via the _Fourier transform_ (FT). On the other hand, though, one can prove that the Gaussian kernel corresponds to an inner product in the Hilbert space of all monomials of the components of \(x\), from where it follows that the Gaussian kernel can learn any continuous function on a compact domain [33], provided enough data is given. The feature map corresponding to the Gaussian kernel is infinite-dimensional. This fact alone makes the Gaussian kernel not immediately identifiable as a reasonable EQK, which motivates the question: _can all quantum kernels be expressed as embedding quantum kernels?_ In this section, we have introduced kernel functions, embedding quantum kernels, and the question whether EQKs are all there is to quantum kernels. In the following sections, we first see that EQKs are universal, then we formalize a question about the expressivity of efficient EQKs, and finally we answer the question by proving universality of EQKs in two restricted kernel families.

## III Related work

An introduction to quantum kernel methods can be found in the note [21] and in the review [29].
We refrain from including here a full compendium of all quantum kernel works to date, as good references can already be found in Ref. [12]. Instead, we make an informed selection of papers that bear relation with the object of our study: which quantum kernels are _embedding quantum kernels_ (EQKs).

Kernel methods use different optimization algorithms depending on the task, the most prominent examples being the _support vector machine_ and _kernel ridge regression_. The early years of QML owed much of their activity to the HHL algorithm [34]. The quantum speed-up for linear algebra tasks was leveraged to propose a _quantum support vector machine_ (qSVM) algorithm [35]. The qSVM algorithm counts among the historical first steps of QML. Since qSVM is a quantum application of kernel methods, it is no wonder that the term quantum kernel methods was introduced for concepts around it. This, however, is not what we mean by quantum kernel methods in this work. Instead, we occupy ourselves with kernel methods where the kernel function itself requires a quantum computer to be evaluated, independently of the nature of the optimization algorithm. The optimization step comes only after the kernel function has been evaluated on the training data, so the object of study of qSVM is not the same as in this manuscript. We study the expressivity of a known kind of kernel functions, and not how to speed up the optimization of otherwise classical algorithms.

Among the first references mentioning the evaluation of kernel functions with quantum computers is Ref. [13], where the differences between quantum kernels and qSVM have been made explicit. References [14; 15] have showcased the link between quantum kernels and PQCs and have demonstrated their implementation experimentally. In them, the authors have mentioned the parallels between quantum feature maps and the kernel trick. In particular, Ref. [15] has coined the distinction between an _implicit quantum model_ (quantum kernel method) and an _explicit quantum model_ (PQC model with gradient-based optimization). Implicit models are in this way an analogous name for EQKs, where emphasis is placed on the distinction from other PQC-based models.

Quantum kernels have enjoyed increasing attention since they were used to prove an advantage of QML over classical ML in Ref. [32]. The authors morphed the discrete logarithm problem into a learning task and then used a quantum kernel to solve it efficiently, which cannot be done classically under well-established cryptographic assumptions. In that, the approach taken is similar to that of Ref. [36] (showing a quantum advantage in distribution learning). As such, this has been among the first demonstrations of quantum advantage in ML, albeit in an artificially constructed learning task. Importantly, the quantum kernel used in this work has explicitly been constructed from a quantum embedding, so it is an EQK in the sense of this work.

One important difference between EQKs and other PQC-based approaches is that, for EQKs, the only design choice is the data embedding itself. Ref. [18] has asked how to construct optimal quantum feature maps using measurement theory. The work has proposed constructing embeddings specific to learning tasks, which resonates with the idea that some feature maps are better than others for practically relevant problems. Ref.
[19], where the term EQKs has been introduced for the first time, has presented the possibility of optimizing the feature map variationally, drawing bridges to ideas from _data re-uploading_ [17; 30]. Other than the trainable kernels of Ref. [19], the data re-uploading framework did not fit the established quantum kernel picture up until this point.

In Ref. [22], the differences between explicit models, implicit models, and re-uploading models have been analyzed. On the one hand, the authors have found that the optimality guarantees of kernel methods might not be enough in practice, since they show a learning task in which kernels perform much worse than explicit models when evaluated on data outside the training set. On the other hand, a rewriting algorithm has been devised to convert re-uploading circuits into equivalent encoding-first circuits, so re-uploading models are explicit models. By construction, each explicit model has a quantum kernel associated to it. So, for the first time, using the rewriting via explicit models, Ref. [22] made data re-uploading models fit in the quantum kernel framework. This reference takes the explicit-versus-implicit distinction from [15], which means that it also only considers EQKs when it comes to kernels.

Some examples of different kinds of quantum kernels can be found in Refs. [27; 28], which upon first glance do not resemble the previous proposals. In Ref. [27] a type of kernel function based on the classical shadows protocol has been proposed, under the name of _projected quantum kernel_. These functions have been analyzed in the context of a quantum-classical learning separation, now considering real-world data, as opposed to the results of Ref. [32]. Even though the projected quantum kernel uses an explicit feature map which requires a quantum computer, the feature vectors are only polynomial in size, and they are stored in classical memory. Next, the kernel function is the Gaussian kernel evaluated on pairs of feature vectors, and not their Euclidean or Hilbert-Schmidt inner product. These two differences set the projected quantum kernel apart from the rest.

In turn, the authors of Ref. [28] set out to address a looming issue for EQKs called _exponential kernel concentration_ [37], or _vanishing similarity_, which is tantamount to the _barren plateau problem_ [38] that could arise also for kernels. They show that their new construction, called the "anti-symmetric logarithmic derivative quantum Fisher kernel", does not suffer from the vanishing similarity problem. Here, the classical input data is mapped onto an exponentially large feature space. Given a PQC, classical inputs are mapped to a long array, with as many entries as trainable parameters in the PQC. Each entry in this array is the product of the unitary matrix implemented by the PQC and its derivatives with respect to each variational parameter. Interestingly, this kernel can be seen as the Euclidean inner product of a flattened vector of unitaries, with a metric induced by the initial quantum state. In this way, classical data is not mapped onto the Hilbert space of quantum states, but the inner product used is the same as in regular EQKs. This realization paves the way to expressing these kernels as EQKs.

Recent efforts in de-quantizing PQC-based models via _classical surrogates_ [39] have touched upon quantum kernels. In Refs.
[24; 25; 26], the authors propose using a classical kernel-approximation protocol based on _random Fourier features_ (RFF) to furnish classical learning models capable of approximating the performance of PQC-based architectures. The techniques used in these works are not only very promising for the de-quantization of PQC-based learning models, but also relevant to our discussion. Below, we also use the RFF approach, albeit in quite a different way, as it is not our goal to find classical approximations of quantum functions, but rather quantum approximations of other quantum functions. The goal of the study in Refs. [24; 25; 26] is, given a PQC-based model, to construct a classical kernel model with guarantees that the two are similarly powerful. In this way, the input to the algorithms is a PQC (either encoding-first or data re-uploading), and the output is a classical kernel. Conversely, in our algorithms, the input is a kernel function, and the output is an EQK-based approximation of the same function.

In Ref. [40], we find an earlier study of RFFs in a QML scenario. The authors focus on a more advanced version of RFFs, called _optimized Fourier features_, which involves sampling from a data-dependent distribution. In the classical literature, it was found that sampling from this distribution could be "hard in practice" [41], so the authors of Ref. [40] set out to propose an efficient quantum algorithm for sampling. This way, a quantum algorithm is proposed to speed up a training algorithm for a classical learning architecture, similar to the case of qSVM [35].

## IV The universality of quantum feature maps

In the previous section we saw how we can define quantum kernel functions explicitly from a given data embedding. This section, together with the following two sections, contains the main results of this manuscript. The statements revolve around different notions of efficiency and different classes of kernel functions. In Fig. 3 we provide a small sketch of how our results relate to one another from a zoomed-out perspective. The leitmotif is that we progressively tighten the restrictions we impose when constructing EQK-based approximations of kernels. We find that all kernels can be approximated as EQKs if our only restriction is to use finitely many quantum resources. We then specialize the search to kernels which can be approximated efficiently as EQKs, with a distinction between space efficiency (number of qubits required) and total run-time efficiency. It should also be said that when we talk about time efficiency we always consider "quantum time", so we assume we are always allowed access to a quantum computer. In this way, the analysis from now on departs from the usual quantum-classical separation mindset.

From the outset, two basic results combined, namely Mercer's feature space construction (elaborated in Appendix A.1) together with the universality of quantum circuits, already certify that all kernel functions can be realized as EQKs. If we demanded mathematical equality, Mercer's construction could require infinite-dimensional Hilbert spaces, and thus quantum computers with _infinitely many qubits_. Instead, for practical purposes, from now on we do not talk about "evaluating" functions in one way or another, but rather about "approximating" them to some precision. With this, Theorem 1 confirms that we can always approximate any kernel function as an EQK to arbitrary precision _with finitely many resources_.
For this universality statement, we allow for extra multiplicative and additive factors. Instead of asking for an \(\varepsilon\)-approximation in the sense \(|k(x,x^{\prime})-\operatorname{tr}\{\rho_{n}(x)\rho_{n}(x^{\prime})\}|<\varepsilon\), we consider \(|k(x,x^{\prime})-2^{n}\operatorname{tr}\{\rho_{n}(x)\rho_{n}(x^{\prime})\}+1|<\varepsilon\). These extra factors come from Lemma 3, introduced right below, and they do not represent an obstacle to universality.

**Theorem 1** (Approximate universality of finite-dimensional quantum feature maps).: _Let \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) be a kernel function. Then, for any \(\varepsilon>0\) there exists \(n\in\mathbb{N}\) and a data embedding \(\rho_{n}\) onto the Hilbert space of quantum states of \(n\) qubits such that_

\[|k(x,x^{\prime})-2^{n}\operatorname{tr}\{\rho_{n}(x)\rho_{n}(x^{\prime})\}+1|<\varepsilon \tag{11}\]

_for almost all \(x,x^{\prime}\in\mathcal{X}\)._

The statement that Eq. (11) holds for _almost all_ \(x,x^{\prime}\in\mathcal{X}\) comes from measure theory, and it is a synonym of "except on sets of measure \(0\)", or equivalently "with probability \(1\)". That means that although there might exist individual adversarial instances of \(x,x^{\prime}\in\mathcal{X}\) for which the inequality does not hold, these "bad" instances are sparse enough that the event of drawing them from the relevant probability distribution has associated probability \(0\).

Theorem 1 says every kernel function can be approximated as an EQK up to a multiplicative and an additive factor using finitely many qubits. Before we give the proof of the theorem, we introduce the useful Algorithm 1 to map classical vectors to quantum states, which we can then use to evaluate Euclidean inner products as EQKs. Lemma 2 contains the correctness statement and runtime complexity of Algorithm 1, and Lemma 3 shows the relation between the Euclidean inner product of the encoded real vectors and the Hilbert-Schmidt inner product of the encoding quantum states.

```
Input: A 1-norm unit vector \(r\in\ell_{1}^{d}\).
Output: A quantum state \(\rho_{r}\propto\mathbb{I}+\sum_{i=1}^{d}r_{i}P_{i}\). \(\triangleright\) See Lemma 2.
1: Set \(n=\lceil\log_{4}(d+1)\rceil\).
2: Pad \(r\) with \(0\)s until it has length \(4^{n}-1\).
3: Draw \(i\in\{1,\dots,4^{n}-1\}\) with probability \(|r_{i}|\).
4: Prepare \(\rho_{i}=\frac{1}{2^{n}}\left(\mathbb{I}+\operatorname{sign}(r_{i})P_{i}\right)\).
5: return \(\rho_{i}\).
```
**Algorithm 1** Classical to quantum embedding, C2QE

Notice that the output of Algorithm 1, \(\frac{1}{2^{n}}\left(\mathbb{I}\pm P\right)\), is a single (pure) eigenstate of a Pauli operator \(P\) with eigenvalue \(\pm 1\). Nevertheless, as Line 3 involves drawing an individual index \(i\in\{1,\dots,4^{n}-1\}\), we see Algorithm 1 as a randomized algorithm, which prepares a mixed state as a classical mixture of pure states.

**Lemma 2** (Correctness and runtime of Algorithm 1).: _Let \(r\in\ell_{1}^{d}\subseteq\mathbb{R}^{d}\) be a unit vector with respect to the \(1\)-norm, \(\|r\|_{1}=1\). Take \(n=\lceil\log_{4}(d+1)\rceil\) and pad \(r\) with \(0\)s until it has length \(4^{n}-1\). Let \((P_{i})_{i=1}^{4^{n}-1}\) be the set of all Pauli matrices on \(n\) qubits without the identity. Then Algorithm 1 prepares the following state as a classical mixture_

\[\rho_{(\cdot)}\colon\ell_{1}^{d}\to\operatorname{Herm}(2^{n}), \tag{12}\]
\[r\mapsto\rho_{r}=\frac{\mathbb{I}+\sum_{i=1}^{4^{n}-1}r_{i}P_{i}}{2^{n}}. \tag{13}\]

_The total runtime complexity \(t\) of Algorithm 1 fulfills \(t\in\mathcal{O}(\operatorname{poly}(d))\)._
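As a sanity check, the following numpy sketch builds the expected (averaged) output state of Algorithm 1 deterministically, i.e., the mixture \(\rho_{r}\) of Lemma 2, rather than sampling single Pauli eigenstates, and verifies the inner-product identity stated next in Lemma 3; the explicit \(2^{n}\times 2^{n}\) matrices are only for this small-scale check.

```python
import itertools
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])

def paulis(n):
    """All n-qubit Pauli strings, excluding the all-identity string."""
    out = []
    for s in itertools.product([I, X, Y, Z], repeat=n):
        P = s[0]
        for M in s[1:]:
            P = np.kron(P, M)
        out.append(P)
    return out[1:]  # drop I^{(x) n}, which is the first tuple of the product

def c2qe(r):
    """Expected output state of Algorithm 1: rho_r = (I + sum_i r_i P_i) / 2^n."""
    d = len(r)
    n = int(np.ceil(np.log(d + 1) / np.log(4)))
    r = np.pad(r, (0, 4**n - 1 - d))  # Step 2: pad with zeros
    rho = np.eye(2**n, dtype=complex)
    for ri, P in zip(r, paulis(n)):
        rho += ri * P
    return rho / 2**n, n

rng = np.random.default_rng(0)
r, rp = rng.normal(size=3), rng.normal(size=3)
r, rp = r / np.abs(r).sum(), rp / np.abs(rp).sum()  # 1-norm unit vectors
rho, n = c2qe(r); rhop, _ = c2qe(rp)
lhs = np.dot(r, rp)
rhs = 2**n * np.trace(rho @ rhop).real - 1
assert np.isclose(lhs, rhs)  # the identity of Lemma 3
```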
**Lemma 3** (Euclidean inner products).: _Let \(r,r^{\prime}\in\mathbb{R}^{d}\) be unit vectors with respect to the \(1\)-norm, \(\|r^{(\prime)}\|_{1}=1\). Then, for \(\rho_{r},\rho_{r^{\prime}}\) as produced by Algorithm 1, the following identity holds:_

\[\langle r,r^{\prime}\rangle=2^{n}\operatorname{tr}\{\rho_{r}\rho_{r^{\prime}}\}-1. \tag{14}\]

Proof of Lemmas 2 and 3.: The proofs are presented in Appendix B.

**Remark** (Prefactors).: The \(2^{n}\) multiplicative factor is of no concern, since \(n\in\mathcal{O}(\log(d))\) and we are interested in methods that are allowed to scale polynomially in \(d\). In general, \(\rho_{r}\) will be a mixed quantum state which can be prepared efficiently. Also, the map is injective, but not surjective.

We are now in the position to prove Theorem 1.

Proof of Theorem 1.: We prove this statement directly using a corollary of Mercer's theorem and the universality of quantum computing. First, we invoke Corollary A.3 in Appendix A, which ensures the existence of a finite-dimensional feature map \(\Phi_{m}\colon\mathcal{X}\to\mathbb{R}^{m}\) for which it holds that

\[|k(x,x^{\prime})-\langle\Phi_{m}(x),\Phi_{m}(x^{\prime})\rangle|<\varepsilon. \tag{15}\]

Without loss of generality, we assume \(\|\Phi_{m}(x)\|_{1}=1\) for all \(x\in\mathcal{X}\). Now, we can prepare the quantum state \(\rho_{\Phi_{m}(x)}\), which will use \(\lceil\log_{4}(m+1)\rceil\) many qubits. By preparing two such states, one for \(\Phi_{m}(x)\) and one for \(\Phi_{m}(x^{\prime})\), we can compute their inner product as the Hilbert-Schmidt inner product of the quantum states as in Lemma 3, to get

\[\langle\Phi_{m}(x),\Phi_{m}(x^{\prime})\rangle=2^{n}\operatorname{tr}\{\rho_{\Phi_{m}(x)}\rho_{\Phi_{m}(x^{\prime})}\}-1. \tag{16}\]

For reference, notice that \(\operatorname{tr}\{\rho_{\Phi_{m}(x)}\rho_{\Phi_{m}(x^{\prime})}\}\) could be computed using the SWAP test, to additive precision set by the number of shots. With this, we can efficiently ensure a good approximation to additive precision,

\[|k(x,x^{\prime})-2^{n}\operatorname{tr}\{\rho_{\Phi_{m}(x)}\rho_{\Phi_{m}(x^{\prime})}\}+1|<\varepsilon, \tag{17}\]

for almost every \(x,x^{\prime}\in\mathcal{X}\). This completes the proof.

Notice this is an existence result only: the statement is that there exists an EQK using a finite number of qubits, but it says nothing about how quickly the number of qubits grows for increasingly complicated kernel functions \(k\), or for increased required precision \(\varepsilon>0\). The number of qubits \(n\) will depend on properties of the kernel \(k\) and on the approximation error \(\varepsilon\), and if, for example, the required number of qubits scaled exponentially in some of these quantities, then Theorem 1 would bring no practical application. Similarly, one should also consider the time it would take to find such an EQK approximation, independently of the memory and run-time requirements of preparing the feature vectors and computing their inner product.

Let us take the anti-symmetric logarithmic derivative quantum Fisher kernel of Ref. [28] as an example. Upon first inspection, evaluating that kernel does not seem to rely on the usual combination of quantum feature map and Hilbert-Schmidt inner product. Yet, a feature map can be identified by rewriting some of the variables involved as a vector of exponential length.
That means that the scaling in the number of qubits required to encode the feature vector is at worst polynomial, according to Lemma 3. The normalization requirement from Lemma 3 could still prevent the feature map from being encoded using the construction we propose, but for the sake of illustration let us assume normalization is not a problem. Then, we have found a way of realizing the same kernel as an EQK. In this case, even though the scaling of the qubit number is no more than polynomial, the scaling in the total run-time of the EQK approximation would still be at worst exponential.

The message remains that, although all kernel functions can be realized as EQKs, there could still exist kernel functions which cannot be realized as EQKs _efficiently_. Pointing back to Fig. 1, this explains why we added the word "efficient" to the sets of quantum functions and EQKs. In order to talk about efficiency we need to replace individual functions \(k\) by function sequences \(\{k_{s}\}_{s\in\mathbb{N}}\), where \(s\) is the scale parameter. When we refer to an efficient \(\varepsilon\)-approximation, we refer to an algorithm making use of up to \(\operatorname{poly}(s,1/\varepsilon)\) resources for each \(k_{s}\) in the sequence. We now formally present the question we aim to answer.

**Question 1** (Expressivity of efficient EQKs).: _Let \(\{k_{s}\}_{s\in\mathbb{N}}\) be a sequence of kernel functions, let \(\varepsilon>0\) be a precision parameter, and consider the properties:_

1. Quantum efficiency: _There is an algorithm that takes a specification of_ \(k_{s}\) _as input and produces an_ \(\varepsilon\)_-approximation of_ \(k_{s}\) _with a quantum computer efficiently in_ \(s\) _and_ \(1/\varepsilon\)_._

2. Embedding-quantum efficiency: _There is an algorithm that takes a specification of_ \(k_{s}\) _as input and produces an_ \(\varepsilon\)_-approximation of_ \(k_{s}\) _as an EQK efficiently in_ \(s\) _and_ \(1/\varepsilon\)_._

3. Classical inefficiency: _Any algorithm that takes a specification of_ \(k_{s}\) _as input and produces an_ \(\varepsilon\)_-approximation of_ \(k_{s}\) _with a classical computer must be inefficient in either_ \(s\) _or_ \(1/\varepsilon\)_._

_Then, assuming \(\{k_{s}\}_{s}\) fulfills classical inefficiency, does quantum efficiency imply embedding-quantum efficiency?_

The question above contains a few moving pieces which still need to be made fully precise, for instance: the meaning of the scaling parameter \(s\), the sequence of domains \(\mathcal{X}_{s}\) from which each \(k_{s}\) takes its input, any restrictions on the functions \(k_{s}\), in what form the functions \(k_{s}\) must be specified, the choice of notion of \(\varepsilon\)-approximation, and the choice between space and time efficiency. These are left open on purpose to admit diverse approaches to studying the question.

We could have required a stronger sense of inefficiency, namely that there cannot exist an efficient and uniform construction for \(\varepsilon\)-approximating \(k_{s}\) with classical computers. Instead, we judge it enough to require that, even if such an efficient approximation existed, it would be impossible to find efficiently, conditional on \(\mathsf{BQP}\nsubseteq\mathsf{P/poly}\). We added classical inefficiency because otherwise, in principle, one could have all three: quantum efficiency, embedding-quantum efficiency, and classical efficiency, in which case the result would not be interesting for QML.
An interesting question is then to imagine how classical kernel functions could also be embedding classical kernels, in the sense of data being mapped onto the Hilbert space of classical computation. In later sections, we fix all of these in our answers to Question 1. For instance, \(s\) for us is the dimension of the domains \(\mathcal{X}_{s}\) from which data is taken, the kernel functions can be specified either as black boxes or as descriptions of circuits, we take infinity-norm approximation almost everywhere, and we alternate between qubit-number efficiency and run-time efficiency.

As an aside, Question 1 also invites research into the existence of quantum kernels beyond EQKs, that is, efficient quantum kernels which do not admit efficient EQK-based approximations. It should be said that searching for quantum kernels beyond EQKs is slightly contradictory to the foundational philosophy of QML in its beginnings, which actively sought to express everything in terms of inner products in the Hilbert space of density operators. The foundational works [13; 14; 15] have motivated the use of embedding quantum kernels precisely because the inner product could be taken directly and efficiently. Nevertheless, we point out the possibility of alternative constructions, keeping focus on methods which could still harbour quantum advantages.

## V The universality of efficient shift-invariant embedding quantum kernels

In this section we present our second result: all shift-invariant kernels admit a space-efficient EQK approximation, provided they are smooth enough. We also give sufficient conditions for a constructive time-efficient EQK approximation. We arrive at these results in two steps: we first prove an upper bound on the Hilbert space dimension required for an approximation as an explicit inner product, classically. We next construct an EQK based on this classical approximation.

Shift-invariant kernels have enjoyed significant attention in the ML literature. On the one hand, the Gaussian RBF kernel (arguably the most well-known shift-invariant kernel) has been found useful in a range of data-driven tasks. On the other hand, as we see in this section, shift-invariant kernels are more amenable to analytical study than other classes of kernels. The property of shift invariance, combined with exchange symmetry and PSD, allows for a deep mathematical characterization of functions. Let us first introduce the class of functions of interest:

**Definition 3** (Shift-invariant kernel function).: _A kernel function \(k\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}\) is called shift-invariant if, for every \(x,x^{\prime},\xi\in\mathcal{X}\), it holds that_

\[k(x+\xi,x^{\prime}+\xi)=k(x,x^{\prime}). \tag{18}\]

As is standard, we then write shift-invariant kernels as a function of a single argument \(k(x-x^{\prime})\) (which amounts to taking \(\xi=-x^{\prime}\)). We define \(\Delta\coloneqq x-x^{\prime}\) and then talk about \(k(\Delta)\). One motivation for using shift-invariant kernels, as a more restricted function family, is that they have a few useful properties that ease their analysis.

Figure 3: Venn diagram with set relations of different classes of kernels and quantum kernels. Each of the arrows represents a reduction found in this manuscript; they should be read as "for any element of the first set, there exists an element of the second set which is a good approximation." In the case of Theorem 1, elements are individual functions. In every other case, elements are sequences of kernel functions, for which the notions of efficiency make sense. In summary, we find that efficient _embedding quantum kernels_ (EQKs) can approximate two important classes of kernels: shift-invariant and composition kernels.
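As a quick illustration of Definition 3, the sketch below numerically checks shift invariance of the Gaussian kernel of Eq. (10), and also verifies PSD-ness of a sampled Gram matrix via its eigenvalues; the dimensions and sample sizes are arbitrary choices.

```python
import numpy as np

k = lambda x, xp, sigma=1.0: np.exp(-np.linalg.norm(x - xp) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(4)
x, xp, xi = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
assert np.isclose(k(x + xi, xp + xi), k(x, xp))  # shift invariance, Eq. (18)

# PSD spot check on a random sample of points: all Gram eigenvalues are >= 0
xs = rng.normal(size=(10, 3))
K = np.array([[k(a, b) for b in xs] for a in xs])
assert np.all(np.linalg.eigvalsh(K) >= -1e-12)
```

Of course, such a spot check on finitely many points does not certify PSD-ness over the whole domain, which is precisely the difficulty addressed next.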
Also, it is difficult to decide whether an arbitrary function is PSD, so characterizing general kernel functions is difficult. Conversely, for shift-invariant functions, Bochner's theorem gives a condition that is equivalent to being PSD.

**Theorem 4** (Bochner [44]).: _Let \(k\) be a continuous, even, shift-invariant function on \(\mathbb{R}^{d}\). Then \(k\) is PSD if and only if \(k\) is the Fourier transform (FT) of a non-negative measure \(p\),_

\[k(\Delta)=\int_{\mathbb{R}^{d}}p(\omega)e^{i\langle\omega,\Delta\rangle}\mathrm{d}\omega. \tag{19}\]

_Furthermore, if \(k(0)=1\), then \(p\) is a probability distribution, \(\int_{\mathbb{R}^{d}}p(\omega)\mathrm{d}\omega=1\)._

Bochner's theorem is also a central ingredient in a powerful kernel-approximation algorithm known as _random Fourier features_ (RFF) [42], presented here as Algorithm 2.

```
Input: A PSD, continuous, shift-invariant kernel \(k(x-x^{\prime})\).
Output: A feature map \(z\) so that \(\langle z(x),z(x^{\prime})\rangle\approx k(x-x^{\prime})\).
1: \(p(\omega)\leftarrow\frac{1}{2\pi}\int_{\mathbb{R}^{d}}e^{-i\langle\omega,\Delta\rangle}k(\Delta)\,\mathrm{d}\Delta\)  \(\triangleright\) Inverse FT of \(k\).
2: \(\{\omega_{1},\ldots,\omega_{D/2}\}\sim p^{D/2}\)  \(\triangleright\) Draw \(D/2\) i.i.d. samples from \(p\).
3: \(z(\cdot)\leftarrow\sqrt{2/D}\,\big(\cos\langle\omega_{1},\cdot\rangle,\sin\langle\omega_{1},\cdot\rangle,\ldots,\cos\langle\omega_{D/2},\cdot\rangle,\sin\langle\omega_{D/2},\cdot\rangle\big)^{\top}\).
4: return \(z\)
```
**Algorithm 2** Random Fourier features [42], RFF

The input to Algorithm 2 is any shift-invariant kernel function \(k\), and the output is a feature map \(z\) such that the same kernel can be \(\varepsilon\)-approximated as an explicit inner product. In Step 1, the inverse Fourier transform \(p\) of the kernel \(k\) is produced. In Step 2, \(D/2\) samples \(\omega_{i}\) are drawn i.i.d. according to \(p\). In Step 3, the feature map \(z\) is constructed using the samples \(\omega_{i}\) and the sine and cosine functions. That the inner product of \(z\) gives a good approximation to \(k\) is further elucidated in Appendix A.2.

With RFF, given a kernel function, Algorithm 2 produces a randomized feature map such that the kernel corresponding to the inner product of pairs of such maps is an unbiased estimator of the initial kernel. The vital question is how large the dimension \(D\) has to be in order to ensure an \(\varepsilon\) approximation error, which is the object of study of Theorem 5.
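Before stating the formal guarantee, here is a minimal numpy sketch of Algorithm 2 for the Gaussian kernel of Eq. (10), whose inverse FT is itself a Gaussian, so that Step 2 reduces to drawing normal samples; the values \(D=2000\) and \(\sigma=1\) are arbitrary illustrative choices.

```python
import numpy as np

def rff_features(d, D, sigma, rng):
    """RFF for the Gaussian kernel: its inverse FT is the normal distribution N(0, I/sigma^2)."""
    omegas = rng.normal(scale=1.0 / sigma, size=(D // 2, d))  # Step 2: sample from p
    def z(x):
        phases = omegas @ x
        # Step 3: stack cosines and sines of the sampled frequencies
        return np.sqrt(2.0 / D) * np.concatenate([np.cos(phases), np.sin(phases)])
    return z

rng = np.random.default_rng(1)
d, D, sigma = 3, 2000, 1.0
z = rff_features(d, D, sigma, rng)
x, xp = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - xp) ** 2 / (2 * sigma ** 2))
approx = np.dot(z(x), z(xp))
print(exact, approx)  # close for large D, as quantified by Theorem 5
```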
**Theorem 5** (Random Fourier features, Claim 1 in Ref. [42]).: _Let \(\mathcal{X}\subseteq\mathbb{R}^{d}\) be a compact data domain. Let \(k\) be a continuous shift-invariant kernel acting on \(\mathcal{X}\), fulfilling \(k(0)=1\). Then, for the probabilistic feature map \(z(\cdot)\colon\mathcal{X}\to\mathbb{R}^{D}\) produced by Algorithm 2,_

\[\mathbb{P}\left[\sup_{x,x^{\prime}\in\mathcal{X}}\lvert\langle z(x),z(x^{\prime})\rangle-k(x,x^{\prime})\rvert\geq\varepsilon\right]\leq 2^{8}\left(\frac{\sigma_{p}\operatorname{diam}(\mathcal{X})}{\varepsilon}\right)^{2}\exp\left(-\frac{D\varepsilon^{2}}{8(d+2)}\right) \tag{20}\]

_holds for almost every \(x,x^{\prime}\in\mathcal{X}\). Here \(\operatorname{diam}(\mathcal{X})=\sup_{x,x^{\prime}\in\mathcal{X}}\{\lVert x-x^{\prime}\rVert\}\) is the diameter of \(\mathcal{X}\), and \(\sigma_{p}^{2}\) is the variance of the inverse FT of \(k\) interpreted as a probability distribution,_

\[p(\omega)=\operatorname{FT}^{-1}[k](\omega), \tag{22}\]
\[\sigma_{p}^{2}=\mathbb{E}_{p}\left[\lVert\omega\rVert^{2}\right]. \tag{23}\]

_In particular, it follows that for any constant success probability, there exists an \(\varepsilon\)-approximation of \(k\) as a Euclidean inner product where the feature space dimension \(D\) satisfies_

\[D\in\mathcal{O}\left(\frac{d}{\varepsilon^{2}}\log\frac{\sigma_{p}\operatorname{diam}(\mathcal{X})}{\varepsilon}\right). \tag{24}\]

The RFF construction can fail to produce a good approximation with a certain probability, but the failure probability can be pushed arbitrarily close to \(0\) efficiently. This theorem can be understood as a probabilistic existence result for efficient embedding-quantum approximations of kernel functions, which we present next as our second main contribution.

Figure 4: Conceptual sketch of three constructions to approximate kernel functions as embedding quantum kernels (EQKs). The three parts correspond to different kernel families: (a) general kernels refers to any PSD kernel function, as introduced in Definition 1; (b) shift-invariant kernels are introduced in Definition 3; and (c) composition kernels are a new family that we introduce in Section VI. The boxes refer to routines taken either from the existing literature (Mercer, from Corollary A.3 in Appendix A; and RFF, from Theorem 5, originally in Ref. [42]) or introduced in this manuscript (Algorithm 1; QRFF, as Algorithm 3; RFF_pp, as Algorithm 4; and QRFF_pp, as Algorithm 5). The details for each of the three families are elucidated in the corresponding sections: the universality of EQKs for general kernels is explained in Section IV, with Theorem 1; the universality of efficient EQKs for shift-invariant kernels appears in Section V, studied formally in Corollaries 6 and 8; finally, composition kernels are introduced in Section VI, with Proposition 9 stating their efficient approximation as EQKs, and Proposition 11 confirming that composition kernels contain the so-called projected quantum kernels presented in Refs. [27; 43].

We consider the input dimension \(d\) as our scaling parameter, which plays the role of \(s\). We further take \(\varepsilon\)-approximation to be the supremum of the pointwise difference almost everywhere, which we inherit from Theorem 5. In this result we do not need to fix how the input kernel sequence \(\{k_{s}\}_{s\in\mathbb{N}}\) is specified, but we assume it is specified in a way that allows us to approximate it using a quantum computer. When it comes to which definition of efficiency we use, we consider space efficiency first, since we talk about the required number of dimensions to approximate the kernel as an explicit inner product. Later on we pin down the time complexity as well.

In the following, for clarity of presentation, we decouple the construction of EQKs via RFFs into a two-step process: first produce the (classical) feature map \(z\), and second realize the inner product as an EQK as in Lemma 3. As of now, Algorithm 2 produces a classical approximation via random Fourier features, not an EQK yet. Next we add a smoothness assumption to Theorem 5 to produce Corollary 6.
Further below we introduce Algorithm 3, which takes the feature map \(z\) from Algorithm 2 and produces an EQK from it using Algorithm 1.

**Corollary 6** (Smooth shift-invariant kernels).: _Let \(\mathcal{X}_{d}\subseteq[-R,R]^{d}\) be a compact domain. For \(\varepsilon>0\), let \(k_{d}\) be a continuous shift-invariant kernel function. Assume the kernel fulfills \(k_{d}(0)=1\) and it has bounded second derivatives at the origin, \(|\partial_{i}^{2}k_{d}(0)|\leq B\). Then Algorithm 2 produces an \(\varepsilon\)-approximation of \(k_{d}\) as an explicit inner product. In particular, the scaling of the required dimension \(D\) of the (probabilistic) feature map is_

\[D\in\mathcal{O}\left(\frac{d}{\varepsilon^{2}}\log\frac{Rd\sqrt{B}}{\varepsilon}\right). \tag{25}\]

Proof.: From Bochner's theorem (Theorem 4), we know that \(k_{d}\) is the FT of a probability distribution \(p_{d}\). Next, a standard Fourier identity allows us to relate the variance \(\sigma_{d}^{2}\) of \(p_{d}\) and the trace of the Hessian \(H(k_{d})\) at the origin \(\Delta=0\) (see, e.g., Refs. [42; 45]):

\[\sigma_{d}^{2}=-\operatorname{tr}\left\{\left.H(k_{d})\right|_{\Delta=0}\right\}=-\sum_{i=1}^{d}\left.\frac{\partial^{2}k_{d}}{\partial\Delta_{i}^{2}}\right|_{\Delta=0}. \tag{26}\]

Using the assumption that \(|\partial_{i}^{2}k_{d}(0)|\leq B\) for all \(i\in[d]\), this results in \(\sigma_{d}^{2}\leq dB\). In parallel, we can upper bound the diameter of \(\mathcal{X}_{d}\) by the diameter of \([-R,R]^{d}\) to obtain \(\operatorname{diam}(\mathcal{X}_{d})\leq 2R\sqrt{d}\). By plugging the bounds on \(\sigma_{d}^{2}\) and \(\operatorname{diam}(\mathcal{X}_{d})\) into the bound of Theorem 5 we obtain the claimed result

\[D\in\mathcal{O}\left(\frac{d}{\varepsilon^{2}}\log\frac{\sigma_{d}\operatorname{diam}(\mathcal{X}_{d})}{\varepsilon}\right)\in\mathcal{O}\left(\frac{d}{\varepsilon^{2}}\log\frac{Rd\sqrt{B}}{\varepsilon}\right). \tag{27}\]

In Appendix C, we provide Corollary C.1, which fixes the number of bits of precision required to achieve a close approximation to the second derivative using finite difference methods.

Indeed, Corollary 6 ensures that the Hilbert-space dimension required to \(\varepsilon\)-approximate any shift-invariant kernel fulfilling mild smoothness conditions scales at most polynomially in the relevant scale parameters, if \(R\) and \(B\) are considered constant. This does not yet provide an answer to Question 1, as the result only talks about the existence of a space-efficient approximation, and not about the complexity of finding such an approximation only from the specification of each kernel in the sequence \(\{k_{d}\}_{d}\). Noteworthy is that so far we have not made any assumptions on the complexity of finding the inverse FT of \(k_{d}\), nor on the complexity of sampling from it.

Before going forward, one could enquire whether the upper bound from Corollary 6 could be forced to scale exponentially in \(d\). In this direction, one would, e.g., consider cases where \(R\) and \(B\) are not fixed, but rather also depend on \(d\). But, since both \(R\) and \(B\) appear inside a logarithm, in order to achieve a scaling exponential in \(d\) overall, we would need to let either \(R\) or \(B\) grow doubly-exponentially. While that remains a possibility, we point out that in such a case we would easily lose the ability to \(\varepsilon\)-approximate each of the kernels \(k_{d}\) quantum-efficiently to begin with, so we judge these scenarios as less relevant.
And even then, it should be noted that Theorem 5 offers only an upper bound on the required dimension \(D\). In order to discuss whether \(D\) can be forced to scale exponentially, we would also need a lower bound. The same reasoning applies to \(\varepsilon\).

Notice that in Corollary 6 we talk about the required feature dimension, not the number of qubits. Indeed, we can encode the feature vectors (which are nicely normalized) onto quantum states, as presented in Algorithm 3, which we call _quantum random Fourier features_ (QRFF). What QRFF does is first obtain the probabilistic map \(z\) from RFF, and then encode it into quantum states and take their inner product as in Lemma 3. By construction, the feature maps produced by RFF are unit vectors with respect to the \(2\)-norm, \(\|z(x)\|_{2}=1\) for any \(x\in\mathcal{X}\). For Algorithm 1 we require unit vectors with respect to the \(1\)-norm, so we need to renormalize the vectors, and this introduces another multiplicative factor, which now depends on the input vectors:

**Lemma 7** (Inner product normalization).: _Let \(r,r^{\prime}\in\ell_{2}^{d}\) be \(2\)-norm unit vectors, \(\|r\|_{2}=\|r^{\prime}\|_{2}=1\). Then, the identity_

\[\langle r,r^{\prime}\rangle=\|r\|_{1}\|r^{\prime}\|_{1}\left(2^{n}\operatorname{tr}\left\{\rho_{\tilde{r}}\rho_{\tilde{r}^{\prime}}\right\}-1\right) \tag{28}\]

_holds, where \(\tilde{r}^{(\prime)}=r^{(\prime)}/\|r^{(\prime)}\|_{1}\in\ell_{1}^{d}\) corresponds to renormalizing with respect to the \(1\)-norm. Here \(\rho_{\tilde{r}}\) refers to encoding \(\tilde{r}\) onto a quantum state using Algorithm 1._

Proof.: The proof is given in Appendix B.

**Remark** (Bounded pre-factors). The re-normalization is of no concern, since the fact that \(r,r^{\prime}\) are \(2\)-norm unit vectors implies that their \(1\)-norms are bounded, \(\|r\|_{1}\in[1,\sqrt{d}]\). This explains the extra factor \(g(x)g(x^{\prime})\) appearing in Algorithm 3, where we have

\[g(x)=\|z(x)\|_{1} \tag{29}\]
\[=\sqrt{\frac{2}{D}}\sum_{i=1}^{D/2}\lvert\cos\langle\omega_{i},x\rangle\rvert+\lvert\sin\langle\omega_{i},x\rangle\rvert, \tag{30}\]

and \(g(x)\in[1,\sqrt{D}]\), so \(g(x)g(x^{\prime})\in[1,D]\) for all \(x,x^{\prime}\in\mathcal{X}\).

```
Input: A PSD, continuous, shift-invariant kernel \(k(x-x^{\prime})\).
Output: A quantum feature map \(\rho\) so that \(k(x-x^{\prime})\approx g(x)g(x^{\prime})\left(2^{n}\operatorname{tr}\{\rho(x)\rho(x^{\prime})\}-1\right)\). \(\triangleright\) \(g(x)\) defined in Eq. (29).
1: \(z\leftarrow\texttt{RFF}(k)\)  \(\triangleright\) Apply Algorithm 2.
2: \(\rho(\cdot)\leftarrow\texttt{C2QE}\left(z(\cdot)\right)\)  \(\triangleright\) Apply Algorithm 1.
3: return \(\rho\)
```
**Algorithm 3** Quantum random Fourier features, QRFF

The required number of qubits \(n\) scales logarithmically in the dimension of the feature vector, \(n\in\mathcal{O}(\log(D))\). So the number of qubits \(n\) necessary for \(\varepsilon\)-approximating shift-invariant kernel functions as EQKs has scaling \(n\in\tilde{\mathcal{O}}\left(\log(d/\varepsilon^{2})\right)\), where the tilde hides doubly-logarithmic contributions. With these, we can almost conclude that Corollary 6 results in a positive answer to Question 1 for shift-invariant kernels, provided they are smooth, with smoothness quantified as the magnitude of the second derivative at the origin. We are still missing the complexity of producing samples from the inverse FT of each \(k_{d}\), as in Step 2 of Algorithm 2. The efficiency criterion taken so far is the required number of qubits.
Nevertheless, it is true that for any smooth shift-invariant kernel, there exists an \(\varepsilon\)-approximation as an EQK using at most logarithmically many qubits, following Algorithm 3. The complexity of Algorithm 2 could become arbitrarily large depending on the difficulty of sampling from the distribution \(p_{d}\) corresponding to the inverse FT of \(k_{d}\). In order to ensure embedding-quantum efficiency, we need to add the requirement of efficient sampling from the inverse FT of the kernel. With this, we proceed to state our main result: efficient EQKs are universal within the class of smooth shift-invariant kernels whose inverse FT is efficiently sampleable.

**Corollary 8** (Polynomial run-time).: _Under the assumptions of Corollary 6, let \(p_{d}\) be the inverse FT of the kernel \(k_{d}\). If obtaining samples from \(p_{d}\) can be done in time \(t\in\mathcal{O}(\operatorname{poly}(d))\), then both Algorithms 2 and 3 have total run-time complexity polynomial in \(d\)._

Proof.: For Algorithm 2, Step 1 is only a mathematical definition, Step 2 runs in time linear in \(D\) and polynomial in \(d\) by assumption, and Step 3 also runs in time linear in \(D\) and polynomial in \(d\). Corollary 6 further states that \(D\) is at most essentially linear in \(d\). In total, Algorithm 2 has run-time complexity at most polynomial in \(d\). In turn, the first step of Algorithm 3 is calling Algorithm 2, which we just saw takes time polynomial in \(d\). Step 2 uses Algorithm 1, which also runs in time polynomial in \(d\). With this it is also clear that the total run-time complexity of Algorithm 3 is at most polynomial in \(d\).

With the added condition of producing samples in polynomial time, we obtain a positive answer to Question 1. Namely, we show the existence of time-efficient EQK approximations for any kernel fulfilling the smoothness and efficient-sampling conditions from Corollaries 6 and 8. Notice our result does not even require quantum efficiency of \(k_{d}\), so in fact we have proved something stronger.

One point to address is whether assuming quantum efficiency for evaluating \(k_{d}\) directly implies the ability to sample efficiently from the inverse FT of \(k_{d}\). In the cases where this is true, adding quantum efficiency to the assumptions of Corollary 6 suffices to prove that efficient EQKs are universal within the class of efficient shift-invariant kernels. On the face of it, it is unclear whether, for all reasonable ways of specifying \(k_{d}\), the capacity to efficiently evaluate it using a quantum computer implies an efficient algorithm for sampling from the distribution obtained by the inverse FT. If the task were to sample from \(k_{d}\) itself (in case \(k_{d}\) were a probability distribution), then we know that being able to evaluate \(k_{d}\) does not imply the capacity to efficiently sample from \(k_{d}\)3. Then, it is unclear why sampling from the inverse FT of \(k_{d}\) would be easy in every case, especially, e.g., in the black-box model. Resolving this question is left as an open problem, and we note it is not an entirely new one [47; 48].

Footnote 3: In the black-box model, lower bounds on Grover's search algorithm imply an exponential cost of sampling. Also, NP-hardness of, e.g., sampling from low-temperature Gibbs states of the Ising model [46] provides examples relevant for other input models.
Finally, it should be noted that, although the feature map produced by Algorithm 2 can be stored classically (assuming \(D\in\mathcal{O}(d/\varepsilon^{2})\)), this is not a de-quantization algorithm. Algorithm 2 requires sampling from the inverse FT of the input kernels \(k_{d}\). Reason dictates that, if the kernel is quantum-efficient and classically inefficient to \(\varepsilon\)-approximate, in general sampling from its inverse FT should also be at least classically inefficient. This is what we meant earlier when we said we consider quantum time: even though the intermediate variable \(z(x)\) can be stored classically, producing it requires the use of a quantum computer.

In this section, we have shown that smooth shift-invariant kernels admit space-efficient EQK approximations. We have also given sufficient conditions for the same result to hold for embedding-quantum efficiency in total runtime. In the next section, we see how the same ideas extend to another class of kernel functions, beyond shift-invariant ones.

## VI Composition kernel and projected quantum kernel

In this section we introduce a new class of quantum kernels which can still be turned into EQKs using another variant of Algorithm 2. We show that the new class also admits time-efficient approximations as EQKs, and that the projected quantum kernel from Ref. [27] belongs to this class. Again, we separate the EQK-based approximation into two steps: first we propose a variant of RFF producing a classical feature map, and second we construct an EQK that evaluates the inner product of pairs of features.

First we introduce the new class. Consider the usual Gaussian kernel with parameter \(\sigma>0\) defined as

\[k(x,x^{\prime})=e^{-\frac{\|x-x^{\prime}\|^{2}}{2\sigma^{2}}}, \tag{31}\]

only now we allow for some pre-processing of the inputs \(x\mapsto f(x)\), resulting in

\[k_{f}(x,x^{\prime})=e^{-\frac{\|f(x)-f(x^{\prime})\|^{2}}{2\sigma^{2}}}. \tag{32}\]

The introduction of \(f\) breaks shift invariance in general, hence the need to specify both arguments independently, \(x,x^{\prime}\in\mathcal{X}\). Since \(k_{f}\) is a PSD kernel for any function \(f\), we refer to such constructions as _composition kernels_. Next we propose a generalization of Algorithm 2 that also works for composition kernels, as Algorithm 4, which we call simply _random Fourier features with pre-processing_ (RFF_pp).

```
Input: A composition kernel \(k_{f}(x,x^{\prime})\), see Eq. (32).
Output: A feature map \(z_{f}\) so that \(k_{f}(x,x^{\prime})\approx\langle z_{f}(x),z_{f}(x^{\prime})\rangle\).
1: \(z\leftarrow\texttt{RFF}(k\colon f(\mathcal{X})\to\mathbb{R})\)  \(\triangleright\) Apply Algorithm 2.
2: \(z_{f}(\cdot)\leftarrow z(f(\cdot))\)  \(\triangleright\) Apply the pre-processing function.
3: return \(z_{f}\)
```
**Algorithm 4** RFF with pre-processing, RFF_pp

Since the core feature of Algorithm 4 is to call Algorithm 2, we inherit the \(\varepsilon\)-approximation guarantee. Notice that in Line 1 of Algorithm 4 we invoke Algorithm 2, but taking as input domain the range of the pre-processing function, \(f(\mathcal{X})\), instead of the original domain \(\mathcal{X}\). If we assume \(f\) to be continuous, it follows that \(f(\mathcal{X})\) is also compact, which we require for the application of Theorem 5.
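As an illustration of Algorithm 4, the following numpy sketch approximates a composition kernel with a toy pre-processing function \(f=\tanh\); this \(f\) is a placeholder assumption standing in for a (possibly quantum-efficient) pre-processing, and \(D=2000\), \(\sigma=1\) are arbitrary choices. Note that the sampled Gaussian frequencies are applied to \(f(x)\), not to \(x\), exactly as in Line 2 of Algorithm 4. The formal performance guarantee follows next, in Proposition 9.

```python
import numpy as np

def rff_pp(f, D, sigma, d_feat, rng):
    """RFF with pre-processing (Algorithm 4): RFF on the range of f, composed with f."""
    omegas = rng.normal(scale=1.0 / sigma, size=(D // 2, d_feat))
    def z_f(x):
        phases = omegas @ f(x)  # apply the pre-processing before the features
        return np.sqrt(2.0 / D) * np.concatenate([np.cos(phases), np.sin(phases)])
    return z_f

f = lambda x: np.tanh(x)  # toy pre-processing; here f maps R^d to [-1, 1]^d
rng = np.random.default_rng(2)
d, D, sigma = 3, 2000, 1.0
z_f = rff_pp(f, D, sigma, d, rng)
x, xp = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(f(x) - f(xp)) ** 2 / (2 * sigma ** 2))
print(exact, np.dot(z_f(x), z_f(xp)))  # approximately equal
```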
**Proposition 9** (Performance guarantee of Algorithm 4).: _Let \(f\colon\mathcal{X}\to[-B,B]^{g_{1}(d)}\) be a pre-processing function, and let \(k_{f}\) be the Gaussian kernel composed with \(f\), as introduced in Eq. (32). Let finally the parameter of the Gaussian kernel be \(\sigma=g_{2}(d)\). If \(g_{1}(d)\in\mathcal{O}(\operatorname{poly}(d))\) and \(g_{2}(d)\in\Omega(\operatorname{poly}(d)^{-1})\), then Algorithm 4 produces an \(\varepsilon\)-approximation of \(k_{f}(x,x^{\prime})\) as an explicit inner product. In particular, the required dimension \(D\) of the randomized feature map is at most polynomial in the input dimension \(d\) and the inverse error \(1/\varepsilon\):_

\[D\in\mathcal{O}\left(\frac{\operatorname{poly}(d)}{\varepsilon^{2}}\log\left(\frac{dB}{\varepsilon}\right)\right)\subseteq\tilde{\mathcal{O}}\left(\frac{\operatorname{poly}(d)}{\varepsilon^{2}}\right). \tag{33}\]

Proof.: The proof is provided in Appendix D.

Notice that Proposition 9 shares a deep similarity with Corollary 6. Both are direct applications of Theorem 5 to kernels fulfilling different properties. They have nevertheless one stark difference. The assumptions in Proposition 9 are sufficient to guarantee that we can sample efficiently from the inverse FT of the kernels in the sequence \(k_{d}\). This is because the probability distributions involved, \(p_{d}\), are nothing but Gaussian distributions themselves. As shown in Algorithm 4, the pre-processing function \(f\) shows up only after the sampling step. So, unlike Corollary 6, Proposition 9 already represents a direct positive answer to Question 1, relying on Algorithm 5, which follows.

In Algorithm 5, _quantum random Fourier features with pre-processing_ (QRFF_pp), we take the output of Algorithm 4 and convert it into a quantum feature map with Algorithm 1, leading to the corresponding EQK approximation. Again, the multiplicative term \(g(f(x))g(f(x^{\prime}))\) appears from the re-normalization in Lemma 7.

```
Input: A composition kernel \(k_{f}(x,x^{\prime})\), see Eq. (32).
Output: A quantum feature map \(\rho_{f}\) so that \(k_{f}(x,x^{\prime})\approx g(f(x))g(f(x^{\prime}))\left(2^{n}\operatorname{tr}\{\rho_{f}(x)\rho_{f}(x^{\prime})\}-1\right)\). \(\triangleright\) \(g(x)\) defined in Eq. (29).
1: \(z_{f}\leftarrow\texttt{RFF\_pp}(k_{f})\)  \(\triangleright\) Apply Algorithm 4.
2: \(\rho_{f}(\cdot)\leftarrow\texttt{C2QE}\left(z_{f}(\cdot)\right)\)  \(\triangleright\) Apply Algorithm 1.
3: return \(\rho_{f}\)
```
**Algorithm 5** QRFF with pre-processing, QRFF_pp

The scaling in the number of qubits is once more logarithmic in the number of required dimensions given in Proposition 9. For the number of qubits \(n\), the scaling is \(n\in\tilde{\mathcal{O}}\left(\log(d/\varepsilon^{2})\right)\). Moreover, the run-time complexity of Algorithm 5 is polynomial in \(d\) and in the run-time complexity of evaluating \(f\), since now the sampling step corresponds to sampling from a product Gaussian distribution.

For completeness' sake, it would be good also to rule out the possibility of a classically efficient approximation. Although it might be counter-intuitive, in principle there could exist pre-processing functions \(f\) that are hard to evaluate but which result in a composition kernel \(k_{f}\) that is not hard to evaluate. In the following we confirm that there exist pre-processing functions which are hard to evaluate classically and which result in composition kernels that are also hard to evaluate classically.
**Proposition 10** (No efficient classical approximation).: _There exists a function \(f\colon\mathcal{X}\to[0,1]^{d}\) which can be \(\varepsilon\)-approximated quantum efficiently in \(d\) for which the composition kernel \(k_{f}\) cannot be \(\varepsilon\)-approximated classically efficiently in \(d\), with_

\[k_{f}(x,x^{\prime})\coloneqq\exp\left(-\|f(x)-f(x^{\prime})\|^{2}\right). \tag{34}\]

_We take \(\sigma^{2}=1/2\) for simplicity._

Proof.: The proof is presented in Appendix D.

By selecting pre-processing functions that are quantum-efficient but classically inefficient, we reach a class of quantum kernels that are not shift-invariant, but for which the RFF_pp construction of Algorithm 4 still applies. Next, we show that this class of kernels contains the recently introduced _projected quantum kernel_ [27].

Going back to the quantum feature map \(x\mapsto\rho(x)\), the authors of Ref. [27] proposed a mapping from the exponentially-sized \(\rho(x)\) to an array of reduced density matrices \((\rho_{k}(x))_{k=1}^{N}\), for an \(N\)-qubit quantum state4. According to our definitions, we would not call this a quantum feature map, since the data is not mapped onto the Hilbert space of quantum states. Rather, it is natural to think of this as a quantum pre-processing function. A nice property of this alternative mapping is that the feature vectors can be efficiently stored in classical memory, even though obtaining each of the \(\rho_{k}(x)\) matrices can, in general, only be done efficiently using a quantum computer. Once these are stored, the projected quantum kernel \(k^{\text{PQ}}\) is the composition kernel as we introduced it earlier, just setting \(f\) to be the function that computes the entries of all the reduced density matrices:

\[k^{\text{PQ}}(x,x^{\prime})=\exp\left(-\gamma\sum_{k=1}^{N}\lVert\rho_{k}(x)-\rho_{k}(x^{\prime})\rVert_{F}^{2}\right), \tag{35}\]

where \(\gamma>0\) can be safely taken as \(\gamma=1/(2\sigma^{2})\), \(\rho_{k}(x)\) is the reduced density matrix of the \(k^{\text{th}}\) qubit of \(\rho\), and recall \(\lVert\cdot\rVert_{F}\) is the Frobenius norm.

Footnote 4: We distinguish \(N\), the number of qubits used to evaluate the projected quantum kernel, from \(n\), the number of qubits required to approximate it as an EQK.

The number of qubits \(N\) is left as a degree of freedom, but for \(d\)-dimensional input data, reason would say the number of qubits used would be \(N\propto d\), or at most \(N\in\mathcal{O}(\operatorname{poly}(d))\).

The projected quantum kernel enjoys valuable features. On the one hand, it was used in an effort to prove quantum-classical learning separations in the context of data from quantum experiments [27]. On the other hand, the projected quantum kernel is less vulnerable to the exponential kernel concentration problem [37]. The projected quantum kernel is also deeply related to the shadow tomography formalism, and guarantees can be placed on its performance for quantum phase recognition [43], among others.

**Proposition 11** (Projected quantum kernel as an efficient EQK).: _The projected quantum kernel \(k^{PQ}\) fulfills the assumptions of Proposition 9, so it can be efficiently approximated as an EQK with the number of dimensions required \(D\) fulfilling_

\[D\in\tilde{\mathcal{O}}\left(\frac{\operatorname{poly}(d)}{\varepsilon^{2}}\right). \tag{36}\]

Proof.: The proof of this statement is presented in Appendix D.
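For concreteness, here is a small numpy sketch of Eq. (35) for \(N=2\) qubits; the product-state embedding used below is a toy assumption for illustration only, not one of the embeddings of Refs. [27; 43].

```python
import numpy as np

def state(x):
    """Toy 2-qubit embedding: product of single-qubit rotations (illustrative assumption)."""
    q = lambda t: np.array([np.cos(t / 2), np.sin(t / 2)])
    return np.kron(q(x[0]), q(x[1]))

def rdms(psi):
    """Single-qubit reduced density matrices of a 2-qubit pure state."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)  # indices (a, b; a', b')
    return [np.trace(rho, axis1=1, axis2=3),   # trace out qubit 2
            np.trace(rho, axis1=0, axis2=2)]   # trace out qubit 1

def k_pq(x, xp, gamma=1.0):
    """Projected quantum kernel of Eq. (35) for N = 2 qubits."""
    s = sum(np.linalg.norm(a - b, 'fro') ** 2
            for a, b in zip(rdms(state(x)), rdms(state(xp))))
    return np.exp(-gamma * s)

print(k_pq(np.array([0.3, 1.1]), np.array([0.5, 0.2])))
```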
**Remark** (General projected quantum kernels).: In the proof, we have assumed that the projected quantum kernel is taken with respect to the subsystems being each individual qubit. A general definition of the projected quantum kernel would allow for other subsystems, but the requirement that the feature map must be efficient to store classically prevents the number of subsystems from scaling more than polynomially, and the local dimension of each subsystem from scaling more than logarithmically, in which case we still have \(g_{1}(d)\in\mathcal{O}(\operatorname{poly}(d))\).

The projected quantum kernel poses a clear example of a quantum kernel that is not constructed from a feature map onto the Hilbert space of density operators. Its closeness to the classical shadow formalism has earned it attention from the moment it was proposed. Thus, proving that the projected quantum kernel also admits an efficient approximation as an EQK shows that another important kernel family is covered by EQKs.

In order to identify scenarios in which the scaling of \(D\) could become super-polynomial in the relevant parameters \(d,1/\varepsilon\), one would require, for example, \(g_{1}(d)\propto\exp(d)\) or \(g_{2}(d)\propto 1/\exp(\exp(d))\), using the definitions from Proposition 9. Indeed, in those scenarios we would not be able to guarantee the space efficiency of Algorithm 4 while keeping a good \(\varepsilon\)-approximation. Nevertheless, the conditions \(g_{1}(d)\propto\exp(d)\) or \(g_{2}(d)\propto 1/\exp(\exp(d))\) alone would not be sufficient to prove that Algorithm 4 must fail. Said otherwise, this is again because Theorem 5 only gives us an upper bound on the required space complexity, and not a lower bound. At the same time, though, both cases would prevent us from being able to quantum-efficiently \(\varepsilon\)-approximate the projected quantum kernel in the first place, so these scenarios must be ruled out as they break the hypotheses. And even if they were allowed, recall that we can store classical vectors in quantum states using only logarithmically many qubits in the length of the classical vector. That means that a singly-exponential \(g_{1}(d)\) and an inverse doubly-exponential \(g_{2}(d)\) would not be enough to require exponential embedding-quantum complexity in the number of qubits. In order for the number of qubits to be, e.g., \(n\in\mathcal{O}(\exp(d))\), we would require either \(g_{1}(d)\in\mathcal{O}(\exp(\exp(d)))\) or \(g_{2}(d)\in\Omega(1/\exp(\exp(\exp(d))))\), which would prevent our ability to evaluate the projected kernel even further.

With these considerations, we have shown that projected quantum kernels, despite using a different feature map and inner product, can be efficiently realized as EQKs. As a recapitulation, Fig. 3 summarizes all our contributions as a collection of set inclusions in a Venn diagram. With this, to the best of our knowledge, we conclude that all quantum kernels in the literature are either EQKs directly, or can be efficiently realized as EQKs.

## VII Outlook

This manuscript so far offers restricted positive answers to Question 1. In this section, we list some promising directions in which to search for other restricted answers to the same question. Answering Question 1 negatively would involve proving a non-existence result, for which one needs a different set of tools than the ones we have used so far.
In particular, one would need to wield lower bounds for non-existence, which in this corner of the literature appear to be trickier to find.

### Time efficient EQKs

It should be possible to come up with a concrete construction that also ensures time efficiency of Algorithm 2 while applying Corollary 8. For that to happen, it would be enough to give not only the kernel function, but also an efficient algorithm to sample from its inverse FT. We recognize this question as the clearest next step in this new research direction.

### Non-stationary kernels and indefinite kernels

The theory of integral kernel operators and their eigenvalues has already been extensively developed in the context of functional analysis [49; 50; 51], also with applications to random processes and correlation functions [52]. In particular, for \(1\)-dimensional kernel functions, it is known that reasonable smoothness assumptions lead to kernel spectra which are concentrated on few eigenfunctions. This fact alone hints strongly toward the existence of low-dimensional EQK-based approximations for a larger class of kernel functions than the ones we have explored here. While we restricted ourselves to shift-invariant and composition kernels, the theory of eigenvalues of integral kernel operators suggests that similar results could be obtained for any smooth kernel function. That being said, the generalization from \(1\)- to \(d\)-dimensional data might well come with factors which grow exponentially with \(d\), somewhat akin to the well-known _curse of dimensionality_.

Alternatively, instead of arbitrary PSD kernel functions, one might also consider other restricted classes of known kernels. For example, rotation-invariant kernels, of which polynomial kernels are a central example, offer an interesting starting point. Polynomial kernels are not shift-invariant, but they are often derived with an explicit feature map in mind. A potentially interesting research line would be to generalize polynomial kernels in a way that is less straightforward to turn into an EQK.

Our results relied strongly on the _random Fourier features_ (RFF) algorithm of Ref. [42]. The RFF approximation in turn rests on a sampling protocol that owes its success to Bochner's theorem for shift-invariant PSD functions (Theorem 4). Yaglom's theorem (Theorem A.4 in Appendix A) is referred to as the generalization of Bochner's theorem to functions that are not shift-invariant. It would be interesting to see whether there are other, non-shift-invariant kernels for which Yaglom's theorem could be used to furnish an explicit feature map with approximation guarantees akin to those of Theorem 5. The extension from PSD to non-PSD functions could be similar to the extension from shift-invariant to smooth functions we just described. Intuitively, finding an EQK approximation for a kernel is similar to finding its singular value decomposition in an infinite-dimensional function space. In this space, PSD functions result in the "left"- and "right"-hand matrices in the singular value decomposition being equal (up to conjugation). Conversely, for non-PSD functions, the main difference would be the difference between left and right feature maps.
In this direction, we identify four promising research lines: first, generalize our study to the general class of \(1\)-dimensional smooth functions; second, study restricted classes of higher-dimensional kernel functions for which the integral kernel operator retains desirable qualities for approximation; third, find restricted cases where a kernel approximation based on Yaglom's theorem is possible; and fourth, extend similar results to indefinite kernel functions. We comment on these directions in Appendix A. ### The variational QML lens When studying EQK-based approximations to given kernel functions, we have not restricted ourselves to PQC-based functions. Even though the literature on quantum kernel methods up to this point has not dealt exclusively with variational circuits, a large body of work has indeed focused on functions estimated as the expectation value of a fixed observable with respect to some parametrized quantum state. In this sense, combining the results from Ref. [30] and the rules for generating PSD-functions from Ref. [45], one can develop a framework for how to approach quantum kernel functions holistically. We give some first steps explicitly in Appendix E. We expect many discoveries to result from an earnest study of the main recipes for constructing PQC-based quantum kernels beyond the established EQK principles. ## VIII Conclusion In this manuscript we raise and partially answer a fundamental question regarding the expressivity of mainstream quantum kernels. We identify that most quantum kernel approaches are of a restricted type, which we call _embedding quantum kernels_ (EQKs), and we ask whether this family covers all quantum kernel functions. If we leave notions of efficiency aside, we show that all kernel functions can be realized as EQKs, proving their universality. Universality is an important ground fact to establish, as it softly supports the usage of EQKs in practice. Learning whether EQKs are indeed everything we need for QML tasks or whether we should look beyond them is important in the context of model selection. In ML for kernel methods, model selection boils down to choosing a kernel. For ML to be successful, it has long been known that models should be properly aligned with the data; for different learning tasks, different models should be used. This notion is captured by the concept of _inductive bias_[53; 54], which is different for every kernel. Given a learning task with data, the first step should always be to gather information about potential structures in the data in order to select a model with a beneficial inductive bias. It then becomes crucial to have access to a model with good data-alignment. In Ref. [53] some simple EQKs were found to possess inductive biases which are uninteresting for practically relevant data sets. These findings fueled the need to ask whether EQKs can also have interesting inductive biases, which is softly confirmed based on their universality. When searching for new quantum advantages in ML, the more interesting classes of kernels we have access to, the better. We propose Question 1 as a new line of research in _quantum machine learning_ (QML). Characterizing an order relation between general quantum kernel functions and the structured EQK functions when considering computational efficiency could have an important impact in the quest for quantum advantages. 
Indeed, if it were found that all practically relevant kernels can be realized as EQKs, then researchers in quantum kernel methods would not need to look for novel, different models to solve learning tasks with. Nevertheless, for now there is still room for practically interesting kernels beyond efficient EQKs to exist. After raising Question 1, we give an answer for the restricted case of shift-invariant kernels. In Corollary 6 we show that, under reasonable assumptions, all shift-invariant kernels admit a memory-efficient approximation as EQKs. Shift-invariant kernels are also widely used in classical ML, so showing that EQKs are still universal for shift-invariant kernels with efficiency considerations is an important milestone. While shift-invariant kernels enjoy a privileged position in the classical literature, they have not been as instrumental in QML so far. Indeed, many well-known quantum feature maps lead to EQK functions which are not shift-invariant. Also, the milestone work [27] has introduced the so-called projected quantum kernel, which has been used to prove learning separations between classical and quantum ML models. The projected quantum kernel is not shift-invariant. In Proposition 9, we adapt our result for shift-invariant kernels so that it carries over to another class of kernel functions, which we call composition kernels, in which the projected quantum kernel is contained. This way, we demonstrate that even kernel functions that did not arise from an explicit quantum feature map can admit an efficient approximation as EQKs. In all, we have seen that two relevant classes of kernels, namely shift-invariant and composition kernels, always admit an efficient approximation as EQKs. Our results are clear manifestations of the expressive power of EQKs. Nevertheless, many threads remain open to fully characterize the landscape of all efficient quantum kernel functions. To invite researchers to join us in this research line, we list promising approaches as an outlook in Section VII. ## Acknowledgements The authors would like to thank Ryan Sweke, Sofiene Jerbi, and Johannes Jakob Meyer for insightful comments on an earlier version of this draft, and also thank them and Riccardo Molteni for helpful discussions. This work has been in part supported by the BMWK (EniQmA), BMBF (RealistiQ, MUNIQC-Atoms), the Munich Quantum Valley (K-8), the Quantum Flagship (PasQuans2, Millenion), QuantERA (HQCC), the Cluster of Excellence MATH+, the DFG (CRC 183), and the Einstein Foundation (Einstein Research Unit on Quantum Devices). VD acknowledges the support from the Quantum Delta NL program. This work was in part supported by the Dutch Research Council (NWO/OCW), as part of the Quantum Software Consortium program (project number 024.003.037), and is also supported by the European Union under Grant Agreement 101080142, the project EQUALITY.
2309.17110
D-Band 2D MIMO FMCW Radar System Design for Indoor Wireless Sensing
In this article, we present system design of D-band multi-input multi-output (MIMO) frequency-modulated continuous-wave (FMCW) radar for indoor wireless sensing. A uniform rectangular array (URA) of radar elements is used for 2D direction-of-arrival (DOA) estimation. The DOA estimation accuracy of the MIMO radar array in the presence of noise is evaluated using the multiple-signal classification (MUSIC) and the minimum variance distortionless response (MVDR) algorithms. We investigate different scaling scenarios for the radar receiver (RX) SNR and the transmitter (TX) output power with the target distance. The DOA estimation algorithm providing the highest accuracy and shortest simulation time is shown to depend on the size of the radar array. Specifically, for a 64-element array, the MUSIC achieves lower root-mean-square error (RMSE) compared to the MVDR across 1--10\,m indoor distances and 0--30\,dB SNR (e.g., $\rm 0.8^{\circ}$/$\rm 0.3^{\circ}$ versus $\rm 1.0^{\circ}$/$\rm 0.5^{\circ}$ at 10/20\,dB SNR and 5\,m distance) and 0.5x simulation time. For a 16-element array, the two algorithms provide comparable performance, while for a 4-element array, the MVDR outperforms the MUSIC by a large margin (e.g., $\rm 8.3^{\circ}$/$\rm 3.8^{\circ}$ versus $\rm 62.2^{\circ}$/$\rm 48.8^{\circ}$ at 10/20\,dB SNR and 5\,m distance) and 0.8x simulation time. Furthermore, the TX output power requirement of the radar array is investigated in free-space and through-wall wireless sensing scenarios, and is benchmarked by the state-of-the-art D-band on-chip radars.
Subbarao Korlapati, Reza Nikandish
2023-09-29T10:08:34Z
http://arxiv.org/abs/2309.17110v3
# D-Band 2D MIMO FMCW Radar System Design for Indoor Wireless Sensing ###### Abstract In this article, we present system design of D-band multi-input multi-output (MIMO) frequency-modulated continuous-wave (FMCW) radar for indoor wireless sensing. A uniform rectangular array (URA) of radar elements is used for 2D direction-of-arrival (DOA) estimation. The DOA estimation accuracy of the MIMO radar array in the presence of noise is evaluated using the multiple signal classification (MUSIC) and the minimum variance distortionless response (MVDR) algorithms. We investigate different scaling scenarios for the radar receiver (RX) SNR and the transmitter (TX) output power with the target distance. The DOA estimation algorithm providing the highest accuracy and shortest simulation time is shown to depend on the size of the radar array. Specifically, for a 64-element array, the MUSIC achieves lower root-mean-square error (RMSE) compared to the MVDR across 1-10 m indoor distances and 0-30 dB SNR (e.g., 0.8\({}^{\circ}\)/0.3\({}^{\circ}\) versus 1.0\({}^{\circ}\)/0.5\({}^{\circ}\) at 10/20 dB SNR and 5 m distance) and 0.5x simulation time. For a 16-element array, the two algorithms provide comparable performance, while for a 4-element array, the MVDR outperforms the MUSIC by a large margin (e.g., 8.3\({}^{\circ}\)/3.8\({}^{\circ}\) versus 62.2\({}^{\circ}\)/48.8\({}^{\circ}\) at 10/20 dB SNR and 5 m distance) and 0.8x simulation time. Furthermore, the TX output power requirement of the radar array is investigated in free-space and through-wall wireless sensing scenarios, and is benchmarked by the state-of-the-art D-band on-chip radars. D-band, direction-of-arrival (DOA) estimation, mm-wave, minimum variance distortion-less response (MVDR), multiple signal classification (MUSIC), multi-input multi-output (MIMO), radar, uniform rectangular array, wireless sensing. ## I Introduction Millimeter-wave bands above 100 GHz are promising for the next generation of wireless communication, sensing, and imaging systems. The extremely wide bandwidths available in these bands can enable the data rates on the order of 100 Gbps for communication and the millimeter-scale resolutions for sensing and imaging. The D-band, 110-170 GHz frequency range, is a potential candidate for the sixth generation (6G) mobile communications and radar sensing systems [1, 2]. There have recently been several developments to realize wireless communications above 100 GHz, at the system and circuit levels, including experimental verification of the Friis free-space path loss (FSPL) model [1], characterization of the propagation channel [3], modeling and measurement of surface scattering [1], measurements of loss of outdoor and indoor materials [1, 4], and implementation of integrated transceivers [2, 5, 6, 7, 8, 9, 10, 11, 12]. State-of-the-art D-band on-chip radars include 145 GHz frequency-modulated continuous-wave (FMCW) radar in 28-nm CMOS [5], 140 GHz FMCW radar with 25 GHz bandwidth in 130-nm SiGe [6], dual-mode W-band and D-band radar in 130-nm SiGe [8], 150 GHz radar-communication transceiver with 30 GHz bandwidth and 13 dBm output power in 28-nm CMOS [9], and 155 GHz FMCW orthogonal frequency-division multiplexing (OFDM) radar with chirp duration 1-100 \(\mu\)s in 22-nm FD-SOI [10]. 
Wireless sensing with millimeter-wave radars is a key technology for many emerging applications in integrated radar-communication systems, autonomous vehicles, gesture recognition for human-computer interaction, contactless health monitoring, and medical imaging. Multi-input multi-output (MIMO) radars are essential for highly accurate direction finding and target localization [13, 14, 15, 16, 17, 18]. MIMO radars with a sufficiently large number of radar elements can provide high angular resolutions and improve the effective SNR of the received signals. A 2D MIMO array should be used to capture azimuth and elevation angular information [15, 16, 17, 18, 19]. The range information is obtained using the FMCW radars, which in the D-band can feature millimeter range resolutions. The angular and range data can be combined to realize a high-resolution 3D sensing/imaging system. The design and implementation of efficient D-band MIMO radars can be very challenging as a result of the multidisciplinary nature of the problem and the practical issues involved [2, 20, 21]. The number of array elements is determined based on the required angular resolutions and the direction finding accuracy, which is dependent on the direction-of-arrival (DOA) estimation algorithm. Moreover, hardware limitations at the system, antenna, and circuit levels can have significant effects on the performance of the radar array and, therefore, should be accurately included in the design process [2, 22, 23]. In this paper, we present a system design approach for D-band 2D MIMO FMCW radar in indoor wireless sensing. The direction finding performance of the 2D radar array is evaluated using two popular DOA estimation algorithms: multiple signal classification (MUSIC) [24] and minimum variance distortionless response (MVDR) [25]. The paper is structured as follows. In Section II, the 2D MIMO FMCW radar model and operation are presented. In Section III, 2D DOA estimation simulations are presented. In Section IV, the feasibility of D-band indoor wireless sensing is evaluated in two operating scenarios, free-space and through-wall sensing, based on the transmitter (TX) power requirements of the radar elements and performance of the state-of-the-art on-chip radars. Finally, conclusions are presented in Section V. ## II 2D MIMO FMCW Radar ### _MIMO Radar Architecture_ We consider a uniform rectangular array (URA) of \(N_{x}\times N_{y}\) radar elements, as shown in Fig. 1. This array can be constructed as a virtual array comprising \(N_{\rm TX}\) transmitters (TX) and \(N_{\rm RX}\) receivers (RX). The positions of the virtual elements are determined on the basis of all possible combinations of the TX and RX locations. The number of virtual array elements is given by \(N_{x}N_{y}=N_{\rm TX}N_{\rm RX}\). Therefore, there are many options for selecting the number of TX and RX radar components and their distances. We assume that the distances are selected such that the inter-element spacing of the URA is \(\lambda/2\). In D-band frequencies, \(\lambda/2\approx 1\,\rm mm\), which allows large arrays to be realized on small physical areas, e.g., a 100-element array on \(1\,\rm cm\times 1\,\rm cm\) area. ### _Range Resolution_ The FMCW radar transmits a frame of multiple chirp signals, correlates the received target reflection with the transmit signal, and detects the target range using the signal time of flight (ToF). 
The transmit signal for each chirp can be modeled as \[S_{\rm tx}(t)=A_{t}e^{j[\omega_{0}t+\pi St^{2}]}\ \ 0\leq t\leq T_{c}, \tag{1}\] where \(\omega_{0}\) is the chirp start frequency, \(S=B/T_{c}\) is the chirp slope, \(B\) is the chirp bandwidth, and \(T_{c}\) is the chirp duration. This signal is repeated in time intervals \((k-1)T_{c}\leq t\leq kT_{c}\) for \(k=1,2,...,N_{c}\), where \(N_{c}\) is the number of chirps in the frame. The received signal is given by \[S_{\rm rx}(t)=A_{r}e^{j[\omega_{0}(t-\tau)+\pi S(t-\tau)^{2}]}, \tag{2}\] where \(\tau=2R/c\) is the radar signal ToF and \(R\) is the target distance. The received signal is mixed with a scaled version of the transmit signal and is passed through a bandpass filter (BPF) to obtain the IF signal as \[S_{\rm if}(t)=A_{t}e^{j[\omega_{0}\tau-\pi S\tau^{2}+2\pi St\tau]}, \tag{3}\] which is a sinusoidal signal with frequency \(f_{i}=S\tau\). The target range with respect to the radar is estimated by applying an FFT to the IF time-domain signal. The filtered IF signal is sampled using an A/D converter. In practice, a frame of multiple chirps is used for target detection. The received signal samples of each chirp are combined as rows of a matrix, which is processed by a 2D FFT to construct the Range-Doppler map of the radar. This provides opportunities for further processing of the received signal to mitigate impacts of noise, clutter, interference, nonlinearities, and other imperfections. The range resolution of the radar is derived as the minimum distance between two close targets such that their corresponding IF frequencies fall into two consecutive range bins, i.e., \(\Delta f_{i}=1/T_{c}\), where \(\Delta f_{i}=S\Delta\tau\). Therefore, the range resolution is derived as \[\Delta R=\frac{c}{2B}. \tag{4}\] If the radar can use all the bandwidth available in the D-band, 110-170 GHz, it will achieve a theoretical range resolution of 2.5 mm. Due to bandwidth limitations of the radar circuit components, current radars still cannot cover the whole D-band. A performance summary of the state-of-the-art D-band on-chip FMCW radars is shown in Table I. The radar with 30 GHz bandwidth presented in [9] can provide range resolution of 5.0 mm, which is promising for emerging sensing applications. Moreover, the ultra-fast chirp signals, e.g., with a chirp sweep time of 1-100 \(\mu\)s and the slope of 1-8 GHz/\(\mu\)s, can enable real-time high-resolution sensing scenarios. The TX output power is typically 10-13 dBm (10-20 mW) and the RX noise figure (NF) is 8-12 dB for D-band on-chip radar. ### _Angular Resolution_ The array factor for a 2D array can be derived as \[AF(\theta,\phi)=\sum_{n=1}^{N_{y}}\sum_{m=1}^{N_{x}}e^{j[(m-1)\psi_{x}+(n-1) \psi_{y}]} \tag{5}\] \[\psi_{x}=\beta d_{x}\sin\theta\cos\phi \tag{6}\] \[\psi_{y}=\beta d_{y}\sin\theta\sin\phi \tag{7}\] where \(\beta=\frac{2\pi}{\lambda}\) is the wave number, \(d_{x}\) and \(d_{y}\) are the inter-element spacing in the \(x\) and \(y\) directions, and \((\theta,\phi)\) are the elevation and azimuth angles [26]. It is assumed that, for the URA, \(d_{x}=d_{y}=\lambda/2\). Fig. 1: Architecture of 2D MIMO radar comprising \(N_{x}\times N_{y}\) virtual arrays. The virtual array can be realized using a combination of \(N_{\rm TX}\) radar transmitters and \(N_{\rm RX}\) radar receivers, in which the number of virtual elements is \(N_{\rm TX}N_{\rm RX}\). For a D-band uniform rectangular array (URA), \(d_{x}=d_{y}=\lambda/2\approx 1\,\rm mm\). 
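As a quick numerical check of Eqs. (1)-(4), the following Python sketch (ours) computes the chirp slope, the IF frequency for a given target range, and the range resolution; the 30 GHz bandwidth and 10 \(\mu\)s chirp duration follow the radar of [9] in Table I, while all variable names are our own.

```python
C = 3e8        # speed of light, m/s
B = 30e9       # chirp bandwidth, Hz (radar of Ref. [9], Table I)
T_C = 10e-6    # chirp duration, s

slope = B / T_C                          # S = B / T_c, in Hz/s

def if_frequency(target_range_m):
    tof = 2.0 * target_range_m / C       # round-trip ToF, tau = 2R/c
    return slope * tof                   # f_i = S * tau

range_resolution = C / (2.0 * B)         # Delta R = c / (2B), Eq. (4)

print(f"slope: {slope / 1e15:.1f} GHz/us")                    # 3.0 GHz/us
print(f"IF frequency at 5 m: {if_frequency(5.0) / 1e6:.1f} MHz")  # 100.0 MHz
print(f"range resolution: {range_resolution * 1e3:.1f} mm")   # 5.0 mm
```

The 5.0 mm printed at the end reproduces the range resolution quoted above for the 30 GHz-bandwidth radar of [9].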
The surface area of the array can be derived based on the number of elements, their spacing, and an extra margin of \(\lambda/4\) on each side as \(N_{x}\lambda/2\times N_{y}\lambda/2\). The shape of the array factor is dependent on the number of elements. A larger array features higher directivity, narrower beamwidth, and lower side lobe level. In Fig. 2, heatmaps of the magnitude and the normalized value of the array factor versus the azimuth and elevation angles are shown for different URA structures. The maximum array factor is given by \(N_{x}N_{y}\), which is the highest for the \(8\times 8\) array [Fig. 2(a)]. The narrowest beamwidth is also achieved for the \(8\times 8\) array, as shown in Figs. 2(b), (d), (f), indicating its higher directivity and capability to concentrate the radiated energy. In all three structures, the array factor is weakly dependent on the azimuth angle as a result of their symmetric structures. Furthermore, the side lobe levels can be read from the normalized array factor heatmaps, which are the lowest for the \(8\times 8\) array. The angular resolution of the array can be derived as the separation between two consecutive nulls in the array factor versus azimuth or elevation angle. The results, in analogy with the 1D array, can be approximated as \[\Delta\phi \approx\frac{\lambda}{N_{x}d_{x}\cos\phi} \tag{8}\] \[\Delta\theta \approx\frac{\lambda}{N_{y}d_{y}\cos\theta}. \tag{9}\] The angular resolutions are dependent on angles \((\theta,\phi)\). The antenna elements have a limited beamwidth, which can be modeled as \(|\phi|<\phi_{m}\) and \(|\theta|<\theta_{m}\). Also, for the URA, \(d_{x}=d_{y}=\lambda/2\). These lead to the following expressions for the worst-case angular resolutions \[\Delta\phi \approx\frac{2}{N_{x}\cos\phi_{m}} \tag{10}\] \[\Delta\theta \approx\frac{2}{N_{y}\cos\theta_{m}}. \tag{11}\] Therefore, the number of elements in the array should be selected based on the expected angular resolutions and the beamwidth of the antennas. For example, assuming a beamwidth of 45\({}^{\circ}\) in both azimuth and elevation (\(|\phi|<22.5^{\circ}\) and \(|\theta|<22.5^{\circ}\)), the number of rows and columns of the array (\(N_{x}\) and \(N_{y}\)) for an angular resolution of 5\({}^{\circ}\) should be at least 25. In the presence of noise and interference signals, certain signal processing algorithms should be used to extract the signal features and estimate the DOA of the target reflected signal. The accuracy is dependent on the radar array size and structure, the SNR level, and the algorithm. The effective SNR of the MIMO radar under optimal signal combining is improved by a factor equal to the number of virtual array elements compared to the SNR of each radar element \[\mathrm{SNR}_{\mathrm{MIMO}}=\mathrm{SNR}_{\mathrm{SISO}}+10\log_{10}(N_{ \mathrm{TX}}N_{\mathrm{RX}}). \tag{12}\] ## III 2D DOA Estimation ### _MUSIC Algorithm_ The MUSIC algorithm is used to separate multiple signals in the presence of noise [24]. It is assumed that the received signal vector \(\mathbf{x}(t)\) comprises \(K\) complex exponential source signals \(\mathbf{s}(t)\) with unknown frequencies \(\omega_{i}\) and additive white Gaussian noise (AWGN). The received signal vector is related to the source signal vector as \[\mathbf{x}(t)=\mathbf{A}\mathbf{s}(t)+\mathbf{n}(t). 
\tag{13}\] The matrix \(\mathbf{A}\) is defined as \[\mathbf{A}=[\mathbf{a}(\theta_{1},\phi_{1}),...,\mathbf{a}(\theta_{K},\phi_{K} )], \tag{14}\] where \(\mathbf{a}(\theta,\phi)=[a_{mn}(\theta,\phi)]^{T}\) is a vector of length \(N_{x}N_{y}\), \(a_{mn}(\theta,\phi)\) is the element of the array factor in (5), \((\theta_{k},\phi_{k})\) denotes angular coordinates of the k-th target, and \(K\) is the total number of targets. The autocorrelation matrix of \(\mathbf{x}(t)\) is derived as \[\mathbf{R}_{xx}=\mathbf{A}\mathbf{R}_{ss}\mathbf{A}^{H}+\sigma_{n}^{2}\mathbf{ I}, \tag{15}\] where \(\mathbf{R}_{ss}\) is the autocorrelation matrix of \(\mathbf{s}(t)\) (unknown), \(\sigma_{n}^{2}\) is the noise power, and \(\mathbf{I}\) is the identity matrix. The autocorrelation of the received signal is estimated using its time-domain samples \[\mathbf{R}_{xx}=\frac{1}{N_{s}}\sum_{k=1}^{N_{s}}\mathbf{x}[k]\mathbf{x}^{H}[k], \tag{16}\] where \(N_{s}\) is the number of signal samples. It is proved that the matrix \(\mathbf{R}_{xx}\) can be separated into a signal subspace \(\mathbf{U}_{s}\) and a noise subspace \(\mathbf{U}_{n}\). The spatial spectrum function for DOA estimation using the MUSIC algorithm is defined as \[\mathrm{P}_{\mathrm{MUSIC}}(\theta,\phi)=\frac{1}{\mathbf{a}^{H}(\theta,\phi) \mathbf{U}_{n}\mathbf{U}_{n}^{H}\mathbf{a}(\theta,\phi)}. \tag{17}\] The DOAs of the signal sources are estimated as peaks of the spatial spectrum. ### _MVDR Algorithm_ The MVDR algorithm, also known as the Capon beamformer [25], evaluates the power of the received signal in all directions, imposes a gain of unity in the DOA direction, and minimizes the power of signals from other directions. It is shown that the DOA estimation can be realized by detecting the peaks in the MVDR spatial spectrum \[\mathrm{P}_{\mathrm{MVDR}}(\theta,\phi)=\frac{1}{\mathbf{a}^{H}(\theta,\phi) \mathbf{R}_{xx}^{-1}\mathbf{a}(\theta,\phi)}, \tag{18}\] where \(\mathbf{R}_{xx}^{-1}\) is the inverse of the autocorrelation matrix. ### _Scaling of SNR and TX Output Power with Distance_ In theoretical DOA estimation problems, the SNR is considered as an independent design parameter. As a result, the RX SNR is independent of the distance between the radar and the target. However, in the radar sensing application, this requires scaling of the TX output power with distance as \(P_{\mathrm{TX}}\propto d^{4}\). This can lead to an impractical TX output power requirement at long distances. On the other hand, if a fixed TX output power is used, the RX SNR will decrease with the distance as \(\mathrm{SNR}\propto d^{-4}\). A compromise between these two extremes is to assume that both the TX output power and the RX SNR are scaled with the distance as \(P_{\mathrm{TX}}\propto d^{2}\) and \(\mathrm{SNR}\propto d^{-2}\). The three scenarios for the scaling of the SNR and TX output power with distance are presented in Fig. 3. We assume that the TX output power is set to achieve a certain RX SNR at a reference distance \(d_{0}\). This results in the following scaling rules for the RX SNR and TX output power \[\mathrm{SNR}(d) =\mathrm{SNR}(d_{0})-20\log_{10}(d/d_{0}) \tag{19}\] \[P_{\mathrm{TX}}(d) =P_{\mathrm{TX}}(d_{0})+20\log_{10}(d/d_{0}), \tag{20}\] where \(d_{0}=1\,\mathrm{m}\) is used as the reference distance. SNR and \(P_{\mathrm{TX}}\) are, respectively, given in dB and dBm. We evaluate the performance of different 2D array structures in terms of the SNR and distance with the MUSIC and MVDR DOA estimation algorithms. 
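Putting Eqs. (5)-(7) and (13)-(17) together, the following NumPy sketch (ours, not the authors' simulation code) builds URA steering vectors, forms the sample autocorrelation matrix from noisy snapshots of a single source, and locates the peak of the MUSIC spatial spectrum on a coarse angular grid; all array sizes, angles, snapshot counts, and SNR values are illustrative.

```python
import numpy as np

def steering(nx, ny, theta, phi):
    """URA steering vector a(theta, phi) for d_x = d_y = lambda/2 (beta*d = pi)."""
    m, n = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    psi = np.pi * np.sin(theta) * (m * np.cos(phi) + n * np.sin(phi))
    return np.exp(1j * psi).ravel()

def music_spectrum(X, nx, ny, K, grid):
    """X: (nx*ny, N_s) snapshots; returns P_MUSIC of Eq. (17) over the grid."""
    R = X @ X.conj().T / X.shape[1]               # sample estimate, Eq. (16)
    _, U = np.linalg.eigh(R)                      # eigenvalues in ascending order
    Un = U[:, : nx * ny - K]                      # noise subspace
    P = np.empty(len(grid))
    for i, (th, ph) in enumerate(grid):
        a = steering(nx, ny, th, ph)
        P[i] = 1.0 / np.abs(a.conj() @ Un @ Un.conj().T @ a)
    return P

rng = np.random.default_rng(0)
nx = ny = 8                                       # 64-element URA
th0, ph0 = np.deg2rad(20), np.deg2rad(10)         # true target DOA
s = np.exp(1j * 2 * np.pi * rng.random(64))       # 64 unit-power snapshots
noise = np.sqrt(0.05) * (rng.normal(size=(64, 64))
                         + 1j * rng.normal(size=(64, 64)))  # 10 dB SNR
X = np.outer(steering(nx, ny, th0, ph0), s) + noise
grid = [(np.deg2rad(t), np.deg2rad(p))
        for t in range(0, 41, 2) for p in range(-20, 31, 2)]
P = music_spectrum(X, nx, ny, K=1, grid=grid)
print(np.rad2deg(grid[int(np.argmax(P))]))        # peak near (20, 10) degrees
```

Swapping the loop body for \(1/|\mathbf{a}^{H}\mathbf{R}_{xx}^{-1}\mathbf{a}|\) gives the MVDR spectrum of Eq. (18) with the same scaffolding.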
### _DOA Estimation Results_ The two DOA estimation algorithms are evaluated for different 2D radar array architectures. Root-mean-square error (RMSE) is used as the performance metric to compare the algorithms \[\mathrm{RMSE}=\sqrt{\frac{1}{KM}\sum_{j=1}^{M}\sum_{i=1}^{K}\Big{(}(\hat{\phi }_{i}(j)-\phi_{i})^{2}+(\hat{\theta}_{i}(j)-\theta_{i})^{2}\Big{)}}, \tag{21}\] where \((\phi_{i},\theta_{i})\) is the actual DOA of the i-th target, \((\hat{\phi}_{i}(j),\hat{\theta}_{i}(j))\) is the estimated DOA of the i-th target in the j-th sample of the Monte Carlo runs, \(K\) is the number of targets, and \(M\) is the total number of the Monte Carlo runs. We model the noise present in the received signal as a random Gaussian variable with zero mean and a variance determined based on the SNR. Therefore, RMSE should be calculated using multiple Monte Carlo simulations. We evaluated the simulation results for different numbers of runs and selected \(M=100\) to achieve high accuracy with practical simulation times. The number of targets is unknown in practice and should be estimated using other signal processing algorithms. In this work, we assume that there are three targets, \(K=3\), with equal signal amplitudes. Fig. 3: Scaling of RX SNR and TX output power (\(\mathrm{P_{TX}}\)) with distance. In the fixed SNR scenario (blue), \(\mathrm{P_{TX}}\) should be increased with the slope of \(+40\log(d)\), while in the fixed \(\mathrm{P_{TX}}\) scenario (red), SNR is reduced with the slope of \(-40\log(d)\). A compromise can be achieved by scaling the SNR with the slope of \(-20\log(d)\) and \(\mathrm{P_{TX}}\) with the slope of \(+20\log(d)\) (green). Fig. 2: Heatmaps of the array factor magnitude and normalized value for different structures: (a), (b) \(8\times 8\) array, (c), (d) \(4\times 4\) array, (e), (f) \(2\times 2\) array. #### III-D1 64-Element Array A 64-element array can be realized as different rectangular structures. For the \(8\times 8\) square array, the RMSE for the MUSIC and MVDR algorithms in terms of SNR and distance is shown in Fig. 4. The SNR and distance are, respectively, varied in the range of 0-30 dB and 1-10 m, as typical values in indoor wireless sensing scenarios. The MUSIC algorithm outperforms the MVDR algorithm at all SNRs and distances, especially in long-distance and low-SNR conditions. Fig. 4(a) indicates that at short distances, e.g., 1 m, an RMSE lower than \(1^{\circ}\) can be maintained even at very low SNR levels. This is the result of the improved SNR of the MIMO array, which for the 64-element array is derived using (12) as \(10\log_{10}(64)\approx 18\,\mathrm{dB}\). As shown in Fig. 4(b), for SNR of 20 dB, an RMSE lower than \(0.6^{\circ}\) can be achieved across the 1-10 m distances using the MUSIC algorithm. Generally, a higher SNR is required for longer distances to achieve a fine DOA estimation accuracy. A 64-element array can be realized as other structures, e.g., \(16\times 4\), \(32\times 2\), as usually a higher resolution in the azimuth angle compared to the elevation angle is required. Simulation results of these alternative array structures are not shown due to limited space. The DOA estimation results for the \(16\times 4\) array indicate RMSE of about \(1.2-1.6^{\circ}\) for SNR of 0-30 dB at 1 m distance. Compared to the RMSE of the \(8\times 8\) array shown in Fig. 4 (a), the RMSE of the \(16\times 4\) array is higher by 2x-3x. This can be due to the reduced number of array elements in the vertical direction, which leads to a lower elevation resolution. 
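For reference, the RMSE metric of Eq. (21) used above amounts to the following small helper (ours), assuming the angle estimates of the \(M\) Monte Carlo runs are stacked in an array of shape \((M,K,2)\) holding \((\hat{\phi},\hat{\theta})\) pairs:

```python
import numpy as np

def doa_rmse(est, truth):
    """RMSE of Eq. (21): est has shape (M, K, 2), truth has shape (K, 2)."""
    err = est - truth[None, :, :]          # broadcast over the M runs
    return np.sqrt(np.mean(np.sum(err ** 2, axis=2)))
```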
The simulation times for the DOA estimation algorithms for each array structure are presented in Table II. Simulations are performed using an Intel Core i5-7300HQ 2.50 GHz CPU with 8 GB RAM. The simulation time comprises the time required to complete simulations across an SNR of 0-30 dB with a step of 5 dB, a distance of 1-10 m with a step of 1 m, and 100 Monte Carlo runs. For the \(8\times 8\) array, the run time of the MUSIC algorithm is about 0.5x that of the MVDR algorithm, while the RMSE is also lower for the MUSIC algorithm. Fig. 4: RMSE for the MUSIC and MVDR algorithms with \(8\times 8\) radar array. MUSIC outperforms MVDR especially in the low-SNR conditions. #### III-D2 16-Element Array For a 16-element array realized as a \(4\times 4\) structure, the RMSE in terms of the SNR and distance is shown in Fig. 5. Interestingly, the MUSIC and MVDR algorithms provide very close accuracy across the ranges of SNR and distance considered in the simulations. This is in contrast with the \(8\times 8\) array, in which the MUSIC led to higher accuracy compared to the MVDR. The SNR improvement by the MIMO array is \(10\log_{10}(16)\approx 12\,\mathrm{dB}\). Fig. 5(b) indicates that with 20 dB SNR, an RMSE lower than \(2^{\circ}\) can be achieved in the 1-10 m range. Moreover, Table II shows that simulation times for the two algorithms are very close. #### III-D3 4-Element Array A 4-element array intuitively can only provide a coarse DOA estimation. However, such a radar array, with its simple hardware implementation and low power consumption, is a promising candidate for integration in electronic devices for indoor wireless sensing. It can be used in several new applications (e.g., smart home appliances and health monitoring devices) with DOA estimation as an additional add-on functionality. The SNR boost by the MIMO structure is \(10\log_{10}(4)\approx 6\,\mathrm{dB}\) for the 4-element array. The RMSE for a \(2\times 2\) array versus SNR and distance is shown in Fig. 6. It is noted that in this case the MVDR algorithm significantly outperforms the MUSIC algorithm. This is an important result indicating that the MVDR algorithm is more accurate for small radar arrays. Fig. 6(b) shows that with 20 dB SNR, an RMSE lower than \(6^{\circ}\) can be achieved in 1-10 m using the MVDR algorithm. This accuracy can still be sufficient for many applications. The maximum RMSE, under the same SNR and distance conditions, was \(2^{\circ}\) and \(0.6^{\circ}\) for the \(4\times 4\) and \(8\times 8\) array, respectively. A higher SNR of 30 dB can reduce the RMSE to \(0.4^{\circ}\) at 1 m and \(1.3^{\circ}\) at 5 m [Fig. 6(a)]. Furthermore, Table II indicates that the MVDR also features a simulation time of about 0.8x that of the MUSIC algorithm. #### III-D4 Accuracy Versus the Number of Elements The number of array elements is the most important feature of the MIMO radar, and it can be determined based on the required DOA estimation accuracy. In Fig. 7, the RMSE is shown versus the number of array elements. The minimum RMSE achieved using the MUSIC and MVDR algorithms in each condition is presented here. This approach can be used to determine the number of elements of the array required to achieve a desired RMSE for a specific SNR and distance. ## IV Indoor Wireless Sensing Scenarios We consider two indoor wireless sensing scenarios: free-space and through-wall sensing. It is assumed that there is a dominant line-of-sight (LOS) propagation path between the radar array and the target, and multipath effects are negligible. 
The required TX output power per radar element of the array is derived in terms of the SNR for a certain RMSE of the DOA estimation and the target distance. Fig. 5: RMSE for the MUSIC and MVDR algorithms with \(4\times 4\) radar array. MUSIC and MVDR provide very close results. Fig. 6: RMSE for the MUSIC and MVDR algorithms with \(2\times 2\) radar array. MVDR outperforms MUSIC across all SNR and distance ranges. Fig. 7: RMSE versus the number of radar array elements. The minimum RMSE achieved using the MUSIC and MVDR algorithms is presented. ### _Free-Space Sensing_ In free-space, the required TX output power can be derived using the radar equation \[\mathrm{SNR}=\frac{P_{TX}G_{TX}G_{RX}\lambda^{2}\sigma T_{meas}}{(4\pi)^{3}kTFd ^{4}}, \tag{22}\] where \(P_{TX}\) is the TX output power, \(G_{TX}\) and \(G_{RX}\) are the gains of the TX and RX antennas, \(\lambda\) is the signal wavelength, \(\sigma\) is the radar cross section (RCS) of the target, \(T_{meas}\) is the radar measurement time, \(k=1.38\times 10^{-23}\,\mathrm{J/K}\) is the Boltzmann constant, \(T\) is the absolute temperature, \(F\) is the RX noise factor, \(\mathrm{NF_{RX}}=10\log_{10}(\mathrm{F})\), and \(d\) is the target distance. In the following simulations, the radar parameters are considered as \(G_{TX}=G_{RX}=10\,\mathrm{dB}\), \(\lambda=2.1\,\mathrm{mm}\) (at 140 GHz), \(\sigma=100\,\mathrm{cm^{2}}\) as an estimate of the human hand RCS for gesture sensing applications [27], the chirp duration \(T_{c}=10\,\mathrm{\mu s}\), and \(\mathrm{NF_{RX}}=10\,\mathrm{dB}\) (Table I). The radar frame is assumed to comprise 100 chirps and, therefore, the measurement time is \(T_{meas}=100T_{c}=1\,\mathrm{ms}\). In Fig. 8(a), the required TX output power versus SNR and distance is shown for free-space wireless sensing. Power is derived per TX element of the radar array and is independent of the structure of the array. It is noted that the required TX output power is less than 0 dBm in most ranges of SNR and distance and reaches a maximum of about 3 dBm at 30 dB SNR and 10 m distance. Comparing this with the TX output power of the state-of-the-art on-chip radars presented in Table I indicates that the required TX output power in free-space sensing is within the practical power levels of the available D-band radars. Furthermore, if we increase the SNR to 40 dB to achieve a higher accuracy of the DOA estimation, or consider a target with 10x smaller RCS, the required TX output power will increase to 13 dBm at the extreme distance of 10 m, which is still feasible based on the TX output power levels presented in Table I. This investigation reveals the promise of high-accuracy indoor wireless sensing in the free-space scenario. Another practical consideration is the power consumption of the radar array system. The power consumption is generally dependent on the radar system architecture, circuits, and integrated circuit technology used for implementation. The power consumed by the TX elements of the array, as the major source of power consumption, can be estimated as \[P_{\mathrm{DC,TX}}\approx\frac{N_{\mathrm{TX}}P_{\mathrm{TX}}}{\eta_{\mathrm{ TX}}}, \tag{23}\] where \(\eta_{\mathrm{TX}}\) is the TX output power efficiency. For a 64-element radar array comprising 8 TX elements with output power in the range of 13-20 dBm (20-100 mW) and power efficiency of 20%, the power consumption is derived as 0.8-4 W, which is practically feasible provided that a proper solution can be applied for the dissipation of heat power. 
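The free-space link budget of Eq. (22) is straightforward to invert for the required TX power. The sketch below (ours) does this with the parameter values listed above. Note that it computes a single-element budget; the per-element curves of Fig. 8(a) are lower, which is consistent with folding in the MIMO combining gain of Eq. (12) — an assumption on our part about how the figure is generated.

```python
import math

K_B = 1.38e-23   # Boltzmann constant, J/K

def required_tx_power_dbm(snr_db, d_m, g_tx_db=10, g_rx_db=10,
                          wavelength=2.1e-3, rcs=100e-4, t_meas=1e-3,
                          temp_k=290, nf_db=10):
    """Invert Eq. (22): SNR = P*G_TX*G_RX*lam^2*sigma*T_meas/((4pi)^3*k*T*F*d^4)."""
    snr = 10 ** (snr_db / 10)
    gains = 10 ** ((g_tx_db + g_rx_db) / 10)
    noise_factor = 10 ** (nf_db / 10)
    p_tx_w = snr * (4 * math.pi) ** 3 * K_B * temp_k * noise_factor * d_m ** 4 \
             / (gains * wavelength ** 2 * rcs * t_meas)
    return 10 * math.log10(p_tx_w * 1e3)  # watts -> dBm

p_el = required_tx_power_dbm(30, 10)      # ~22.6 dBm for a single element
mimo_gain_db = 10 * math.log10(64)        # ~18 dB for a 64-element array, Eq. (12)
print(p_el, p_el - mimo_gain_db)          # the latter is ~4.5 dBm
```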
### _Through-Wall Sensing_ Through-wall sensing has been successfully realized in low RF bands (e.g., below 8 GHz) [28]. The wireless signal can pass through the indoor walls with moderate loss in these frequency bands. However, the long wavelength of the signal in this band (more than 3 cm) leads to coarse resolutions. The millimeter-wave frequency bands can provide very fine resolutions, but the wireless signal is significantly attenuated by the indoor occlusions. Therefore, through-wall sensing in millimeter-wave bands can require high levels of output power. In this section, we investigate the feasibility of through-wall sensing in the D-band. The loss of three representative indoor materials measured at 140 GHz is presented in Table III[1]. We assume that the penetration loss through the material is dominant and neglect other losses, including the surface scattering and multipath propagation. The loss of wall material should be doubled in the radar link budget to account for the dual transmit and receive paths \[\mathrm{SNR_{through-wall}}=\mathrm{SNR_{free-space}}-2L_{\mathrm{wall}}, \tag{24}\] where SNR and loss values are given in dB. In Fig. 8(b), the required TX output power per radar element versus SNR and the target distance is shown for detection through clear glass with 0.6 cm thickness and 8.6 dB measured loss at 140 GHz. The simulation conditions are the same as in the free-space case. A higher TX output power is required compared to the free-space sensing, to compensate for the wall material loss. The required TX output power is practically feasible (e.g., lower than the maximum power of the state-of-the-art radars presented in Table I) for most of the SNR and distance ranges, except for extreme values at the top right of the figure. In Fig. 8(c), simulation results are presented for sensing through drywall with 14.5 cm thickness and 15.0 dB loss at 140 GHz. The required TX output power is higher than in the previous case due to the greater loss of drywall. It is observed that close to half of the figure towards the bottom-left is within the practical ranges of output power. In Fig. 8(d), simulation results are shown for sensing through a wood door with 3.5 cm thickness and 25.5 dB loss at 140 GHz. In this condition, the required TX output power is within the feasibility range only for a small area around the bottom-left of the figure. ### _Discussion_ A comparison of the required TX output power versus RX SNR for different indoor materials is shown in Fig. 9. The results are shown at two distances, 1 m and 5 m. Fig. 9(a) indicates that the required TX output power, either in free space or through the indoor materials, is very low at 1 m distance. However, in Fig. 9(b), the required TX output power at 5 m distance can become very high and unfeasible in some conditions. We note that the simulations are performed for a specific set of system parameters (e.g., target RCS, radar measurement time, number of array elements) which can be modified, within certain practical limits, to achieve higher performance. We evaluate the results of our investigations with practical considerations to develop insights into system design of 2D MIMO radar for D-band indoor wireless sensing. #### IV-C1 **Number of Array Elements** The effective SNR of the MIMO radar can be improved using more array elements to achieve higher DOA estimation accuracy. 
However, increasing the number of array elements can lead to several practical limitations, including the higher power consumption by the radar circuits, the increased antenna mutual coupling [23], self-interference and spurs in the radar [22], and higher loss in the signal distribution paths [29]. These limitations are all more severe in the D-band compared to lower millimeter-wave bands. Therefore, the number of array elements should be selected on the basis of the limitations mentioned. #### IV-C2 **DOA Estimation** The evaluation of two popular DOA estimation algorithms, MUSIC and MVDR, indicates that the MUSIC outperforms the MVDR for large arrays (e.g., over 16 elements), while the MVDR can provide superior performance for small arrays (e.g., less than 16 elements). In practice, the required computational power, memory, and simulation time can also be important criteria in resource-constrained (e.g., edge IoT sensors) and real-time (e.g., robotics and automotive) applications. Furthermore, the emerging deep-learning-based DOA estimation techniques [30] can provide higher performance and better adaptation to operating conditions than classic DOA estimation algorithms. #### IV-C3 **Target RCS** In the simulations presented, the target RCS was assumed as \(\sigma=100\,\mathrm{cm}^{2}\), as an estimate of the human hand RCS for gesture sensing applications [27]. In practice, the target RCS can exhibit significant variations in indoor wireless sensing scenarios, due to different types of targets (e.g., human hand for gesture sensing or human chest for vital signs monitoring) and their frequently changing orientation with respect to the radar array. This can result in large changes in the radar RX SNR [see (22)] and the TX output power requirements. A practical approach is to design the system for a range of target RCS values. #### IV-C4 **Radar Measurement Time** The radar measurement time, \(T_{meas}\) in (22), has a direct effect on the RX SNR. The measurement time can be increased to achieve a higher RX SNR and thus lower TX output power requirement (e.g., a \(20\,\mathrm{dB}\) gain by increasing from the \(1\,\mathrm{ms}\) used in simulations to \(100\,\mathrm{ms}\)). However, this results in a higher power consumption of the radar, more computational resources to process the received signal samples, and longer processing times. Therefore, the measurement time should be selected based on all these criteria. Fig. 8: Required TX output power per radar element versus SNR and distance in different wireless sensing scenarios: (a) free space, (b) clear glass, (c) drywall, (d) wood door. Loss of indoor materials at \(140\,\mathrm{GHz}\) is extracted from the measurements (Table III) [1]. Fig. 9: A comparison of the required TX output power versus the RX SNR for different indoor materials. (a) \(1\,\mathrm{m}\) distance, (b) \(5\,\mathrm{m}\) distance. #### IV-C5 **TX Output Power Requirement** The required TX output power is determined to achieve a specific RX SNR for a target distance. The TX output power is linearly dependent on the RX SNR through (22). The impact of important radar parameters was discussed. The gain of TX and RX antennas as well as NF of the RX are usually limited by the implementation technology. The radar measurement time can be used as a design parameter within the practical limits. In the through-wall sensing scenario, a higher TX output power is required to compensate for the extra loss of the occluding material. 
In practice, there are higher losses due to wave scattering by the material in the D-band [1]. Therefore, the radar should be able to provide a wide range of TX output power and adapt to operating conditions. ## V Conclusion A design approach of 2D multi-input multi-output (MIMO) frequency-modulated continuous-wave (FMCW) radar arrays for D-band indoor wireless sensing applications was presented. The direction-of-arrival (DOA) estimation in the presence of noise was evaluated using the MUSIC and MVDR algorithms. It was shown that the MUSIC algorithm can provide higher accuracy and faster simulations for large arrays (over 16 elements), while the MVDR algorithm outperforms it for small arrays (less than 16 elements). The power requirement of the radar transmitter was investigated in free-space and through-wall indoor sensing scenarios, and it was shown that the current state-of-the-art D-band on-chip radars can enable indoor wireless sensing in certain conditions.
2309.07303
EXPRESSing Session Types
To celebrate the 30th edition of EXPRESS and the 20th edition of SOS we overview how session types can be expressed in a type theory for the standard $\pi$-calculus by means of a suitable encoding. The encoding allows one to reuse results about the $\pi$-calculus in the context of session-based communications, thus deepening the understanding of sessions and reducing redundancies in their theoretical foundations. Perhaps surprisingly, the encoding has practical implications as well, by enabling refined forms of deadlock analysis as well as allowing session type inference by means of a conventional type inference algorithm.
Ilaria Castellani, Ornela Dardha, Luca Padovani, Davide Sangiorgi
2023-09-13T20:50:33Z
http://arxiv.org/abs/2309.07303v1
# EXPRESSing Session Types ###### Abstract To celebrate the 30th edition of EXPRESS and the 20th edition of SOS we overview how session types can be expressed in a type theory for the standard \(\pi\)-calculus by means of a suitable encoding. The encoding allows one to reuse results about the \(\pi\)-calculus in the context of session-based communications, thus deepening the understanding of sessions and reducing redundancies in their theoretical foundations. Perhaps surprisingly, the encoding has practical implications as well, by enabling refined forms of deadlock analysis as well as allowing session type inference by means of a conventional type inference algorithm. ## 1 Origins of EXPRESS: some personal memories This year marks an important milestone in the history of the EXPRESS/SOS workshop series. Before joining their destinies in 2012, the two workshops EXPRESS and SOS had been running on their own since 1994 and 2004, respectively. Hence, the EXPRESS/SOS'23 workshop in Antwerp will constitute the 30th edition of EXPRESS and the 20th edition of SOS. Two of us (Ilaria Castellani and Davide Sangiorgi) were personally involved in the very first edition of EXPRESS in 1998, and indeed, they may be said to have carried the workshop to the baptismal font, together with Robert de Simone and Catuscia Palamidessi. Let us recall some facts and personal memories. The EXPRESS workshops were originally held as meetings of the European project EXPRESS, a Network of Excellence within the Human Capital and Mobility programme, dedicated to expressiveness issues in Concurrency Theory. This NoE, which lasted from January 1994 till December 1997, gathered researchers from several European countries and was particularly fruitful in supporting young researchers' mobility across different sites. The first three workshops of the NoE were held in Amsterdam (1994), Tarquinia (1995), and Dagstuhl (1996). The fourth and final workshop was held in Santa Margherita Ligure (1997). It was co-chaired by Catuscia Palamidessi and Joachim Parrow, and stood out as a distinctive event, open to external participants and organised as a conference with a call for papers. A few months after this workshop, in the first half of 1998, the co-chairs of the forthcoming CONCUR'98 conference in Nice, Robert de Simone and Davide Sangiorgi, were wondering about endowing CONCUR with a satellite event (such events were still unusual at the time) in order to enhance its attractiveness. Moreover, Davide was sharing offices with Ilaria, who had been the NoE responsible for the site of Sophia Antipolis and was also part of the organising committee of CONCUR'98. It was so, during informal discussions, that the idea of launching EXPRESS as a stand-alone event affiliated with CONCUR was conceived, in order to preserve the heritage of the NoE and give it a continuation. Thus the first edition of EXPRESS, jointly chaired by Catuscia and Ilaria, took place in Nice in 1998, as the first and unique satellite event of CONCUR. However, EXPRESS did not remain a lonely satellite for too long, as other workshops were to join the orbit of CONCUR in the following years (INFINITY, YR-CONCUR, SecCo, TRENDS,...), including SOS in 2004. The workshop EXPRESS'98 turned out to be successful and very well attended. Since then, EXPRESS has been treading its path as a regular satellite workshop of CONCUR, with a new pair of co-chairs every year, each co-chair serving two editions in a row. 
The workshop, which is traditionally held on the Monday preceding CONCUR, has always attracted good quality submissions and has maintained a faithful audience over the years. Coincidentally, this double anniversary of EXPRESS/SOS falls in the 30th anniversary of Kohei Honda's first paper on session types [26]. For this reason, we propose an overview of a particular expressiveness issue, namely the addition of session types to process calculi for mobility such as the \(\pi\)-calculus. ## 2 Session types and their expressiveness: introduction Expressiveness is a key topic in the design and implementation of programming languages and models. The issue is particularly relevant in the case of formalisms for parallel and distributed systems, due to the breadth and variety of constructs that have been proposed. Most importantly, the study of expressiveness has practical applications. If the behaviours that can be programmed by means of a certain formalism \(L_{1}\) can also be programmed using another formalism \(L_{2}\), then methods and concepts developed for the latter language (e.g., reasoning and implementation techniques) may be transferred onto the former one that, in turn, may be more convenient to use from a programming viewpoint. An important instance is the case when \(L_{2}\) is, syntactically, a subset of \(L_{1}\). Indeed the quest for a "minimal" formalism is central in the work on expressiveness. This paper is an overview of a particular expressiveness issue, namely the addition of session types onto calculi for mobility such as the \(\pi\)-calculus. We will review the encoding of binary session types onto the standard \(\pi\)-calculus [14, 15], based on an observation of Kobayashi [33]. The key idea of the encoding is to represent a sequence of communications within a session as a chain of communications on linear channels (channels that are meant to be used exactly once) through the use of explicit continuations, a technique that resembles the modelling of communication patterns in the actor model [25]. We discuss extensions of the encoding to subtyping, polymorphism and higher-order communication as well as multiparty session types. Finally, we review two applications of the encoding to the problems of deadlock analysis and of session type inference. _Session types_, initially proposed in [26, 51, 27], describe _sessions_, i.e., interaction protocols in distributed systems. While originally designed for process calculi, they have later been integrated also in other paradigms, including (multi-threaded) functional programming [54, 44, 37, 40, 20, 35], component-based systems [52], object-oriented languages [18, 19, 7], languages for Web Services and Contracts [9, 38]. They have also been studied in logical-based type systems [5, 55, 6, 13, 36]. Session types allow one to describe the sequences of input and output operations that the participants of a session are supposed to follow, explicitly indicating the types of messages being transmitted. This structured _sequentiality_ of operations makes session types suitable to model protocols. Central (type and term) constructs in session types are also branch and select, the former being the offering of a set of alternatives and the latter being the selection of one of the possible options at hand. Session types were first introduced in a variant of the \(\pi\)-calculus to describe binary interactions. Subsequently, they have been extended to _multiparty sessions_[28], where several participants interact with each other. 
In the rest of this paper, we will focus on _binary session types_. Session types guarantee privacy and communication safety within a session. Privacy means that session channels are known and used only by the participants involved in the session. Communication safety means that interaction within a session will proceed without mismatches of direction and of message type. To achieve this, a session channel is split into two endpoints, each of which is owned by one of the participants. These endpoints are used according to dual behaviours (and thus have dual types), namely one participant sends what the other one is expecting to receive and vice versa. Indeed, _duality_ is a key concept in the theory of session types. To better understand session types and the notion of duality, let us consider a simple example: the _equality test_. A _server_ and a _client_ communicate over a session channel. The endpoints \(x\) and \(y\) of the session channel are owned by the server and the client, respectively and exclusively, and must have dual types. To guarantee duality of types, static checks are performed by the type system. If the type of the server endpoint \(x\) is \[S\triangleq?\texttt{Int.?Int.!Bool.end}\] -- meaning that the process owning the channel endpoint \(x\) receives (?) an integer value followed by another integer value and then sends (!) back a boolean value corresponding to the equality test of the integers received -- then the type of the client endpoint \(y\) should be \[\overline{S}\triangleq!\texttt{Int.!Int.?Bool.end}\] -- meaning that the process owning the channel endpoint \(y\) sends an integer value followed by another integer value and then waits to receive back a boolean value -- which is exactly the dual type. There is a precise moment at which a session between two participants is established. It is the _connection_ phase, when a fresh (private) session channel is created and its endpoints are bound to each communicating process. The connection is also the moment when duality, hence mutual compliance of two session types, is verified. In order to establish a connection, primitives like accept/request or \((\nu xy)\) are added to the syntax of terms [51, 27, 53]. When session types and session terms are added to the syntax of standard \(\pi\)-calculus types and terms, respectively, the syntax of types (and, as a consequence, of type environments) usually needs to be split into two separate syntactic categories, one for session types and the other for standard \(\pi\)-calculus types [51, 27, 56, 22]. Common typing features, like subtyping, polymorphism, and recursion, have then to be added to both syntactic categories. Also the syntax of processes will contain both standard \(\pi\)-calculus process constructs and session process constructs (for example, the constructs mentioned above to create session channels). These syntactic redundancies bring in redundancies also in the theory, and can make the proofs of properties of the language heavy. Moreover, if a new type construct is added, the corresponding properties must be checked both on standard \(\pi\)-types and on session types. By "standard type systems" we mean type systems originally studied in depth for sequential languages such as the \(\lambda\)-calculus and then transplanted onto the \(\pi\)-calculus as types for channel names (rather than types for terms as in the \(\lambda\)-calculus); they include, for instance, constructs for products, records, variants, polymorphism, linearity, capabilities, and so on. 
A further motivation for investigating the expressiveness of the \(\pi\)-calculus with or without session types is the similarity between session constructs and standard \(\pi\)-calculus constructs. Consider the type \(S=?\texttt{Int.?Int.!Bool.end}\). This type is assigned to a session channel endpoint and it describes a structured sequence of inputs and outputs by specifying the type of messages that the channel can transmit. This way of proceeding reminds us of the _linearised_ channels [34], which are channels used multiple times for communication but only in a sequential manner. Linearised types can, in turn, be encoded into linear types--i.e., channel types used _exactly once_[34]. Similarly, there are analogies between the branch and select constructs of session types and the _variant_ types [45, 46] of standard \(\pi\)-calculus types, as well as between the duality of session types, in which the behaviour of a session channel is split into two endpoints, and the _capability types_ of the standard \(\pi\)-calculus, that allow one to separate the input and output usages of channels. In this paper we follow the encoding of binary session types into linear \(\pi\)-types from [14, 15], then discuss some extensions and applications. The encoding was first suggested by Kobayashi [33], as a proof-of-concept without however formally studying it. Later, Demangeon and Honda [17] proposed an encoding of session types into \(\pi\)-types with the aim of studying the subtyping relation, and proving properties such as soundness of the encoding with respect to typing and full abstraction. Structure of the paper.The rest of the paper is organised as follows. In Section 3 we introduce the necessary background about the session \(\pi\)-calculus and the linear \(\pi\)-calculus. In Section 4 we recall the encoding from the session \(\pi\)-calculus into the linear \(\pi\)-calculus, as well as its correctness result. In Section 5 and Section 6 we discuss respectively some extensions and some applications of the encoding. ## 3 Background: \(\pi\)-calculus and session types In this section, we recall the syntax and semantics of our two calculi of interest: the session \(\pi\)-calculus and the standard typed \(\pi\)-calculus. We also introduce the notion of duality for session types. Figure 1: Syntax and reduction semantics of the session \(\pi\)-calculus Session types and terms.The syntax for session types and session \(\pi\)-calculus terms is reported in Figure 1, together with the rules for the reduction semantics, in which \(\equiv\) is the usual _structural congruence_ relation, allowing one to rearrange parallel compositions and the scope of restrictions and to remove useless restrictions. We refer to, e.g., [53, 22] for the rules for typing. Session types range over \(S\) and types range over \(T\); the latter include session types, standard channel types denoted by \(\sharp T\), data types, such as Unit and any other type construct needed for mainstream programming. Session types are: end, the type of a terminated channel; \(?T.S\) and \(!T.S\) (used in the equality test example given in the introduction) indicating, respectively, the receive and send of a value of type \(T\), with continuation type \(S\). Branch and select are sets of labelled session types, whose labels have indices ranging over a non-empty set \(I\). 
Branch \(\&\{l_{i}:S_{i}\}_{i\in I}\) indicates an external choice, namely what is offered, and it is a generalisation of the input type in which the continuation \(S_{i}\) _depends_ on the received label \(l_{i}\). Dually, select \(\oplus\{l_{i}:S_{i}\}_{i\in I}\) indicates an internal choice, where only one of the available labels \(l_{i}\) will be chosen, and it is a generalisation of the output type.

Session processes range over \(P,Q\). The output process \(x!\langle v\rangle.P\) sends a value \(v\) on channel endpoint \(x\) and continues as \(P\); the input process \(x?(y).P\) receives on \(x\) a value to substitute for the placeholder \(y\) in the continuation \(P\). The selection process \(x\triangleleft l_{j}.P\) selects label \(l_{j}\) on channel \(x\) and proceeds as \(P\). The branching process \(x\triangleright\{l_{i}:P_{i}\}_{i\in I}\) offers a range of labelled alternative processes on channel \(x\). The session restriction construct \((\nu xy)P\) creates a session channel, more precisely its two endpoints \(x\) and \(y\), and binds them in \(P\). As usual, the term \(\mathbf{0}\) denotes a terminated process and \(P\mid Q\) the parallel composition of \(P\) and \(Q\).

Figure 2: Syntax and reduction rules of the standard typed \(\pi\)-calculus

Duality. Session type duality is a key ingredient in session types theory as it is necessary for communication safety. Two processes willing to communicate, e.g., the client and the server in the equality test, must first agree on a session protocol. Intuitively, client and server should perform dual operations: when one process sends, the other receives; when one offers, the other chooses. Hence, the dual of an input must be an output, the dual of branch must be a select, and vice versa. Formally, duality on session types is defined as the following function: \[\begin{array}{rcl}\overline{\mathsf{end}}&\triangleq&\mathsf{end}\\ \overline{!T.S}&\triangleq&?T.\overline{S}\\ \overline{?T.S}&\triangleq&!T.\overline{S}\\ \overline{\oplus\{l_{i}:S_{i}\}_{i\in I}}&\triangleq&\&\{l_{i}:\overline{S}_{i}\}_{i\in I}\\ \overline{\&\{l_{i}:S_{i}\}_{i\in I}}&\triangleq&\oplus\{l_{i}:\overline{S}_{i}\}_{i\in I}\end{array}\]

The static checks performed by the typing rules make sure that the peer endpoints of the same session channel have dual types. In particular, this is checked in the restriction rule \((\textsc{t-Res})\) below: \[(\textsc{t-Res})\quad\frac{\Gamma,x:T,y:\overline{T}\vdash P}{\Gamma\vdash(\nu xy)P}\]

Standard \(\pi\)-calculus. The syntax and reduction semantics for the standard \(\pi\)-calculus are shown in Figure 2. We use \(t\) to range over standard \(\pi\)-types, to distinguish them from types \(T\) and session types \(S\), given in the previous paragraph. We also use the notation \(\widetilde{\cdot}\) to indicate (finite) sequences of elements. Standard \(\pi\)-types specify the _capabilities_ of channels. The type \(\emptyset[]\) is assigned to a channel without any capability, which cannot be used for any input/output action. Standard types \(\ell_{\mathtt{i}}[\widetilde{t}\,]\) and \(\ell_{\mathtt{o}}[\widetilde{t}\,]\) are assigned to channels used _exactly once_ to receive and to send a sequence of values of type \(\widetilde{t}\), respectively. The variant type \(\langle l_{i}\_t_{i}\rangle_{i\in I}\) is a labelled form of disjoint union of types. On the process side, besides input, output, standard restriction \((\nu x)P\), parallel composition and inaction, there is a **case** process, **case** \(v\) **of** \(\{l_{i}\_(x_{i})\triangleright P_{i}\}_{i\in I}\), which deconstructs a variant value \(v\) by branching on its label.
## 4 The Encoding

Type encoding. The encoding of session types into standard \(\pi\)-types is given at the top of Figure 3. The terminated session type end is encoded as the type \(\emptyset[]\) of a channel without capabilities. The input type \(?T.S\) is encoded as the linear input type \(\ell_{\mathtt{i}}[[\![T]\!],[\![S]\!]]\), carrying a payload of type \([\![T]\!]\) together with a fresh channel of type \([\![S]\!]\) on which the rest of the session is carried out. The output type \(!T.S\) is encoded as the linear output type \(\ell_{\mathtt{o}}[[\![T]\!],[\![\overline{S}]\!]]\); here the continuation is typed with the encoding of the _dual_ of \(S\), because the continuation channel, sent along with the payload, describes the type of a channel as it will be used by the receiver process. The branch and the select types are encoded as linear input and linear output channels carrying variant types having labels \(l_{i}\) and types that are respectively the encoding of \(S_{i}\) and the encoding of \(\overline{S_{i}}\) for all \(i\in I\). Again, the reason for using duality of the continuation in the encoding of the select type is the same as for the output type, as select is a generalisation of output type.

Process encoding. The encoding of session processes into standard \(\pi\)-processes is shown at the bottom of Figure 3. The encoding of a process \(P\) is parametrised by a function \(f\) from channel names to channel names. We say that \(f\) is a _renaming function for \(P\)_ if, for all the names \(x\) that occur free in \(P\), either \(f(x)=x\) or \(f(x)\) is a fresh name not occurring in \(\mathtt{n}(P)\), where \(\mathtt{n}(P)\) is the set of all names of \(P\), both free and bound. Also, \(f\) is the identity function on all bound names of \(P\). Hereafter we write \(\mathtt{dom}(f)\) for the domain of \(f\) and \(f_{x}\) as an abbreviation for \(f(x)\). During the encoding of a session process, its renaming function \(f\) is progressively updated. For example, we write \(f\{x\mapsto c\}\) or \(f\{x,y\mapsto c\}\) for the update of \(f\) such that the names \(x\) and \(y\) are associated to \(c\). The notion of a renaming function is extended also to values as expected. In the uses of the definition of the renaming function \(f\) for \(P\) (respectively \(v\)), process \(P\) (respectively value \(v\)) will be typed in a typing context, say \(\Gamma\). It is implicitly assumed that the fresh names used by \(f\) (that is, the names \(y\) such that \(y=f(x)\), for some \(x\neq y\)) are also fresh for \(\Gamma\).

The motivation for parametrising the encoding of processes and values with a renaming function stems from the key idea of encoding a structured communication over a session channel as a chain of one-shot communications over _linear_ channels. Whenever we transmit some payload on a linear channel, the payload is paired with a fresh continuation channel on which the rest of the communication takes place. Such a continuation, being fresh, is different from the original channel. Thus, the renaming function allows us to keep track of this fresh name after each communication.

Figure 3: Encoding of types, values and processes.

We now provide some more details on the encoding of terms. Values are encoded as expected, so that a channel name \(x\) is encoded to \(f_{x}\) and the unit value \(\star\) is encoded to itself. This encoding is trivially extended to every ground value added to the language.
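The type encoding can be rendered in the same illustrative Python style (this is our sketch of the clauses just described, not an artifact of the papers; it is self-contained, so the duality function is repeated). The final assertions check, on the equality-test type, that the encodings of dual session types agree except for the outermost capability.

```
def dual(S):
    """Session type duality, as sketched earlier."""
    tag = S[0]
    if tag == 'end':    return S
    if tag == 'recv':   return ('send', S[1], dual(S[2]))
    if tag == 'send':   return ('recv', S[1], dual(S[2]))
    if tag == 'branch': return ('select', tuple((l, dual(Si)) for l, Si in S[1]))
    if tag == 'select': return ('branch', tuple((l, dual(Si)) for l, Si in S[1]))

def enc(T):
    """Encoding of session types into linear pi-types (cf. Figure 3)."""
    tag = T[0] if isinstance(T, tuple) else None
    if tag == 'end':                     # end  ->  no-capability channel
        return ('no_cap',)
    if tag == 'recv':                    # ?T.S ->  lin_in[[T], [S]]
        return ('lin_in', (enc(T[1]), enc(T[2])))
    if tag == 'send':                    # !T.S ->  lin_out[[T], [dual S]]
        return ('lin_out', (enc(T[1]), enc(dual(T[2]))))
    if tag == 'branch':                  # &    ->  lin_in of a variant
        return ('lin_in', (('variant', tuple((l, enc(S)) for l, S in T[1])),))
    if tag == 'select':                  # +    ->  lin_out of a variant of duals
        return ('lin_out', (('variant', tuple((l, enc(dual(S))) for l, S in T[1])),))
    return T                             # base types (Int, Bool, ...) unchanged

S = ('recv', 'Int', ('recv', 'Int', ('send', 'Bool', ('end',))))
assert enc(S)[0] == 'lin_in' and enc(dual(S))[0] == 'lin_out'
assert enc(S)[1] == enc(dual(S))[1]      # identical payloads (cf. Example 4.1)
```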
In the encoding of the output process, a new channel name \(c\) is created and is sent together with the encoding of the payload \(v\) along the channel \(f_{x}\). The encoding of the continuation process \(P\) is parametrised by an updated \(f\) where the name \(x\) is associated to \(c\). Similarly, the input process listens on channel \(f_{x}\) and receives a pair whose first element (the payload) replaces the name \(y\) and whose second element (the continuation channel \(c\)) replaces \(x\) in the continuation process by means of the updated renaming function \(f\{x\mapsto c\}\).

As indicated in Section 3, session restriction \((\nu xy)P\) creates two fresh names and binds them in \(P\) as the opposite endpoints of the same session channel. This is not needed in the standard \(\pi\)-calculus, where the restriction construct \((\nu x)P\) creates and binds a unique name \(x\) in \(P\); this name identifies both endpoints of the communicating channel. The encoding of a session restriction process \((\nu xy)P\) is a standard channel restriction process \((\nu c)[\![P]\!]_{f\{x,y\mapsto c\}}\) with the new name \(c\) used to substitute both \(x\) and \(y\) in the encoding of \(P\).

Selection \(x\triangleleft l_{j}.P\) is encoded as the process that first creates a new channel \(c\) and then sends on \(f_{x}\) a variant value \(l_{j}\_c\), where \(l_{j}\) is the selected label and \(c\) is the channel created to be used for the continuation of the session. The encoding of branching receives on \(f_{x}\) a value, typically being a variant value, which is the guard of the **case** process. According to the transmitted label, one of the corresponding processes \([\![P_{i}]\!]_{f\{x\mapsto c\}}\) for \(i\in I\) will be chosen. Note that the name \(c\) is bound in any process \([\![P_{i}]\!]_{f\{x\mapsto c\}}\). The encoding of the other process constructs, namely inaction, standard channel restriction, and parallel composition, acts as a homomorphism.

**Example 4.1** (Equality test).: _We illustrate the encoding of session types and terms on the equality test from the introduction. Thus we also make use of boolean and integer values, and simple operations on them, whose addition to the encoding is straightforward._

_The encoding of the server's session type \(S\) is_ \[[\![S]\!]=\ell_{\mathtt{i}}[\mathtt{Int},\ell_{\mathtt{i}}[\mathtt{Int},\ell_{\mathtt{o}}[\mathtt{Bool},\emptyset[]]]]\] _while that of the client's session type \(\overline{S}\) is_ \[[\![\overline{S}]\!]=\ell_{\mathtt{o}}[\mathtt{Int},\ell_{\mathtt{i}}[\mathtt{Int},\ell_{\mathtt{o}}[\mathtt{Bool},\emptyset[]]]]\] _Note how the encoding of dual session types boils down to linear channel types that have the same payload and dual outermost capabilities \(\ell_{\mathtt{i}}[\cdot]\) and \(\ell_{\mathtt{o}}[\cdot]\).
This property holds in general and can be exploited to express the (complex) notion of session type duality in terms of the (simple) property of type equality, as we will see in Section 6._

_The server process, communicating on endpoint \(x\) of type \(S\), is_ \[\mathit{server}\triangleq x?(z_{1}).x?(z_{2}).x!\langle z_{1}==z_{2}\rangle.\mathbf{0}\] _and the client process, communicating on endpoint \(y\) of type \(\overline{S}\), is_ \[\mathit{client}\triangleq y!\langle 3\rangle.y!\langle 5\rangle.y?(eq).\mathbf{0}\] _Then we have_ \[[\![\mathit{server}]\!]_{\{x\mapsto s\}}=s?(z_{1},c).[\![x?(z_{2}).x!\langle z_{1}==z_{2}\rangle.\mathbf{0}]\!]_{\{x\mapsto c\}}=s?(z_{1},c).c?(z_{2},c^{\prime}).(\nu c^{\prime\prime})c^{\prime}!\langle z_{1}==z_{2},c^{\prime\prime}\rangle.\mathbf{0}\] _Similarly,_ \[[\![\mathit{client}]\!]_{\{y\mapsto s\}}=(\nu c)s!\langle 3,c\rangle.(\nu c^{\prime})c!\langle 5,c^{\prime}\rangle.c^{\prime}?(eq,c^{\prime\prime}).\mathbf{0}\] _The whole server-client system is thus encoded as follows, using \(\emptyset\) for the identity function._ \[[\![(\nu xy)(\mathit{server}\mid\mathit{client})]\!]_{\emptyset}=(\nu s)[\![(\mathit{server}\mid\mathit{client})]\!]_{\{x,y\mapsto s\}}=(\nu s)\left([\![\mathit{server}]\!]_{\{x\mapsto s\}}\mid[\![\mathit{client}]\!]_{\{y\mapsto s\}}\right)\] _(The update \(\{x,y\mapsto s\}\) reduces to \(\{x\mapsto s\}\) on the server and to \(\{y\mapsto s\}\) on the client because they do not contain occurrences of \(y\) and \(x\), respectively.)_

Correctness of the encoding. The presented encoding can be considered as a semantics of session types and session terms. The following theoretical results show that indeed we can derive the typing judgements and the properties of the \(\pi\)-calculus with sessions via the encoding and the corresponding properties of the linear \(\pi\)-calculus.

First, the correctness of an encoded typing judgement on the target terms implies the correctness of the judgement on the source terms, and conversely. Similar results hold for values. The encoding is extended to session typing contexts in the expected manner.

**Theorem 4.2** (Type correctness).: _The following properties hold:_

1. _If_ \(\Gamma\vdash P\)_, then_ \([\![\Gamma]\!]_{f}\vdash[\![P]\!]_{f}\) _for some renaming function_ \(f\) _for_ \(P\)_;_
2. _If_ \([\![\Gamma]\!]_{f}\vdash[\![P]\!]_{f}\) _for some renaming function_ \(f\) _for_ \(P\)_, then_ \(\Gamma\vdash P\)_._

Theorem 4.2, and more precisely its proof [15, 12], shows that the encoding can actually be used to reconstruct the typing rules of session types. That is, the typing rules for an operator op of the session \(\pi\)-calculus can be 'read back' from the typing of the encoding of op.

Next we recall the operational correctness of the encoding, that is, the property that the encoding allows one to faithfully reconstruct the behaviour of a source term from that of the corresponding target term. We recall that \(\rightarrow\) is the reduction relation of the two calculi. We write \(\hookrightarrow\) for the extension of the structural congruence \(\equiv\) with a _case normalisation_ indicating the decomposition of a variant value (Section 3).

**Theorem 4.3** (Operational correspondence).: _Let \(P\) be a session process, \(\Gamma\) a session typing context, and \(f\) a renaming function for \(P\) such that \([\![\Gamma]\!]_{f}\vdash[\![P]\!]_{f}\). Then the following statements hold._

1. _If_ \(P\to P^{\prime}\)_, then_ \([\![P]\!]_{f}\rightarrow\hookrightarrow[\![P^{\prime}]\!]_{f}\)_.
2.
_If_ \([\![P]\!]_{f}\to Q\)_, then there is a session process_ \(P^{\prime}\) _such that_
* _either_ \(P\to P^{\prime}\) _and_ \(Q\hookrightarrow[\![P^{\prime}]\!]_{f}\)_;_
* _or there are_ \(x\) _and_ \(y\) _such that_ \((\nu xy)P\to P^{\prime}\) _and_ \(Q\hookrightarrow[\![P^{\prime}]\!]_{f}\)_._

Statement 1 of the above theorem tells us that the reduction of an encoded process mimics faithfully the reduction of the source process, modulo structural congruence or case normalisation. Statement 2 of the theorem tells us that if the encoding of a process \(P\) reduces to the encoding of a process \(P^{\prime}\) (via some intermediate process \(Q\)), then the source process \(P\) will reduce directly to \(P^{\prime}\) or it might need a wrap-up under restriction. The reason for the latter is that in the session \(\pi\)-calculus [53], reduction only occurs under restriction and cannot occur along free names. In particular, in the theorem, \(f\) is a generic renaming function; this function could map two free names, say \(x\) and \(y\), onto the same name; in this case, an input at \(x\) and an output at \(y\) in the source process could not produce a reduction, whereas they might in the target process.

The two theorems above allow us to derive, as a straightforward corollary, the subject reduction property for the session calculus.

**Corollary 4.4** (Session Subject Reduction).: _If \(\Gamma\vdash P\) and \(P\to Q\), then \(\Gamma\vdash Q\)._

Other properties of the session \(\pi\)-calculus can be similarly derived from corresponding properties of the standard \(\pi\)-calculus. For instance, since the encoding respects structural congruence (that is, \(P\equiv P^{\prime}\) if and only if \(\llbracket P\rrbracket_{f}\equiv\llbracket P^{\prime}\rrbracket_{f}\)), we can derive the invariance of typing under structural congruence in the session \(\pi\)-calculus.

**Corollary 4.5** (Session Structural Congruence).: _If \(\Gamma\vdash P\) and \(P\equiv P^{\prime}\), then also \(\Gamma\vdash P^{\prime}\)._

## 5 Extensions

In this section we discuss several extensions of the presented encoding, which have been proposed in order to accommodate the additional features of subtyping, polymorphism, recursion, higher-order communication and multiparty interactions.

Subtyping. Subtyping is a relation between types based on a notion of substitutability. If \(T\) is a subtype of \(T^{\prime}\), then any channel of type \(T\) can be safely used in a context where a channel of type \(T^{\prime}\) is expected. In the standard \(\pi\)-calculus, subtyping originates from _capability types_--the possibility of distinguishing the input and output usage of channels [43, 46]. (This is analogous to what happens in languages with references, where capabilities are represented by the read and write usages.) Precisely, the input channel capability is co-variant, whereas the output channel capability is contra-variant in the types of values transmitted (the use of capabilities is actually necessary with linear types, as reported in Figure 2). Subtyping can then be enhanced by means of the variant types, which are co-variant both in depth and in breadth. In the case of the session \(\pi\)-calculus, subtyping must be dealt with also at the level of session types [22]; for instance, branch and select are both co-variant in depth, whereas they are co-variant and contra-variant in breadth, respectively. This duplication of effort can become heavy, particularly when types are enriched with other constructs (recursive types are a good example).
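To convey how these variances combine, here is a rough structural sketch of a subtype check over the encoded types, in the tuple representation used earlier. It is our reading of the rules just described (linear input covariant, linear output contravariant, variants covariant in depth and breadth), not code from the cited papers.

```
def subtype(t1, t2):
    """Structural check of t1 <= t2 on (encoded) standard pi-types."""
    if t1 == t2:                                   # base types and ('no_cap',)
        return True
    if not (isinstance(t1, tuple) and isinstance(t2, tuple)):
        return False
    k1, k2 = t1[0], t2[0]
    if k1 == k2 == 'lin_in':                       # covariant in the payload
        return len(t1[1]) == len(t2[1]) and \
            all(subtype(a, b) for a, b in zip(t1[1], t2[1]))
    if k1 == k2 == 'lin_out':                      # contravariant in the payload
        return len(t1[1]) == len(t2[1]) and \
            all(subtype(b, a) for a, b in zip(t1[1], t2[1]))
    if k1 == k2 == 'variant':                      # covariant in breadth and depth
        d1, d2 = dict(t1[1]), dict(t2[1])
        return set(d1) <= set(d2) and all(subtype(d1[l], d2[l]) for l in d1)
    return False

# A variant with fewer labels is a subtype of one with more:
small = ('variant', (('ok', ('no_cap',)),))
large = ('variant', (('ok', ('no_cap',)), ('err', ('no_cap',))))
assert subtype(('lin_in', (small,)), ('lin_in', (large,)))    # input: preserved
assert subtype(('lin_out', (large,)), ('lin_out', (small,)))  # output: flipped
```

Pushed through the encoding, the breadth variances of branch and select quoted above fall out of exactly these clauses; this is made precise by the theorem below.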
The encoding of session types naturally accommodates subtyping: indeed, subtyping of the standard \(\pi\)-calculus can be used to derive subtyping on session types. Writing \(<:\) and \(\leq\) for, respectively, subtyping for session types and for standard \(\pi\)-types, for instance we have:

**Theorem 5.1** (Encoding for Subtyping).: \(T<:T^{\prime}\) _if and only if \(\llbracket T\rrbracket\leq\llbracket T^{\prime}\rrbracket\)._

Polymorphism and Higher-Order Communication. _Polymorphism_ is a common and useful type abstraction in programming languages, as it allows operations that are generic by using an expression with several types. Parametric polymorphism has been studied in the standard \(\pi\)-calculus [46], and in the \(\pi\)-calculus with session types [4]; for bounded polymorphism in the session \(\pi\)-calculus see Gay [21]. The _Higher-Order \(\pi\)-calculus_ (HO\(\pi\)) models mobility of processes that can be sent and received and thus can be run locally [46]. Higher-order communication for the session \(\pi\)-calculus [39] has the same benefits as for the \(\pi\)-calculus; in particular, it models code mobility in a distributed scenario.

Extensions of the encoding to support polymorphism and HO\(\pi\) have been studied in [14, 15, 12] and used to test its robustness. The syntax of types and terms is extended to accommodate the new constructs. For polymorphism, session types and standard \(\pi\)-types are extended with a _type variable_ \(X\) and with _polymorphic types_ \(\langle X;T\rangle\) and \(\langle X;t\rangle\), respectively. For higher-order communication, session types and standard \(\pi\)-types are extended with the functional type \(T\to\sigma\), assigned to a functional term that can be used without any restriction, and with the linear functional type \(T\stackrel{{1}}{{\to}}\sigma\) that must be used exactly once. Correspondingly, the syntax of processes is extended to accommodate the _unpacking_ process (**open** \(v\) **as** \((X;x)\) **in** \(P\)) to deal with polymorphism, and with call-by-value \(\lambda\)-calculus primitives, namely _abstraction_ (\(\lambda x:T.P\)) and _application_ (\(PQ\)), to deal with higher-order communication. The encoding of the new type and process constructs is a homomorphism in all cases. Consequently, the proof cases added to Theorems 4.2 and 4.3 are trivial.

Recursion. The encoding was also extended to accommodate recursive types and replicated processes by Dardha [11]. Here, the newly added types are a recursive type \(\mu X.T\) and a type variable \(X\), as well as the replicated process \(*P\). Recursive (session) types are required to be _guarded_, meaning that in \(\mu X.T\), variable \(X\) may occur free in \(T\) only under at least one of the other type constructs. The paper uses a new duality function, called _complement_, which is inspired by the work of Bernardi et al. [3, 2]. Some new cases for the encoding of recursive session types and processes are: \[\begin{array}{rcl}\llbracket X\rrbracket&\triangleq&X\\ \llbracket\mu X.S\rrbracket&\triangleq&\mu X.\llbracket S\rrbracket\\ \llbracket*P\rrbracket_{f}&\triangleq&*\llbracket P\rrbracket_{f}\end{array}\] The extended encoding is proved to be sound and complete with respect to typing and reduction (aka operational correspondence). We refer the interested reader to [11, 12].

Multiparty Session Types. Multiparty Session Types (MPSTs) [29, 30] accommodate communications between more than two participants.
Since their introduction, they have become a major area of investigation within the session type community. Their meta-theory is more complex than that of the binary case, and it is beyond the scope of this paper to revise it in detail. The core syntax of _multiparty session types_ is given by the following grammar: \[\begin{array}{rcll}S&::=&\mathtt{end}\mid X\mid\mu X.S&\text{(termination, type variable, recursive type)}\\ &\mid&\mathtt{p}\oplus_{i\in I}!l_{i}(U_{i}).S_{i}&\text{(select towards role $\mathtt{p}$)}\\ &\mid&\mathtt{p}\&_{i\in I}?l_{i}(U_{i}).S_{i}&\text{(branch from role $\mathtt{p}$)}\\[1ex] B&::=&\mathtt{Unit}\mid\ldots&\text{(base type)}\\ U&::=&B\mid S\text{ (closed under $\mu$)}&\text{(payload type)}\end{array}\] where selection and branching types are annotated with _roles_ identifying the participant of a multiparty session to which a message is sent or from which a message is expected. The message consists of a label \(l_{i}\) and a payload of type \(U_{i}\), whereas the continuation \(S_{i}\) indicates how the session endpoint is meant to be used afterwards.

A multiparty session type describes the behaviour of a participant of a multiparty session with respect to all the other participants it interacts with, identified by their role in the session type. In order to obtain the behaviour of a participant with respect to another _particular_ participant of the multiparty session, say \(\mathtt{q}\), the multiparty session type must be _projected_ onto \(\mathtt{q}\). Hereafter, we write \(S\!\upharpoonright\!\mathtt{q}\) for the _partial projection of \(S\) onto \(\mathtt{q}\)_, referring to [48, 49] for its precise definition. Projection yields a type defined by the following syntax, which resembles that of binary session types: \[\begin{array}{rcll}H&::=&\mathtt{end}\mid X\mid\mu X.H&\text{(termination, type variable, recursive type)}\\ &\mid&\oplus_{i\in I}!l_{i}(U_{i}).H_{i}&\text{(select)}\\ &\mid&\&_{i\in I}?l_{i}(U_{i}).H_{i}&\text{(branch)}\end{array}\] Projection is a key feature of MPSTs as it is needed in the technical development of a sound type system. At the same time, it also provides a hook by which multiparty sessions and multiparty session types can be encoded in the standard \(\pi\)-calculus through the encoding of (binary) session types that we have outlined in Section 4.

Let us briefly comment on the encoding of MPST into linear types given by Scalas et al. [48, 49]. This encoding is fully fledged, as it covers the whole MPST theory and preserves its _distributivity_. Previous work by Caires and Perez [4] presents an encoding of MPST into binary session types via a _medium_ process, which acts as an orchestrator for the encoding, thus losing distributivity. In the encoding of Scalas et al. no orchestrator is used, hence the encoding preserves its intended choreographic nature as opposed to being orchestrated. The encoding of a multiparty session type from Scalas et al. is formally defined as: \[\llbracket S\rrbracket\triangleq\{\mathtt{p}:\llbracket S\!\upharpoonright\!\mathtt{p}\rrbracket\}_{\mathtt{p}\in S}\] resulting in a _record_ of types with an entry _for each role_ \(\mathtt{p}\) occurring in the multiparty session type \(S\). The encoding of a projected type, namely \(\llbracket S\!\upharpoonright\!\mathtt{p}\rrbracket\), can then be obtained by suitably adapting the function defined in Figure 3.
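Read concretely, the top-level clause amounts to something like the sketch below, in which the partial projections are assumed to be given (computing \(S\!\upharpoonright\!\mathtt{p}\) is not shown) and enc_binary stands for a Figure-3 style encoding of the projected types \(H\); all names are illustrative.

```
def enc_mpst(projections, enc_binary):
    """Encode a multiparty session type as a record of linear types:
    one entry per role p occurring in S, holding the encoding of the
    partial projection of S onto p.

    projections : dict mapping each role name p to the projected type.
    enc_binary  : the binary-style encoding applied to each projection.
    """
    return {p: enc_binary(H) for p, H in projections.items()}
```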
The main cases are summarised below; the encoding is a homomorphism for the other constructs in the syntactic category \(H\) presented above. \[\begin{array}{rcl}\llbracket\oplus_{i\in I}!l_{i}(U_{i}).H_{i}\rrbracket&\triangleq&\ell_{\mathtt{o}}[\langle l_{i}\_(\llbracket U_{i}\rrbracket,\llbracket H_{i}\rrbracket)\rangle_{i\in I}]\\ \llbracket\&_{i\in I}?l_{i}(U_{i}).H_{i}\rrbracket&\triangleq&\ell_{\mathtt{i}}[\langle l_{i}\_(\llbracket U_{i}\rrbracket,\llbracket H_{i}\rrbracket)\rangle_{i\in I}]\end{array}\] The encoding of processes is quite complex and beyond the scope of this paper. The interested reader may refer to Scalas et al. [48, 49] for the formal details and a Scala implementation of multiparty sessions based on this encoding. The encoding of MPST into linear types satisfies several properties, including duality and subtyping preservation, correctness of the encoding with respect to typing, operational correspondence and deadlock freedom preservation. These properties are given in Section 6 of [48].

## 6 Applications

The encoding from session types to linear channel types can be thought of as a way of "explaining" a high-level type language in terms of a simpler, lower-level type language. Protocols written in the lower-level type language tend to be more cumbersome and less readable than the session types they encode. For this reason, it is natural to think of the encoding as nothing more than a theoretical study. Yet, as we are about to see in this section, the very same encoding has also enabled (or at least inspired) further advancements in the theory and practice of session types.

### A Type System for Deadlock Freedom

A well-typed session \(\pi\)-calculus process (and equivalently a well-typed standard \(\pi\)-calculus one) enjoys communication safety (no message with unexpected type is ever exchanged) but not deadlock freedom. For example, both the session \(\pi\)-calculus process \[(\nu x_{1}x_{2})(\nu y_{1}y_{2})\left(x_{1}?(z).y_{1}!\langle z\rangle.\mathbf{0}\mid y_{2}?(w).x_{2}!\langle w\rangle.\mathbf{0}\right)\tag{1}\] and the standard \(\pi\)-calculus process \[(\nu x)(\nu y)\left(x?().y!\langle\rangle.\mathbf{0}\mid y?().x!\langle\rangle.\mathbf{0}\right)\tag{2}\] are well-typed in the respective typing disciplines, but the behaviours they describe on the two sessions/channels are intermingled in such a way that no communication can actually occur: the input from each session/channel must be completed in order to perform the output on the other session/channel.

Several type systems that ensure deadlock freedom in addition to communication safety have been studied for session and standard typed \(\pi\)-calculi. In a particular line of work, Kobayashi [31, 33] has studied a typing discipline that associates _priorities_ to channel types so as to express, at the type level, the relative order in which channels are used, thus enabling the detection of circular dependencies, such as the one shown above. Later on, Padovani [42] has specialised this technique for the linear \(\pi\)-calculus and, as an effect of the encoding illustrated in Section 4, for the session \(\pi\)-calculus as well. To illustrate the technique, in this section we consider a refinement of the linear input/output types in Figure 2 as follows \[t::=\ell_{\mathtt{o}}[\widetilde{t}\,]^{m}\;\mid\;\ell_{\mathtt{i}}[\widetilde{t}\,]^{n}\;\mid\;\cdots\] where \(m\) and \(n\) are integers representing priorities: the smaller the number, the higher the priority with which the channel must be used.
In the process (2) above, we could assign the types \(\ell_{\mathtt{i}}[]^{m}\) and \(\ell_{\mathtt{o}}[]^{n}\) to respectively \(x\) and \(y\) on the lhs of \(\mid\), and the types \(\ell_{\mathtt{o}}[]^{m}\) and \(\ell_{\mathtt{i}}[]^{n}\) to respectively \(x\) and \(y\) on the rhs of \(\mid\). Note that each channel is assigned two types having _dual polarities_ (each channel is used in complementary ways on the two sides of \(\mid\)) and the _same priority_. Then, the type system imposes constraints on priorities to reflect the order in which channels are used: on the lhs of \(\mid\) we have the constraint \(m<n\), since the input on \(x\) (with priority \(m\)) blocks the output on \(y\) (with priority \(n\)); on the rhs of \(\mid\) the opposite happens, resulting in the constraint \(n<m\). Obviously, these two constraints are not simultaneously satisfiable, hence the process as a whole is ruled out as ill typed.

In such simple form, this technique fails to deal with most recursive processes. We illustrate the issue through the following server process that computes the factorial of a natural number, in which we use a few standard extensions (replication, conditional, numbers and their operations) to the calculus introduced earlier. \[*fact?(x,y).\textbf{if}\;x=0\textbf{ then}\;y!\langle 1\rangle\textbf{ else}\;(\nu z)\left(fact!\langle x-1,z\rangle\mid z?(k).y!\langle x\times k\rangle\right)\tag{3}\] The server accepts requests on a shared channel _fact_. Each request carries a natural number \(x\) and a linear channel \(y\) on which the factorial of \(x\) is sent as response. The modelling follows the standard recursive definition of the factorial function. In particular, in the recursive case a fresh linear channel \(z\) is created from which the factorial \(k\) of \(x-1\) is received. At that point, the factorial \(x\times k\) of \(x\) can be sent on \(y\).

Now assume, for the sake of illustration, that \(m\) and \(n\) are the priorities associated with \(y\) and \(z\), respectively. Since \(z\) is used in the same position as \(y\) in the recursive invocation of _fact_, we expect that \(z\) and \(y\) should have the same type hence the same priority \(m=n\). This clashes with the input on \(z\) that blocks the output on \(y\), requiring \(n<m\).

The key observation we can make in order to come up with a more flexible handling of priorities is that a replicated process like (3) above cannot have any _free_ linear channel. In fact, the only free channel _fact_ is a shared one, whereas \(y\) is received by the process and \(z\) is created within the process. As a consequence, the absolute value of the priorities \(m\) and \(n\) we associate with \(y\) and \(z\) does not matter (as long as they satisfy the constraint \(n<m\)) and they can vary from one request to another. In more technical terms, this corresponds to making _fact_ _polymorphic_ in the priority of the channel \(y\) received from it and allowing a (priority-limited) form of _polymorphic recursion_ when we type outputs such as \(fact!\langle x-1,z\rangle\). It must be pointed out that a process such as (3) is in the scope of Kobayashi's type systems [33]. The additional expressiveness resulting from priority polymorphism enables the successful analysis of recursive processes that interleave actions on different linear channels also in cyclic network topologies.
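The flavour of the check can be conveyed with a small sketch: record, for each parallel component, a constraint \(m<n\) whenever the action with priority \(m\) blocks the action with priority \(n\), and reject the process when the accumulated strict-order constraints are cyclic. This is only a cartoon of the type systems cited above, with made-up data structures.

```
from collections import defaultdict

def satisfiable(constraints):
    """True iff some integer assignment satisfies all m < n constraints,
    i.e. the induced directed graph has no cycle (DFS check)."""
    graph = defaultdict(list)
    for m, n in constraints:
        graph[m].append(n)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def has_cycle(u):
        colour[u] = GREY
        for v in graph[u]:
            if colour[v] == GREY or (colour[v] == WHITE and has_cycle(v)):
                return True
        colour[u] = BLACK
        return False

    return not any(colour[u] == WHITE and has_cycle(u) for u in list(graph))

# Process (2): x gets priority m, y gets priority n on both sides of |.
print(satisfiable([('m', 'n')]))              # True: lhs alone is typable
print(satisfiable([('m', 'n'), ('n', 'm')]))  # False: whole process ill typed
```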
We do not showcase these more complex scenarios in this brief survey, instead referring the interested reader to [42] for an exhaustive presentation of the technique and to [43] for a proof-of-concept implementation.

As a final remark, it is interesting to note that this technique can be retrofitted to a calculus with native sessions, but it was born in the context of the standard \(\pi\)-calculus, which features a more primitive communication model. The point is that, in the standard \(\pi\)-calculus, sequential communications are encoded in a continuation-passing style, meaning that higher-order channels are the norm rather than the exception. So, the quest for expressive type systems ensuring (dead)lock freedom in the standard \(\pi\)-calculus could not ignore this feature, and this necessity has been a major source of inspiration for the support of priority polymorphism. In this direction, Carbone et al. [9] study (dead)lock freedom for session \(\pi\)-processes using the encoding from Section 4 and the technique from [33], and show that this combined technique is more fine-grained than other ones adopted in session \(\pi\)-calculi. Dardha and Perez [17] present a full account of the deadlock freedom property in session \(\pi\)-calculi, and compare deadlock freedom obtained by using the encoding and the work from [33] to linear logic approaches, which are used as a yardstick for deadlock freedom.

### Session Type Inference

A major concern regarding all type systems is their realisability and applicability in real-world programming languages. In this respect, session type systems pose at least three peculiar challenges: (1) the fact that session endpoints must be treated as _linear resources_ that cannot be duplicated or discarded; (2) the need to _update_ the session type associated with a session endpoint each time the endpoint is used; (3) the need to express _session type duality_ constraints in addition to the usual _type equality_ constraints.

The first challenge can be easily dealt with only in those (few) languages that provide native support for linear (or at least affine) types. Alternatively, it is possible to devise mechanisms that detect linearity (or affinity) violations at runtime with a modest overhead. The second challenge can be elegantly addressed by adopting a _functional_ API for sessions [25], whereby each function/method using a session endpoint returns (possibly along with other results) the same endpoint with its type suitably updated. The last challenge, which is the focus of this section, is a subtle one, since session type duality is a complex relation that involves the whole structure of two session types. In fact, it has taken quite some time even just to _correctly define_ duality in the presence of recursive session types [3, 24].

Somewhat surprisingly, the encoding of session types into linear channel types allows us to cope with this challenge in the most straightforward way, simply by _getting rid of it_. In Example 4.1 we have shown two session types, one the dual of the other, whose respective encodings are _equal_ except for the outermost capabilities. This property holds in general.

**Proposition 6.1**.: _Let \(\overline{\emptyset[]}=\emptyset[]\), \(\overline{\ell_{\mathtt{i}}[\widetilde{t}\,]}=\ell_{\mathtt{o}}[\widetilde{t}\,]\) and \(\overline{\ell_{\mathtt{o}}[\widetilde{t}\,]}=\ell_{\mathtt{i}}[\widetilde{t}\,]\).
Then \([\![\overline{S}]\!]=\overline{[\![S]\!]}\) for every \(S\)._

In fact, it is possible to devise a slightly different representation of capabilities so that (session) type duality can be expressed solely in terms of type equality. To this aim, let \(\circ\) and \(\bullet\) be any two types which we use to represent the absence and presence of a given capability, respectively. We do not need any particular property of \(\circ\) and \(\bullet\) except the fact that they must be different. In fact, they need not even be inhabited. Now, we can devise a slightly different syntax for linear channel types, as follows: \[t::=\ell_{\kappa,\iota}[\widetilde{t}\,]\mid\cdots\qquad\kappa,\iota::=\circ\mid\bullet\] The idea is that a linear channel type carries two separate input and output capabilities (hereafter ranged over by \(\kappa\) and \(\iota\)), each of which can be either present or absent. For example, \(\ell_{\circ,\circ}[]\) would be the same as \(\emptyset[]\), \(\ell_{\bullet,\circ}[\widetilde{t}\,]\) would be the same as \(\ell_{\mathtt{i}}[\widetilde{t}\,]\) and \(\ell_{\circ,\bullet}[\widetilde{t}\,]\) would be the same as \(\ell_{\mathtt{o}}[\widetilde{t}\,]\). With this representation of linear channel types the dual of a type can be defined simply as \(\overline{\ell_{\kappa,\iota}[\widetilde{t}\,]}=\ell_{\iota,\kappa}[\widetilde{t}\,]\), where the input/output capabilities are swapped.

Now, suppose that we wish to express a duality constraint \(S=\overline{T}\) stating that \(S\) is the dual of \(T\), and let \(\ell_{\kappa,\iota}[\widetilde{s}\,]=[\![S]\!]\) and \(\ell_{\kappa^{\prime},\iota^{\prime}}[\widetilde{t}\,]=[\![T]\!]\) be the encodings of \(S\) and \(T\), respectively. Using Proposition 6.1 and the revised representation of linear channel types we obtain \[S=\overline{T}\iff\kappa=\iota^{\prime}\wedge\iota=\kappa^{\prime}\wedge\widetilde{s}=\widetilde{t}\] thereby turning a session type duality constraint into a conjunction of type equality constraints.

This apparently marginal consequence of using encoded (as opposed to native) session types makes it possible to rely on completely standard features of conventional type systems to express and infer complex structural relations on session types. In particular, it allows any Hindley-Milner type inference algorithm to perform _session type inference_. FuSe [42] is a library implementation of session types for OCaml that showcases this idea at work. The library supports higher-order sessions, recursive session types and session subtyping by piggybacking on OCaml's type system. Clearly, the inferred (encoded) session types are not as readable as the native ones. This may pose problems in the presence of type errors. To address this issue, the library is accompanied by an external tool called Rosetta that decodes encoded session types and pretty prints them as native ones using the inverse of the encoding function \([\![\cdot]\!]\).1 On similar lines, Scalas and Yoshida [50] develop lchannels, a Scala library for session types fully based on the encoding of session types into linear types. As a result, the structure of a session type is checked statically by analysing its encoding onto channel types in Scala, while linearity is checked dynamically at run time as in FuSe, as Scala has no support for linearity.

Footnote 1: The source code of FuSe and Rosetta is publicly available at [https://github.com/boystrange/FuSe](https://github.com/boystrange/FuSe).
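To see the pay-off, here is the revised representation in the same illustrative Python style as before (the constructors are ours). A duality constraint on encoded session types becomes a plain equality test on tuples, exactly the kind of constraint a Hindley-Milner unifier already solves.

```
# A linear type carries separate input/output capabilities, each present
# or absent, and duality merely swaps them.
PRESENT, ABSENT = '•', '∘'

def lin(kappa, iota, *payload):
    """Linear channel type with input capability kappa, output
    capability iota, and the given payload types."""
    return ('lin', kappa, iota, tuple(payload))

def dual(t):
    _, kappa, iota, payload = t
    return ('lin', iota, kappa, payload)   # swap the outermost capabilities

END = lin(ABSENT, ABSENT)                  # no capabilities: the type ∅[]

# Encodings of the equality-test types of Example 4.1:
S_enc = lin(PRESENT, ABSENT, 'Int',
            lin(PRESENT, ABSENT, 'Int',
                lin(ABSENT, PRESENT, 'Bool', END)))

# A duality constraint is now plain structural equality:
assert dual(S_enc) == lin(ABSENT, PRESENT, 'Int',
                          lin(PRESENT, ABSENT, 'Int',
                              lin(ABSENT, PRESENT, 'Bool', END)))
```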
2306.17795
Hierarchical Bayesian Regression for Multi-Location Sales Transaction Forecasting
The features in many prediction models naturally take the form of a hierarchy. The lower levels represent individuals or events. These units group naturally into locations and intervals or other aggregates, often at multiple levels. Levels of groupings may intersect and join, much as relational database tables do. Besides representing the structure of the data, predictive features in hierarchical models can be assigned to their proper levels. Such models lend themselves to hierarchical Bayes solution methods that ``share'' results of inference between groups by generalizing over the case of individual models for each group versus one model that aggregates all groups into one. In this paper we show our work-in-progress applying a hierarchical Bayesian model to forecast purchases throughout the day at store franchises, with groupings over locations and days of the week. We demonstrate using the \textsf{stan} package on individual sales transaction data collected over the course of a year. We show how this solves the dilemma of having limited data and hence modest accuracy for each day and location, while being able to scale to a large number of locations with improved accuracy.
John Mark Agosta, Mario Inchiosa
2023-06-30T16:53:10Z
http://arxiv.org/abs/2306.17795v1
# Hierarchical Bayesian Regression for Multi-Location Sales Transaction Forecasting

###### Abstract

The features in many prediction models naturally take the form of a hierarchy. The lower levels represent individuals or events. These units group naturally into locations and intervals or other aggregates, often at multiple levels. Levels of groupings may intersect and join, much as relational database tables do. Besides representing the structure of the data, predictive features in hierarchical models can be assigned to their proper levels. Such models lend themselves to hierarchical Bayes solution methods that "share" results of inference between groups by generalizing over the case of individual models for each group versus one model that aggregates all groups into one. In this paper we show our work-in-progress applying a hierarchical Bayesian model to forecast purchases throughout the day at store franchises, with groupings over locations and days of the week. We demonstrate using the stan package on individual sales transaction data collected over the course of a year. We show how this solves the dilemma of having limited data and hence modest accuracy for each day and location, while being able to scale to a large number of locations with improved accuracy.

## 1 Introduction

The field of hierarchical statistical modeling blossomed once simulation methods such as Markov Chain Monte Carlo (MCMC) made it feasible to solve models without analytic solutions (Gelman et al. (2013)). Approximation methods for multi-level models existed in statistics, but subsequently the field adopted these simulation methods and their Bayesian versions (Gelman and Hill (2007)). Similarly, in AI these techniques appeared in the form of Probabilistic Graphical Models (PGMs) (Koller and Friedman (2009)). These graphical DAGs have spawned a bewildering variety of synonymous terms, going back to the original "Bayesian Belief Networks" (Pearl (2009)), with current interest in causal reasoning introducing yet another term: "Causal Graphical Models". Concurrently, hierarchical modelling is well represented in the Economics literature (Geffner et al. (2022)). The similarity in data structures invites cross-fertilization with Statistical Relational Learning (Getoor and Taskar (2007)). It is our belief that these methods are widely applicable and deserve wider recognition. Most data do not come in flat tables, but in relational tables representing different levels of aggregation. The goal of this paper is to demonstrate the value of modeling hierarchy explicitly in an actual application.

### "Point of Sale" Demand Forecasting

In this paper we present a sales demand model that applies over the fleet of fast-food store locations, to be used to inform real-time customer demand. Fast-food items have a short shelf-life; there is no carry-over from one prediction period to the next. Each location has control of how many products it prepares in anticipation of customer demand. Demand is uncertain, and each store needs to anticipate how many items to set aside in anticipation of the demand in the current period. There is a trade-off: if an item is not sold punctually--within less than an hour--it is discarded. When too many are made, items are discarded at the end of the period. When not enough are made, customers' demand is not satisfied, creating an "opportunity cost" of lost revenue.
### The Current Cloud IT Infrastructure

With the current IT infrastructure, demand is calculated centrally by a separate model for each franchise location that generates an estimate of its demand for the next period. Estimates are transmitted to the "Point of Sale" (POS) terminals at each location by means of the distributed "Cloud" application that manages all data interaction between individual store locations and the firm. The demand model makes up a small part of this recently implemented centralized system.

After the system went into operation and a few months of data had been collected, an initial demand prediction model was built using proprietary automated time-series software. The predictions either tended toward the median value of zero or failed to converge, and our data science group was brought in to formulate a replacement model. A primary design concern is the computational cost of scaling: the intent is to deploy forecasts to thousands of stores in near-real-time.

Our approach is presented here. In Section 2 we detail the problem and the available data; in Section 3 we explain the hierarchical model we developed. Section 4 presents our results, which we consider in the context of the original problem and future approaches in Section 5.3.

## 2 Description of the Problem

Forecasting the demand distribution--the rate at which customers arrive and the quantity of products they request--promises not only to reduce cost due to minimizing waste, but also to increase revenue from anticipating times of unmet demand. Optimizing production to meet uncertain demand is characteristic of many production and inventory problems. When, as in this case, there is no inventory hold-over from one period to the next, this is known specifically as the "news-vendor" problem (Porteus (2002)), a reference to a newsboy or girl who must decide how many papers to buy for the day based on their estimate of how many they will sell.

Sales transactions data consist of a sequence of events in continuous time--a _stochastic renewal process_. Each event consists of a random number of items ordered at a random time. The events appear as a discontinuous function of time. A sense of the arrival-events data and how the arrival rate varies over the course of a day is shown in Figure 1. Such event processes do not constitute a time-series in the conventional sense, confounding conventional time-series forecasting methods, which inevitably fail when applied to such data. The best one can hope for is to approximate the underlying arrival rate as a smoothly varying function and treat that as the time-series of interest.

Given the uncertainty in the arrival process, even with a known, fixed arrival rate there is a finite limit to the savings possible by any continuous function approximation to arrival events. In the simplest case one can imagine arrivals described by a Poisson distribution whose variance equals the arrival rate. In actuality arrivals tend to be "over-dispersed", making the variance even larger. Large variance implies that any prediction based on the arrival rate is likely to be far from the actual value. For the same reason conventional time-series error estimates, such as "MAE", are of the order of the arrival rate parameter, and are of little use for evaluation. To emphasize the point, as the prediction interval for perishable inventory gets smaller, the inherent increase in variability in arrivals leads to unavoidable losses.
To illustrate, if the rate is such that one customer is expected to arrive in a period, there are roughly equal probabilities of no customers arriving, the one customer actually arriving, or more than one customer arriving. Thus at any instant, even with the best prediction, inventory will more likely than not be either under or over the instantaneous predicted amount. So given the intrinsic uncertainty of a stochastic renewal process that best describes demand for items, the best one can hope for is to predict an instantaneous demand rate. We make this concession, apply a rudimentary regression model at the daily forecast level, and focus on modelling the variability at the level of store locations and days; hence the attractiveness of a hierarchical approach. The hierarchical model is more interpretable, and since the upper-level data is more concise by a couple of orders of magnitude, the model can scale to thousands of stores. Short of being able to model the larger in-store production optimization problem, we leave it up to the store manager to decide on the actual production policy as informed by our prediction of the instantaneous rate of demand.

### Forming the Hierarchy

At the lowest level of the hierarchy are the individual sales transaction events, consisting of 1,271,639 transactions at 49 stores over the course of about half a year, for which we have data from 243 days, about 150 of which are usable. We group the daily samples by location and day_of_week, and fit a parameterized arrival rate curve to that location-day to create the upper-level data.

Figure 1: Sales events as time and item count at one location, for one day. The red curve is a \(2^{nd}\)-order fit to the log of the item count.

At the upper level of the hierarchy, the number of samples is reduced significantly, to \(|locations|\times|days|\times|local\_model\_coefficients|\). There are 8649 "location-day" records, one for each day and location, each described by 3 coefficients, a reduction to 2% of the original data size! The schema for the upper-level data is shown in Table 2. The relations among the groupings--location, day-of-week, and transaction--are analogous to those in a relational data schema, with location and day_of_week as common keys. We form two upper-level groupings: by location and by day-of-week. Each grouping could have co-variates; for instance, location could be influenced by regional economic conditions, and day_of_week by holidays. These extensions to the model fit naturally into our formulation and arguably could substantially improve predictions, but here we focus on the basic hierarchical design, and inclusion of upper-level co-variates is not considered.

### Data Sources and Preparation

We were provided by the client with a snapshot of the aforementioned 1,271,639 POS daily sales transaction records over the past year. Each sales transaction event records the time and number of items purchased by a customer with one-minute resolution. The sales record fields are shown in Table 1. This transaction event data is binned on 15-minute intervals to generate counts on a fixed time grid, reducing the transactions to 357,224 records. Intervals when no sales occur are preserved and recorded as zero counts. Binning is necessary to fit an estimate of the instantaneous transaction rate not biased by the timing of transaction events. To create the upper hierarchy levels, the binned data for each day and location were summarized by a curve fit by 3 coefficients.
Figure 2 is an example of a curve for one case. These cases make up the coefficients data set used for the hierarchical model, with fields described in Table 2. We assign records in the coefficients data set randomly with probability \(1/2\) to train and test sets.

Inspection of the data over the year's interval showed a slight increasing demand trend plus a substantial interval of missing data for the middle quarter of the year. Our initial analysis, using the early data for training and the later data for testing, revealed that the data set was not stationary. This approach confounded any test results on the data split and was abandoned.

Characteristic of hierarchical models, our premise is that the variation within and between the two groupings, location and day_of_week, explains the variation in the coefficients data. The boxplot in Figure 3 shows the variation of the first coefficient \(c0\) between and within days of the week. A similar plot for stores is shown in Figure 4. As one might expect, the finer-grained store grouping shows less within-group variation and more between-group variation than among days of the week.

Figure 2: Binned sales events over 15 minute intervals for the same one location and day. The red curve is a \(2^{nd}\)-order fit to the log of the binned item count.

Figure 3: These barplots show the variation in the \(c0\) coefficient for each day of the week. The values for each day vary widely, with some weekly variation peaking on Saturdays.

Figure 4: The corresponding boxplots of the variation in the \(c0\) coefficient for each store location.

## 3 Sales Demand Model

Given the stochastic nature of minute-to-minute demand, most of the value in prediction comes from how this rate depends on "upper level" factors. As mentioned, at the upper levels the data is grouped by location and day of the week. Our stochastic renewal process assumes that variation from minute to minute is not predictable from preceding events. We approximate the instantaneous demand rate function for each day, and model it by three parameters: \[c0:\text{the average daily demand},\] \[c1:\text{the daily trend},\] \[c2:\text{the ``peakedness" of the rate}.\] We arrive at these values by regressing the centered daily sales arrivals using a \(2^{nd}\)-order polynomial--equivalent to an orthonormal basis of the first 3 Legendre polynomials--so one has 3 independent values that describe each day and store, of which there are, as mentioned, 8649 instances. \(c0\) coefficients tend to small positive values, as Figures 3 and 4 show. \(c1\) and \(c2\) coefficients are several orders of magnitude smaller, clustering around zero. This suggests that the daily transaction rate is relatively constant, with minor tendencies to either rise or fall during the day. Each regression might encompass a few hundred points, but some instances are sparse and include just a few points. The ability of hierarchical Bayesian methods to "share" information among units from upper levels improves predictions in these cases. The regression step is computationally inexpensive and, in a production system, would be done at each location, offloading much of the central computing load and making it possible to scale the model to thousands of locations. Certainly more sophisticated local models can be applied, but we use this pre-calculated approximation of the lowest level of the hierarchy to focus attention on the core problem.
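For concreteness, the per-location-day regression step can be sketched as follows. This is our illustrative reconstruction, not the production code: the column names follow Table 1, but the helper name, the pandas-based binning, and the use of log1p to guard against empty bins are our assumptions.

```
import numpy as np
import pandas as pd

def daily_coefficients(df, minutes_open, bin_width=15):
    """Fit the 2nd-order local model for one location-day.

    df: transactions for a single location and day, with columns
    SalesAsMinutes and Quantity (cf. Table 1). Returns (c0, c1, c2).
    """
    edges = np.arange(0, minutes_open + bin_width, bin_width)
    bins = pd.cut(df['SalesAsMinutes'], edges, right=False)
    # Sum item counts per 15-minute bin; empty bins are kept as zeros.
    counts = df.groupby(bins, observed=False)['Quantity'].sum().to_numpy()
    t = edges[:-1] + bin_width / 2.0       # bin midpoints ...
    t = t - t.mean()                       # ... centered, as in the paper
    # Fit a 2nd-order polynomial to the log of the binned counts
    # (log1p, so that empty bins do not blow up the fit).
    c2, c1, c0 = np.polyfit(t, np.log1p(counts), deg=2)
    return c0, c1, c2                      # level, trend, "peakedness"
```

Applied to each of the 8649 location-day groups, a function like this would yield the coefficients data set of Table 2.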
### The Graphical Model

The network diagram in Figure 5 shows the graphical model using plate notation. Each upper level is represented by a separate overlapping plate. The shaded inner plate is solved separately by the regression step and fed into the upper-level model. This approximation can be removed in the future, but it is adequate to demonstrate the benefits of the hierarchy. In the statistical literature this design is called a "two-way random effects model."1

Footnote 1: See Gelman and Hill (2007), p. 245. Despite using it, the author argues against use of the term "random effects", since the Bayesian formulation has no need for the random effects versus fixed effects distinction.

The model equation, with the distribution assumptions, is described by these four equations. The superscripts \(D\) and \(J\) indicate the day and location grouping levels. Any parameter without an explicit prior (e.g. \(\mu\)) is assumed to have a uniform prior. \[y_{dj}=z_{d}^{(D)}+z_{j}^{(J)}+\mu\tag{1}\] \[z_{d}^{(D)}\sim\mathcal{N}(\tau_{d},\sigma^{(D)})\tag{2}\] \[z_{j}^{(J)}\sim\mathcal{N}(\tau_{j},\sigma^{(J)})\tag{3}\] \[y\sim\mathcal{N}(y_{dj},\sigma)\tag{4}\] We run this model separately for each class of coefficients, \((c0_{dj},c1_{dj},c2_{dj})\), by substituting them for the \(y\) variable, to create three versions of the model. The inputs to the model are the \(y\), indexed by location \(j\) and day-of-week \(d\). The model estimates the distribution of the contribution of each day-of-week, \(z^{(D)}\), and location, \(z^{(J)}\). Four global parameter distributions are output for \(\sigma^{(D)}\), \(\sigma^{(J)}\), \(\sigma\), and \(\mu\). The MCMC simulation of the model outputs marginal posteriors over each of the \(4+7+49\) parameters conditioned on the observed data \(y\). The stan file for these equations is shown in Figure 9 in the Appendix.

### Training

The rstan code to run the model is shown in Figure 6, showing the sampling data read as inputs, and the parameters (pars) as outputs. An MCMC training run on the train dataset of 4302 samples completes in less than 1 minute per chain on a current laptop. We observed less-than-linear increases in processing time with larger samples, when running on the combined train and test dataset.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Field name** & **type** & **Description** \\ \hline \hline **LocationNumber** & int & Vendor identifier \\ \hline **SalesDayName** & string & Day of the week \\ \hline **DailyMinutesOpen** & int & Daily operating minutes \\ \hline **DateTimePlaced** & datetime & Transaction timestamp \\ \hline **SalesAsMinutes** & double & Minutes since opening \\ \hline **Quantity** & int & Number of items purchased \\ \hline \end{tabular} \end{table} Table 1: Fields in the sales transaction dataset.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Field name** & **type** & **Description** \\ \hline \hline **LocationNumber** & int & Store identifier \\ \hline **Day** & int & Calendar Day \\ \hline **SalesDayName** & string & Day of the week \\ \hline **Coefficient0** & double & Constant \\ \hline **Coefficient1** & double & Slope \\ \hline **Coefficient2** & double & Curvature \\ \hline \end{tabular} \end{table} Table 2: Fields in the hierarchy dataset.

Runtime for each model was less than 4 minutes. To assure convergence for \(c1\) and \(c2\) values the \(y\) data was rescaled to have mean near 1. Stan returns several diagnostics including tests for sampling convergence, for which our model has no issues.
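As an aside, the generative side of Eqs. (1)-(4) can be simulated directly as a check on the setup; the numeric parameter values below are invented for illustration, and the group means \(\tau\) are taken to be zero.

```
import numpy as np

rng = np.random.default_rng(0)
D, J = 7, 49                                # days of week, store locations
mu, s_d, s_j, s_eps = 0.5, 0.1, 0.3, 0.25   # invented parameter values

z_d = rng.normal(0.0, s_d, size=D)          # day-of-week effects, Eq. (2)
z_j = rng.normal(0.0, s_j, size=J)          # location effects,    Eq. (3)
y_dj = mu + z_d[:, None] + z_j[None, :]     # cell means,          Eq. (1)
y = rng.normal(y_dj, s_eps)                 # observed values,     Eq. (4)

print(y.shape)                              # (7, 49): one value per cell
```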
Among other diagnostics is a log-likelihood measure useful for comparisons among runs:
```
#         mean  se_mean    sd
# lp__ 1410.12    10.18    88
```
Another sanity check is to see the estimates of variance, as one would in conventional analysis of variance. We see that the sum of variances closely approximates the \(y\) variance: \[0.445=\sqrt{\sigma^{(D)^{2}}+\sigma^{(J)^{2}}+\sigma^{2}}\quad\text{compared to}\quad 0.419=\sigma_{y}\] The fraction of explained variation is \[R^{2}=1-\sigma^{2}/\sigma_{y}^{2}=0.638\]

## 4 Evaluation

Despite the MCMC results returning the full sample marginals for all estimated parameters, we use a conventional accuracy measure for prediction of the coefficients. _Bias_ is simply the difference in the means of predicted and actual. _RMSE_ is the square root of the average squared deviation of the differences between predicted and actual. The model predicts the three coefficient values for each of the upper-level units: the 49 locations and 7 days of the week. These can be compared to a baseline prediction of just using the average coefficient value for the hold-out set over those units. Those averages are exactly the means of the boxplots shown in Figures 3 and 4. This is a challenging test; the average hold-out values are themselves an accurate predictor.

Ideally one would want to evaluate improvements at the transactions level and not only at the level of the coefficients fit to the daily transactions. As argued, the variation of events around the instantaneous demand rate is so large it would obscure any improvements to the coefficients that determine the instantaneous demand rate.

First, as a sanity check, we compute _bias_ and _RMSE_ error for "test on train" for \(c0\), by predicting from the values that were trained on. These errors are negligible, as expected:
```
Test on 'train'
$bias -0.0003
$rmse 0.00876
```
Full evaluation tests were run for the hierarchical model, for both the location predictions and day-of-week predictions for the 3 separate components, and compared to baseline averages. The smaller of the errors between the baseline average prediction and the hierarchical prediction is shown in bold in Table 3. Except for the day-of-week prediction for \(c0\) coefficients, the hierarchical model incrementally outperforms the baseline. Observing that using the average value as a predictor obtains an RMSE error of just a few percent, this is an impressive improvement by the hierarchical simulation model and recommends its use in production.

Figure 5: The graphical model network for a model with two upper levels, one for \(J\) locations, the other for \(D\) days of the week. Each node represents a distribution in the generative model. Nodes on a plate are replicated by the plate index. Overlapping plates indicate replication by the product of the indexes. Subscripts for nodes on a plate are implied. Nodes outside any plate are global parameters. Sales transaction data \(x\) is modelled separately as 3 component parameters, implying 3 separate models for \(y\).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Coefficient** & **Group** & **Average** & **Hierarchy** \\ \hline \hline c0 & Location & 0.0398 & **0.0381** \\ \hline c0 & Day-Of-Week & **0.0277** & 0.252 \\ \hline c1 & Location & 2.860e-4 & **2.838e-4** \\ \hline c1 & Day-Of-Week & 6.61e-5 & **6.46e-5** \\ \hline c2 & Location & 1.012e-06 & **0.980e-6** \\ \hline c2 & Day-Of-Week & 2.858e-7 & **2.614e-7** \\ \hline \end{tabular} \end{table} Table 3: RMSE Errors.
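For reference, the two error measures defined above amount to the following trivial sketch, where pred and actual stand for the per-unit coefficient predictions and the corresponding hold-out values.

```
import numpy as np

def bias(pred, actual):
    """Difference of the means of predicted and actual values."""
    return float(np.mean(pred) - np.mean(actual))

def rmse(pred, actual):
    """Root of the average squared prediction error."""
    err = np.asarray(pred) - np.asarray(actual)
    return float(np.sqrt(np.mean(err ** 2)))
```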
```
model <- stan_model("regression_coef.stan")
rc_fit <- sampling(model,
                   data = c("N", "D", "J", "d_id", "j_id", "y"),
                   iter = 4000, chains = 4)
params <- summary(rc_fit,
                  pars = c("mu", "s_d", "s_j", "s_elipson",
                           "z_d", "z_j"))$summary
```

## 5 Discussion

### Alternative Approaches

Popular today are numerous deep-learning based time-series forecasting tools such as Kuznetsov and Mariet (2018), which considers whether multiple series should be learnt in a common model or separately. Although this approach may approximate the ability to generalize over multiple (e.g. the 324 time-of-day location) series, it has none of the advantages of the method presented. We believe that such a method invests vast effort in learning from a stochastic event series, where simple smoothing methods suffice, at the expense of analyzing the effects of groupings and the contextual variables on the more parsimonious upper-level data where interesting effects occur. It would scale to thousands of locations, but only at comparatively large computational cost. An "off-the-shelf" option for hierarchical modelling is to use a current spatial-temporal forecasting tool instead, such as FOST. FOST combines traditional and ML-based temporal modeling with spatial modeling via graph neural networks. FOST requires two data inputs: time series data and "spatial" data in the form of a list of directed graph edges and their weights. Edges can represent truly spatial data, such as distances or travel times. Additionally, edges can represent relationships such as correlations between the items represented by graph nodes. If the nodes represent products for sale, then edges can represent cannibalization or halo effects. A halo effect is when reducing the price of one product increases sales of a related product; a cannibalization effect is the opposite. As for spatial effects, it depends on how close together stores are. Thus we could create a graph of locations, with edges weighted by distance or travel time. Alternatively, we could create a graph with edges weighted by the correlation of sales between pairs of stores. Effects among locations expressed on a graph structure represent a level of model complexity not captured by a hierarchy. Relations in a hierarchy are mediated by the presence of common parents, not by pair-wise relations. The choice of model depends on the relevance of the relations that would be expressed in a graph, the feasibility of predicting from event process data, and scalability to the number of locations. Figure 8: Predicted _coefficient1_ versus location means from the actual test set. Figure 6: rstan code to run the model. Figure 7: Predicted _coefficient0_ versus location means from the actual test set. Each point is one location. The red line shows \(x=y\). ### Discussion The apparent non-stationarity of the current dataset has been managed by ignoring the temporal separation of test and train splits. Arguably this is of minor consequence, because the groupings by day-of-week and location averaged over the year do not relate to each other temporally. We have many ideas for extending the current model, which at this point just demonstrates the feasibility of this approach. Extensions of the Graphical Model could bring in temporal predictors to see if they do have any effect. One could test whether the conditional independence of day-of-week and location is valid, but a priori there is little reason to think it is not.
Most interestingly, one could introduce contextual variables at the upper levels for economic, demographic, weather, holiday, spatial, and other effects that are presumed determinants of demand. Simulation methods such as MCMC do not always converge in fixed time, so a more reliable inference approximation method would be preferable. Using Expectation Propagation in place of MCMC, as for instance implemented by _infer.net_ Wang and Wand (2011), is worth trying. By only forecasting recorded transactions, we overlook demand from customers turned away because no items were available for sale. These events are not captured. This biases estimates downward. An outstanding challenge in forecasting is to predict unmet demand, so one would know when a sale was missed because items were not available. One possible way to address this is to model the censoring process. We assume a distribution for transactions, say Poisson, or a more general renewal distribution, then "extrapolate" the distribution to estimate extreme values not seen in the data. Alternatively, if one had both sales transaction and production amount data, then the full demand distribution could be estimated. Then one could tell when production limits sales (e.g. when a store is sold out). So by analyzing sales data when stores are not sold out, the full demand curve could be fit to estimate unmet sales and remove this source of forecast bias. An obvious omission in this model formulation is the lack of a "supply" model to complement the "demand" model. A comprehensive model would recommend the optimal production schedule for perishable items; for that we would need a corresponding production model to complement this demand model. The value of improvements due to the model cannot be evaluated without also observing and modeling real-time production, in which case one would also need historical data on actual production. ### Conclusion We've demonstrated promising results on an initial framework for a hierarchical model of sales transactions over groupings of locations and days. As in many applied modelling problems, there are natural groupings of the samples that can be expressed hierarchically, avoiding the need to pre-process the data to de-normalize it, such as is done by "one-hot" encoding. Such problems translate directly into Bayesian Graphical Models, for which there are now mature solution tools. The advantages of formulating hierarchical models as Graphical Models are several. 1. Groups with limited data benefit from inference from similar neighboring groups, increasing the effective sample size of groups. The degree of interaction is a consequence of the sample size within groups. 2. Co-variates can be applied directly to the relevant upper levels. 3. Models may be federated at lower levels for distributed processing. 4. The full value of Bayesian methods in terms of explainability and the derivation of true predictive distributions can be exploited. ## Acknowledgements Without the express help of my data science colleagues, and numerous sales, marketing, and engineering staff both in-house and at the client, none of this would be possible.
2309.08030
AV2Wav: Diffusion-Based Re-synthesis from Continuous Self-supervised Features for Audio-Visual Speech Enhancement
Speech enhancement systems are typically trained using pairs of clean and noisy speech. In audio-visual speech enhancement (AVSE), there is not as much ground-truth clean data available; most audio-visual datasets are collected in real-world environments with background noise and reverberation, hampering the development of AVSE. In this work, we introduce AV2Wav, a resynthesis-based audio-visual speech enhancement approach that can generate clean speech despite the challenges of real-world training data. We obtain a subset of nearly clean speech from an audio-visual corpus using a neural quality estimator, and then train a diffusion model on this subset to generate waveforms conditioned on continuous speech representations from AV-HuBERT with noise-robust training. We use continuous rather than discrete representations to retain prosody and speaker information. With this vocoding task alone, the model can perform speech enhancement better than a masking-based baseline. We further fine-tune the diffusion model on clean/noisy utterance pairs to improve the performance. Our approach outperforms a masking-based baseline in terms of both automatic metrics and a human listening test and is close in quality to the target speech in the listening test. Audio samples can be found at https://home.ttic.edu/~jcchou/demo/avse/avse_demo.html.
Ju-Chieh Chou, Chung-Ming Chien, Karen Livescu
2023-09-14T21:07:53Z
http://arxiv.org/abs/2309.08030v4
AV2Wav: Diffusion-based Re-synthesis from continuous self-supervised features for audio-visual speech enhancement ###### Abstract Speech enhancement systems are typically trained using pairs of clean and noisy speech. In audio-visual speech enhancement (AVSE), there is not as much ground-truth clean data available; most audio-visual datasets are collected in real-world environments with background noise and reverberation, hampering the development of AVSE. In this work, we introduce AV2Wav, a resynthesis-based audio-visual speech enhancement approach that can generate clean speech despite the challenges of real-world training data. We obtain a subset of nearly clean speech from an audio-visual corpus using a neural quality estimator, and then train a diffusion model on this subset to generate waveforms conditioned on continuous speech representations from AV-HuBERT with noise-robust training. We use continuous rather than discrete representations to retain prosody and speaker information. With this vocoding task alone, the model can perform speech enhancement better than a masking-based baseline. We further fine-tune the diffusion model on clean/noisy utterance pairs to improve the performance. Our approach outperforms a masking-based baseline in terms of both automatic metrics and a human listening test and is close in quality to the target speech in the listening test. Audio samples can be found at [https://home.ttic.edu/~jcchou/demo/avse/avse_demo.html](https://home.ttic.edu/~jcchou/demo/avse/avse_demo.html).

Ju-Chieh Chou, Chung-Ming Chien, Karen Livescu

Toyota Technological Institute at Chicago

Audio-visual speech enhancement, diffusion models

## 1 Introduction Speech enhancement aims to improve the audio quality and intelligibility of noisy speech. Audio-visual speech enhancement (AVSE) uses visual cues, specifically video of the speaker, to improve the performance of speech enhancement. Visual cues can provide auxiliary information, such as the place of articulation, which is especially useful when the signal-to-noise ratio is low. Conventionally, audio-visual speech enhancement is formulated as a mask regression problem. Given a noisy utterance and its corresponding video, masking-based models attempt to recover the clean speech by multiplying the noisy signal with a learned mask [1, 2, 3, 4]. However, some signals are difficult or even impossible to reconstruct via masking. Masking operations tend to allow noise to bleed through, and they cannot effectively address unrecoverable distortion, such as frame dropping. Some recent work [5, 6] has proposed to formulate AVSE as a re-synthesis problem. These approaches learn discrete audio-visual representations from clean speech and train models to generate the discrete representations of the clean speech given the corresponding noisy speech. An off-the-shelf vocoder trained on clean speech is then used to produce clean speech signals. This formulation can better handle unrecoverable distortion and synthesize speech with better audio quality. However, such discrete representations often lose much of the speaker and prosody information [7]. Another challenge in AVSE is the suboptimal audio quality of audio-visual datasets which, unlike studio-recorded speech-only datasets, are mostly collected "in the wild" with varying recording environments and natural noise. Using these suboptimal data as clean data to train enhancement models leads to suboptimal results.
In this work, we propose AV2Wav, a resynthesis-based approach to AVSE that addresses the challenges of noisy training data and lossy discrete representations (see Fig 1). Instead of discrete representations, we use continuous features from a pre-trained noise-robust AV-HuBERT [8], a self-supervised audio-visual speech model, to condition a diffusion-based waveform synthesizer [9]. The noise-robust training enables AV-HuBERT to generate similar representations given clean or mixed (containing noise or a competing speaker) speech. Several recent AVSE approaches have used AV-HuBERT, but they have either done so for mask prediction [2], for synthesis of a single speaker's voice [6], or with access to transcribed speech for fine-tuning [10]. In addition, we train the synthesizer on a nearly clean subset of an audio-visual dataset filtered by a neural quality estimator (NQE) to exclude low-quality utterances. Finally, we further fine-tune the waveform synthesizer on clean/noisy utterance pairs and studio-recorded clean speech. The contributions of this work include: (i) the AV2Wav framework for re-synthesis based AVSE conditioned on noise-robust AV-HuBERT representations; (ii) a demonstration that an NQE can be used for training data selection to improve AVSE performance; and (iii) a study on the effect of fine-tuning diffusion-based waveform synthesis on clean/noisy data and studio-recorded data.

Figure 1: Overview of our approach. We obtain a nearly clean subset of the audio-visual dataset using a neural quality estimator and use noise-robust AV-HuBERT to encode the audio-visual speech. These representations are used as conditioning input to a diffusion-based waveform synthesizer.

## 2 Method ### Background: AV-HuBERT AV-HuBERT [8, 11] is a self-supervised model trained with masked prediction given speech and lip motion video sequences. The model is trained to predict a discretized label (a cluster assignment) for a masked region of the audio feature sequence \(Y^{a}_{1:L}\!\in\!\mathbb{R}^{F_{s}\times L}\) and video sequence \(Y^{v}_{1:L}\!\in\!\mathbb{R}^{F_{l}\times L}\), with \(L\) frames and feature dimensions \(F_{s}\) and \(F_{l}\). The resulting model \(\mathcal{M}\) produces the audio-visual representation \[f^{av}_{1:L}=\mathcal{M}(Y^{a}_{1:L},Y^{v}_{1:L}) \tag{1}\] AV-HuBERT applies modality dropout during training [12], i.e., drops one of the modalities with some probability, to learn modality-agnostic representations, \[\begin{split} f^{a}_{1:L}=\mathcal{M}(Y^{a}_{1:L},\mathbf{0}),\\ f^{v}_{1:L}=\mathcal{M}(\mathbf{0},Y^{v}_{1:L}).\end{split} \tag{2}\] Some versions of AV-HuBERT use noise-robust training [11], where an interferer (noise or competing speech) is added while the model must still predict cluster assignments learned from clean speech. In this case the model outputs the representation \[f^{avn}_{1:L}=\mathcal{M}(\mathrm{synth}(Y^{a}_{1:L},Y^{n}_{1:L}),Y^{v}_{1:L}), \tag{3}\] where \(\mathrm{synth}(\cdot)\) is a function that synthesizes noisy speech given noise \(Y^{n}_{1:L}\) and speech \(Y^{a}_{1:L}\). Noise-robust AV-HuBERT is trained to predict the same cluster assignment given \(f^{a}\), \(f^{v}\), \(f^{av}\) and \(f^{avn}\), in order to learn modality- and noise-invariant features. As AV-HuBERT already learns to remove noise through the noise-invariant training, it is a natural choice as a conditioning input to our AVSE model. ### Diffusion waveform synthesizer Our diffusion-based waveform synthesizer is based on WaveGrad [9].
We changed the up-sampling rate to generate waveforms from the AV-HuBERT features. For a speech waveform \(x_{0}\in\mathbb{R}^{L_{w}}\) with length \(L_{w}\), the diffusion forward process is formulated as a Markov chain that generates \(T\) latent variables \(x_{1},...,x_{T}\) with the same dimensionality as \(x_{0}\), \[q(x_{1},x_{2},...,x_{T}|x_{0})=\prod_{t=1}^{T}q(x_{t}|x_{t-1}), \tag{4}\] where \(q(x_{t}|x_{t-1})\) is a Gaussian distribution: \[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}\,x_{t-1},\beta_{t}\mathbf{I}) \tag{5}\] with a pre-defined noise schedule \(0<\beta_{1}<\beta_{2}<\cdots<\beta_{T}<1\). The idea is to gradually add noise to the data distribution, until \(p(x_{T})\) is close to a multivariate Gaussian distribution with zero mean and unit variance: \(p(x_{T})\approx\mathcal{N}(x_{T};0,\mathbf{I})\). We can also directly sample from \(q(x_{t}|x_{0})\) by reparameterization, \[q(x_{t}|x_{0})=\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}\,x_{0},(1-\bar{\alpha}_{t})\mathbf{I}) \tag{6}\] where \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\). The reverse process is parameterized by a neural network \(\epsilon_{\theta}(\cdot)\). We use the \(\epsilon\)-prediction approach, i.e. predicting the added Gaussian noise, proposed in [13], and the continuous noise level conditioning of [9], to predict \(\epsilon\) conditioned on an AV-HuBERT feature segment \(c\in\mathbb{R}^{F\times S}\) with feature dimension \(F\) and length \(S\). We first uniformly sample the start \(l\) of a segment of AV-HuBERT feature frames with length \(S\), \[l=\mathrm{Uniform}(1,L-S+1) \tag{7}\] and then choose the conditioning segment among the four representations, \[c=\begin{cases}f^{av}_{l:l+S-1}&\text{with probability }p_{av}\\ f^{a}_{l:l+S-1}&\text{with probability }p_{a}\\ f^{v}_{l:l+S-1}&\text{with probability }p_{v}\\ f^{avn}_{l:l+S-1}&\text{with probability }p_{avn}\end{cases} \tag{8}\]
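As a concrete illustration of the training objective above, the following sketch (our own, not the authors' released code) performs one \(\epsilon\)-prediction step for a generic conditional denoiser; `eps_model` is a stand-in for the WaveGrad-style network, and a discrete step \(t\) is drawn for simplicity rather than WaveGrad's continuous noise-level sampling.

```python
import torch

def diffusion_training_step(eps_model, x0, c, alpha_bar):
    """One epsilon-prediction training step (Eqs. 4-8, simplified).

    eps_model : stand-in network predicting the added noise from
                (x_t, noise level, condition c); not the authors' code
    x0        : clean waveform batch, shape (B, L_w)
    c         : AV-HuBERT conditioning segment, shape (B, F, S)
    alpha_bar : 1-D tensor of cumulative products of alpha_t, length T
    """
    B = x0.shape[0]
    t = torch.randint(0, len(alpha_bar), (B,))     # discrete step, for simplicity
    a = alpha_bar[t].unsqueeze(-1)                 # \bar{alpha}_t per example
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps   # reparameterized Eq. (6)
    # Regress the injected noise, conditioning on the continuous noise level.
    return torch.nn.functional.mse_loss(eps_model(x_t, a.sqrt(), c), eps)
```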
Noise interferers are sampled from the same noise datasets as in training (but excluding the files used in training/dev). The signal-to-noise ratio (SNR) is uniformly sampled as in [19]. As our baseline, we use the open-sourced masking-based baseline trained on the AVSE dataset provided in [19]. ### Architecture and training We use the WaveGrad [9] architecture, but adjust the upsampling rate sequence to (5,4,4,2,2,2), resulting in a total upsampling rate of 640, which can convert the 25Hz AV-HuBERT features to a 16000 Hz waveform. We use the features from the last layer of noise-robust AV-HuBERT-large (specifically the model checkpoint "Noise-Augmented AV-HuBERT Large") [11, 8]. In training, we uniformly sample \(24\) frames from the AV-HuBERT features of each utterance and apply layer normalization to them [23]. In the first stage, we train AV2Wav with \((p_{av},p_{a},p_{v},p_{awn})=(1/3,1/3,1/3,0)\) on the filtered dataset (LRS3 + VoxCeleb2) without adding interferers for 1M steps. We use the Adam optimizer [24] with a learning rate of \(0.0001\) and a cosine learning rate scheduler for 10k warm-up steps using a batch size of \(32\). In the second stage of training, we fine-tune the model on audio-visual clean/noisy speech pairs with \((p_{av},p_{a},p_{v},p_{awn})=(0,0,0,1)\) for 500k steps. To understand the effect of fine-tuning we also fine-tune AV2Wav on VCTK [25], which is a studio-recorded corpus, with \((p_{av},p_{a},p_{v},p_{awn})=(0,1,0,0)\). ### Evaluation Signal-level metrics are not ideal for generative models because perceptually similar generated and reference speech may be dissimilar on the signal level. In addition to objective metrics, we use subjective human-rated comparison mean opinion scores (CMOS) on a scale of +3 (much better), +2 (better), +1 (slightly better), 0 (about the same), -1 (slightly worse), - 2 (worse), and -3 (much worse) as in [26]. We sample 20 pairs for each system and collect at least 8 ratings for each utterance pair. To help listeners better distinguish the quality, we only use utterances longer than 4 seconds and provide the transcription. We use the same instructions as in [27]. Listeners are English (not necessarily native) speakers. 
### Results #### 3.4.1 The effect of data filtering To study the effect of data filtering using the NQE, we compare the following models: (1) _AV2Wav-23_: model trained on the filtered subset with P-SI-SDR above \(23\) (616 hours); (2) _AV2Wav-23-long_: same as (1) but training the model for 2M steps with a batch size of \(64\); (3) _AV2Wav-25_: model trained on the filtered subset with P-SI-SDR above \(25\) (306 hours); (4) _AV2Wav-random_: model trained on 306 hours randomly sampled from LRS3 + VoxCeleb2. We keep the data size of _AV2Wav-random_ similar to _AV2Wav-25_ to isolate the effectiveness of filtering. \begin{table} \begin{tabular}{l|c c} \hline & WER \(\downarrow\) & P-SI-SDR \(\uparrow\) \\ \hline \multicolumn{3}{c}{**Target re-synthesis**} \\ \hline **(1)** Target & 6.72 & 21.56 \\ \hline **(2)** AV2Wav-23 & 3.15 & 21.36 \\ \hline **(3)** AV2Wav-23-long & **2.71** & **22.24** \\ \hline **(4)** AV2Wav-25 & 3.00 & 21.76 \\ \hline **(5)** AV2Wav-random & 2.94 & 19.79 \\ \hline \multicolumn{3}{c}{**Mixed speech**} \\ \hline **(6)** Mixed input & 48.45 & 0.12 \\ \hline **(7)** Baseline [19] & 26.40 & 13.79 \\ \hline **(8)** AV2Wav-23 & 18.17 & 19.41 \\ \hline **(9)** AV2Wav-23-long & **17.20** & **20.09** \\ \hline **(10)** AV2Wav-25 & 17.36 & 20.07 \\ \hline **(11)** AV2Wav-random & 19.69 & 14.57 \\ \hline \multicolumn{3}{c}{**After fine-tuning**} \\ \hline **(12)** AV2Wav-23-avse & **15.85** & 20.65 \\ \hline **(13)** AV2Wav-23-long-avse & 16.76 & 21.21 \\ \hline **(14)** AV2Wav-23-vctk & 16.71 & **21.79** \\ \hline **(15)** AV2Wav-23-avse-vctk & 16.90 & 19.96 \\ \hline \multicolumn{3}{c}{**Fast inference**} \\ \hline **(16)** AV2Wav-cont-100 & 17.46 & **18.43** \\ \hline **(17)** AV2Wav-ddim-100 & 16.21 & 18.17 \\ \hline **(18)** AV2Wav-ddim-50 & **15.73** & 18.15 \\ \hline **(19)** AV2Wav-ddim-25 & 16.41 & 17.58 \\ \hline \end{tabular} \end{table} Table 1: Objective evaluation in terms of WER (%) and predicted SI-SDR (P-SI-SDR). **Target re-synthesis** refers to re-synthesis of the target (clean) speech using AV2Wav. The remaining parts (**Mixed speech**, **After fine-tuning**, **Fast inference**) take mixed speech as input and synthesize the predicted clean speech (performing AVSE). **Mixed speech** refers to the first stage of AV2Wav training. **After fine-tuning** refers to further fine-tuning the synthesizer on AVSE, VCTK. The model name is given as AV2Wav-{filter criteria}-{fine-tuned dataset}. **Fast inference** compares fast inference approaches, using _AV2Wav-23-long-avse_. The objective evaluation results can be found in Table 1 and the subjective evaluation in Table 2. _AV2Wav-random_ (Table 1 line **11**) has a much lower P-SI-SDR than _AV2Wav-25_ (**10**) on mixed speech. Since we use continuous features, we hypothesize that the representations encode more low-level information than discrete tokens, including the background noise. As a result, _AV2Wav-random_ learns to generate noisy waveforms from the AV-HuBERT representations. #### 3.4.2 Fine-tuning on AVSE and/or VCTK We fine-tune the waveform synthesizer on AVSE and/or VCTK. The objective evaluation can be found in Table 1. We can see that fine-tuning on AVSE (_AV2Wav-23-avse_ (**12**)) or VCTK (_AV2Wav-23-vctk_ (**14**)) provides some improvement in WER and P-SI-SDR. However, when fine-tuning the model on both datasets, we see a smaller improvement than fine-tuning on one of the datasets.
The reason could be that the two datasets have very different distributions, so training on them together does not provide additional improvement. For the subjective experiments in Table 2, after fine-tuning on AVSE (_AV2Wav-23-long-avse_ (**13**)), the CMOS improves slightly over the model trained solely on the near-clean subset (_AV2Wav-23-long_ (**3**)). #### 3.4.3 Comparing to the masking-based baseline We compare our model to the masking-based baseline in terms of WER in Table 3. Our model outperforms the baseline for most speech and noise interferers. It is slightly worse than the baseline when speech interferers are introduced at a lower SNR. In such cases, the AV-HuBERT model cannot recognize the target speech from the mixed speech and the lip motion sequence. We also find that our approach combines well with the masking-based baseline. By first applying the baseline and then re-synthesizing the waveform using AV2Wav given the output of the baseline, we observe a larger improvement for speech interferers, especially at lower SNR, than with either model alone. #### 3.4.4 Comparing audio quality to target speech From the target re-synthesis experiments in Table 1, we can see that the re-synthesized speech is generally more intelligible (has lower WERs) than the target speech, while maintaining similar estimated audio quality (similar P-SI-SDR). From the subjective evaluation (Table 2), the re-synthesized speech (_AV2Wav-23-long re-syn_) is also on par with the original target in terms of CMOS. Both show that AV2Wav can re-synthesize natural-sounding speech. However, when comparing the enhanced speech (_AV2Wav-23-long-avse_) with the target speech (_target_) in the listening test (Table 2), our model is still slightly worse. #### 3.4.5 Fast inference A major disadvantage of diffusion models is their slow inference, which makes them difficult to use for real-time applications. As we train the model using continuous noise levels, we can use fewer steps at different noise levels, as in [9]. Empirically, we find that \(100\) steps can provide good-quality speech. We also compare with DDIM [28], a sampling algorithm for diffusion models that uses fewer steps of non-Markovian inference. We can see that the WER is similar when using the fast inference algorithms compared to the full number of inference steps (_AV2Wav-23-long_ line **9**). The P-SI-SDR, however, is worse than with 1000 inference steps. From informal listening, we find that _AV2Wav-cont-100_ tends to miss some words while _AV2Wav-ddim_ tends to produce some white noise in the background. ## 4 Conclusion AV2Wav is a simple framework for AVSE based on noise-robust AV-HuBERT and a diffusion-based waveform synthesizer. By training on a filtered subset of relatively clean speech, along with noise-robust AV-HuBERT, our AV2Wav model can learn to perform speech enhancement without being explicitly trained to de-noise. Our model outperforms a masking-based baseline in a human listening test. One setting where AV2Wav performs worse than the masking-based baseline is with low-SNR speech interferers, when the pre-trained AV-HuBERT may fail to identify the corresponding speech, suggesting that further fine-tuning AV-HuBERT using noise-robust training on the target domain might further boost the performance.
\begin{table} \begin{tabular}{c c c} \hline \hline Tested & Other & CMOS \\ \hline AV2Wav-23-long-avse & Baseline [19] & 2.22 \(\pm\) 0.16 \\ \hline AV2Wav-23-long-avse & AV2Wav-23-long & 0.21 \(\pm\) 0.18 \\ \hline AV2Wav-23-long-avse & Target & -0.45 \(\pm\) 0.24 \\ \hline AV2Wav-23-long re-syn & Target & -0.06 \(\pm\) 0.22 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison mean opinion scores (CMOS) for several model comparisons. A positive CMOS indicates that the ”Tested” model is better than the ”Other” model. The ”re-syn” model simply re-synthesizes the target (clean) signal. \begin{table} \begin{tabular}{c|r r|r r|r} \hline \hline **interferer** & \multicolumn{2}{c|}{**speech**} & \multicolumn{2}{c|}{**noise**} & \multicolumn{1}{c}{Avg.} \\ \hline **SNR (dB)** & **[-15,5]** & **[-5,5]** & **[-10,0]** & **[0, 10]** & \\ \hline mixed speech (input) & 102.4 & 64.4 & 24.8 & 7.6 & 48.4 \\ \hline Baseline [19] & 40.3 & 24.1 & 30.6 & 11.6 & 26.4 \\ \hline VisualVoice [4] & 38.9 & 22.8 & N/A & N/A & N/A \\ \hline AV2Wav-23-long-avse & 43.0 & 11.7 & 9.8 & 5.3 & 16.8 \\ \hline Baseline [19] & 19.8 & 11.4 & 17.2 & 7.4 & **13.9** \\ + AV2Wav-23-long-avse & & & & & \\ \hline \hline \end{tabular} \end{table} Table 3: WER (%) for each interferer type (speech, noise) and SNR range. Baseline + AV2Wav denotes that the speech is processed by the baseline first, then re-synthesized using AV2Wav. The SNR ranges are selected based on the pilot evaluation in [19]. ## 5 Acknowledgement This work is partially supported by AFOSR grant FA9550-18-1-0166.
2309.17138
Speckled-speckle field as a resource for imaging techniques
Correlated states of light, both classical and quantum, can find useful applications in the implementation of several imaging techniques. Among the employed sources, pseudo-thermal states, generated by the passage of a laser beam through a diffuser, represent the standard choice. To produce light with a higher level of correlation, in this work we consider and characterize the speckled-speckle field obtained with two diffusers using both a numerical simulation and an experimental implementation. In order to discuss the potential usefulness of super-thermal light in imaging protocols, we analyze the behavior of some figures of merit, namely the contrast, the signal-to-noise ratio and the image resolution. The obtained results clarify the possible advantages offered by this kind of light, and at the same time better emphasize the reasons why it does not outperform pseudo-thermal light.
Silvia Cassina, Gabriele Cenedese, Alessia Allevi, Maria Bondani
2023-09-29T11:04:03Z
http://arxiv.org/abs/2309.17138v2
# Speckled-speckle field as a resource for imaging techniques ###### Abstract Correlated states of light, both classical and quantum, can find useful applications in the implementation of several imaging techniques, such as ghost imaging and differential ghost imaging. Among the classically-correlated states, pseudo-thermal states, generated by passing a laser beam through a diffuser, represent the standard choice. To produce light states with a higher level of correlation, a sequence of two or more diffusers can be used. In this work we describe and characterize the super-thermal states obtained with two diffusers using both a numerical simulation and an experimental implementation. In order to quantify the possible advantages in imaging protocols of super-thermal light over pseudo-thermal one, we analyze the behavior of some figures of merit, namely contrast and signal-to-noise ratio, as functions of the size of a binary object to be imaged and the number of images required by the protocol. The obtained results are a promising step towards the exploitation of super-thermal light as a valid alternative to pseudo-thermal one. Super-thermal light, Correlation functions, Imaging techniques ## I Introduction The development of imaging techniques is currently attracting strong interest due to the potential applications to medicine and biology [1; 2; 3]. One of the main issues in these fields is to prevent the illuminating light from damaging the samples to be imaged, or from altering chemical and biological photosensitive processes [4; 5]. To this aim, two main solutions have been adopted over the years. One possibility is to operate in spectral regions that do not affect the object [6], while the other is to exploit correlated bipartite states of light, in which the field that illuminates the sample is strongly attenuated so as not to damage it, while the other arm, used to reconstruct the image, is much more intense but never interacts with the object [7]. Among the correlation-based techniques used so far we mention ghost imaging (GI), in which two spatially-correlated beams are used [8; 9]. One beam illuminates the object and is then sent to a detector without spatial resolution (bucket detector), while the other is addressed directly to a spatial-resolving detector, without interacting with the object. Neither of the two beams separately contains information on the absorption profile of the object, which can be retrieved by exploiting the intensity correlations [10]. From the historical point of view, GI was at first experimentally implemented with entangled states of light [11; 12; 13], but it was later demonstrated that the main resource required by the technique is the existence of correlations, so that classically-correlated states of light can also be used [14; 15; 16; 17; 18; 19]. Many experiments have been performed with pseudo-thermal light, which is obtained by sending a laser beam through a moving diffuser, such as a rotating ground-glass disk [20]. At variance with entangled states of light, which exhibit perfect photon-number correlations but are quite fragile, classically-correlated states are more robust against losses, such as those due to a lower-than-1 detection efficiency [21]. Nevertheless, the use of nonclassical states results in a better image quality, as quantified by some figures of merit, such as the visibility and the signal-to-noise ratio (SNR) [22]. Some years ago Ferri et al.
[21] proposed the so-called "differential GI" (DGI) exploiting pseudo-thermal light, which addresses the problem of reconstructing small or faint objects, a case in which conventional GI fails because of the huge number of acquisitions required to reach a sufficiently high value of SNR. More recently, Losero et al. [10] applied the method to quantum states of light, demonstrating that also in this case DGI beats GI, for any value of losses and light brightness. In addition to pseudo-thermal light divided at a beam splitter (BS) and twin-beam states, which are both thermal in the photon-number statistics [23; 24; 25], over the years other kinds of correlated states have been proposed and tested in order to improve the values of the figures of merit of the image reconstruction [26; 27; 28; 29; 30]. In particular, it has been shown that light states more correlated than thermal ones can lead to higher visibility, higher contrast or higher SNR [31; 32]. Among them, it is worth mentioning that the statistics of frequency-doubled thermal states is definitely super-thermal, exhibiting higher-than-thermal intensity fluctuations [33; 34]. Super-thermal statistics can also be obtained by using a sequence of diffusers instead of just a single one [35; 36; 37; 38; 39]. In particular, some of us have experimentally characterized, in the mesoscopic intensity regime and by means of photon-number-resolving detectors [40], the photon-number statistics of the light obtained by sending a laser beam through a sequence of two rotating ground-glass disks [41]. In this work we investigate the potential of super-thermal light for imaging applications by considering both GI and DGI schemes. After a preliminary characterization of the statistical properties, we prove that the exploitation of DGI instead of the standard GI yields better values of SNR, and we show that both GI and DGI images obtained with super-thermal light exhibit higher values of contrast than those achieved with pseudo-thermal light. Moreover, we discuss the advantages and limitations of our scheme by studying the dependence of the figures of merit on both the size of a binary object and the number of acquired images. The good quality of the results and the agreement among the theoretical model, the numerical simulation, and the experimental outcomes suggest a more practical exploitation of this kind of light, and encourage the use of longer sequences of diffusers or more complex systems capable of generating new types of super-thermal light. ## II Theory of Speckled-Speckle Field A light field with super-thermal statistics can be generated by using a sequence of diffusers. It is well known that when a coherent light beam impinges on a diffuser it produces a speckle field, composed of many coherence areas (speckles) that are the result of the constructive interference of the radiation coming from the small random scattering centers within the illuminated area of the diffuser [42]. The statistics of the light corresponding to this speckle field is the thermal distribution \(p_{\rm th}(I)=(1/\langle I\rangle)\exp(-I/\langle I\rangle)\), where \(\langle I\rangle\) is the mean intensity. When a laser beam impinges on a rotating ground-glass disk (GD1), at whose output a number \(\mu_{f}\) of speckles is selected by a pin-hole (PH) and sent to a second rotating ground-glass disk (GD2), a speckled-speckle field is obtained.
If \(\mu_{s}\) speckles of this speckled-speckle field are selected by a second pin-hole, the light intensity is characterized by the super-thermal statistics [42] \[p_{\rm sth}(I)=\frac{2(\mu_{f}\mu_{s})^{\frac{(\mu_{f}+\mu_{s})}{2}}}{\langle I\rangle\Gamma(\mu_{f})\Gamma(\mu_{s})}\left(\frac{I}{\langle I\rangle}\right)^{\frac{(\mu_{f}+\mu_{s}-2)}{2}}K_{|\mu_{f}-\mu_{s}|}\left(2\sqrt{\mu_{f}\mu_{s}\frac{I}{\langle I\rangle}}\right), \tag{1}\] where \(\langle I\rangle\) is the mean intensity of the speckled-speckle field, \(\Gamma\) is the Gamma function, and \(K_{|\mu_{f}-\mu_{s}|}\) is the modified Bessel function of the second kind of order \(|\mu_{f}-\mu_{s}|\). The \(q\)-th moments of this distribution are found to be \[\langle I^{q}\rangle=\left(\frac{\langle I\rangle}{\mu_{f}\mu_{s}}\right)^{q}\frac{\Gamma(\mu_{f}+q)\Gamma(\mu_{s}+q)}{\Gamma(\mu_{f})\Gamma(\mu_{s})}, \tag{2}\] and in particular the second-order moment is \[\langle I^{2}\rangle=\langle I\rangle^{2}\frac{(\mu_{f}+1)(\mu_{s}+1)}{\mu_{f}\mu_{s}}. \tag{3}\] This is useful to calculate the second-order autocorrelation function of the intensity, that is \[g^{2}(I)=\frac{\langle I^{2}\rangle}{\langle I\rangle^{2}}=\frac{(\mu_{f}+1)(\mu_{s}+1)}{\mu_{f}\mu_{s}}=\left(1+\frac{1}{\mu_{f}}\right)\left(1+\frac{1}{\mu_{s}}\right). \tag{4}\] We notice that \(g^{2}(I)\) is symmetric in the numbers of modes \(\mu_{f}\) and \(\mu_{s}\), and does not depend on the light intensity [41]. If \(\mu_{f}=\mu_{s}=1\), the autocorrelation function reaches its maximum value, that is \(g^{2}(I)=4\). On the contrary, when \(\mu_{f}=\mu_{s}\rightarrow\infty\), \(g^{2}(I)\to 1\). In general, the implementation of a GI protocol requires the calculation of the cross-correlation function, since the optical scheme involves two replicas of the same field obtained by dividing it at a BS. As shown in Figure 1, one of the BS outputs, usually called the test arm, passes through the object and is detected by a bucket detector, while the other output, usually called the reference arm, is directly sent to a spatial-resolving detector. In many cases, for simplicity, the same spatial-resolving detector (e.g. a charge-coupled device (CCD) camera) can be used to detect the light coming from both arms. In this case, the effect of a bucket detector is obtained in post-processing by summing all the pixels illuminated by the light coming from the test arm. The GI image is obtained by correlating, over many repetitions, the value of the bucket with the value of each pixel of the CCD camera. By assuming for simplicity that each pixel contains a single mode, the correlation function \(G(I_{i})\) can be written as \[\begin{split} G(I_{i})&=\frac{\langle(\sum_{j=1}^{\mu_{s}}I_{j})I_{i}\rangle}{\langle\sum_{j=1}^{\mu_{s}}I_{j}\rangle\langle I_{i}\rangle}=\frac{\langle I_{i}^{2}\rangle+\langle\sum_{j\neq i}I_{j}I_{i}\rangle}{\langle\sum_{j=1}^{\mu_{s}}I_{j}\rangle\langle I_{i}\rangle}=\\ &=\frac{\langle I^{2}\rangle}{\langle I\rangle^{2}}\frac{\langle I_{i}\rangle}{\langle\sum_{j=1}^{\mu_{s}}I_{j}\rangle}+\sum_{j\neq i=1}^{\mu_{s}}\frac{\langle I_{i}I_{j}\rangle}{\langle I_{i}\rangle\langle I_{j}\rangle}\frac{\langle I_{j}\rangle}{\langle\sum_{j=1}^{\mu_{s}}I_{j}\rangle}=\\ &=g^{2}(I)\frac{\langle I_{i}\rangle}{\langle\sum_{j=1}^{\mu_{s}}I_{j}\rangle}+\sum_{j\neq i=1}^{\mu_{s}}g^{1,1}(I_{i},I_{j})\frac{\langle I_{j}\rangle}{\langle\sum_{j=1}^{\mu_{s}}I_{j}\rangle},\end{split} \tag{5}\] where \(g^{1,1}(I_{i},I_{j})\) is the cross-correlation function.
Note that in Equation (5) the quantity \(I_{i}\) is the shot-by-shot intensity of a single mode, while \(\langle\sum_{i=1}^{\mu_{s}}I_{i}\rangle\) is the mean total intensity measured by the bucket detector, so that, assuming \(\mu_{s}\)-equally populated modes, we have that \(\langle I_{i}\rangle/\langle\sum_{i=1}^{\mu_{s}}I_{i}\rangle=1/\mu_{s}\). According to Equation (4), \(g^{2}(I)=(1+1/\mu_{f})(1+1/\mu_{s})=2(1+1/\mu_{f})\), where we assumed \(\mu_{s}=1\) since this term corresponds to the case in which the pixel is correlated with itself. On the contrary, \(g^{1,1}(I_{i},I_{j})\) gives the correlations between pixels that are different from each other, whose intensity is distributed according to that from the first disk, that is a multi-mode thermal distribution with \(\mu_{f}\) modes. This means that \[\sum_{j\neq i=1}^{\mu_{s}}g^{1,1}(I_{i},I_{j})=\sum_{j\neq i=1}^{\mu_{s}}\frac {\langle I_{i}I_{j}\rangle}{\langle I_{i}\rangle\langle I_{j}\rangle}=(\mu_{s }-1)\left(1+\frac{1}{\mu_{f}}\right)\frac{\langle I_{i}\rangle\langle I_{j} \rangle}{\langle I_{i}\rangle\langle I_{j}\rangle}=(\mu_{s}-1)\left(1+\frac{1 }{\mu_{f}}\right). \tag{6}\] Note that the value of \(g^{1,1}(I_{i},I_{j})=(1+1/\mu_{f})\) represents the background of the correlation image. The maximum value of this function, that is 2, is attained when \(\mu_{f}=1\), while the minimum value, that is 1, is achieved when \(\mu_{f}\rightarrow\infty\). By substituting Equation (6) into Equation (5), we obtain \[G(I) = 2\left(1+\frac{1}{\mu_{f}}\right)\frac{1}{\mu_{s}}+\frac{\mu_{s}-1}{ \mu_{s}}\left(1+\frac{1}{\mu_{f}}\right)=\left(1+\frac{1}{\mu_{f}}\right) \left(1+\frac{1}{\mu_{s}}\right), \tag{7}\] that gives the general expression of the correlation function. Assuming \(\mu_{f}=1\), the two limit values are \(G_{\rm max}(I)=4\) for \(\mu_{s}=1\) and \(G_{\rm min}(I)=2\) for \(\mu_{s}\rightarrow\infty\). For a direct comparison, we note that the expression of \(G(I)\) in the case of thermal light is equal to \(G(I)=(1+1/\mu)\), where \(\mu\) is the number of detected speckles. In fact, under this condition \(g^{2}(I)=(1+1/\mu)=2\), while \(g^{1,1}(I_{i},I_{j})=1\). The calculation of \(G\) is repeated for all pixels having coordinates \(k,l\) of the CCD camera, so that a correlation matrix can be obtained: \[G_{\rm GI}(I_{k,l})=\frac{\langle(\sum_{i,j=1}^{M,N}I_{i,j})I_{k,l}\rangle}{ \langle\sum_{i,j=1}^{M,N}I_{i,j}\rangle\langle I_{k,l}\rangle}, \tag{8}\] assuming that the pixels corresponding to the bucket detector and to an area with size M\(\times\)N are summed together. As extensively discussed in the literature, to quantify the quality of the GI image, we need to consider some figures of merit. The most used are the visibility (V), the contrast (C) and the signal-to-noise ratio (SNR), which can be Figure 1: Example of experimental scheme to realize a standard GI protocol with super-thermal light exploiting a spatial-resolving detector to detect both the test and the reference arm. 
expressed in terms of \(G(I)\) as follows [10; 21; 31; 32] \[\mathrm{V} = \frac{G_{\mathrm{IN}}(I)-G_{\mathrm{OUT}}(I)}{G_{\mathrm{IN}}(I)+G_{\mathrm{OUT}}(I)} \tag{9}\] \[\mathrm{C} = \sqrt{G_{\mathrm{IN}}(I)-G_{\mathrm{OUT}}(I)}\] (10) \[\mathrm{SNR} = \frac{G_{\mathrm{IN}}(I)-G_{\mathrm{OUT}}(I)}{\sigma[G_{\mathrm{OUT}}(I)]}, \tag{11}\] where \(G_{\mathrm{IN}}(I)\) and \(G_{\mathrm{OUT}}(I)\) are the values of the correlation inside and outside the GI image, while \(\sigma\) is the standard deviation of the part of the image that does not contain information about the object. It has already been demonstrated that the SNR is the most useful criterion, as it takes into account light fluctuations, thus quantifying the contribution of noise to the light signal. For instance, in the case of thermal light the maximum value of \(G_{\mathrm{IN}}(I)\) is 2, while \(G_{\mathrm{OUT}}(I)=1\), so that \(\mathrm{V_{th}}=1/3\). The same result is achieved in the case of super-thermal light, since the maximum value of \(G_{\mathrm{IN}}(I)\) is 4, while \(G_{\mathrm{OUT}}(I)=2\) if the speckled-speckle field is generated by a single speckle (\(\mu_{f}=1\)) exiting the first disk and entering the second one. This means that the visibility does not allow one to appreciate the difference between the two light states. For what concerns the contrast, in the case of thermal light with \(\mu=1\), \(\mathrm{C_{th}}=\sqrt{2-1}=1\), while for super-thermal light with \(\mu_{f}=1\), \(\mathrm{C_{sth}}=\sqrt{4-2}=\sqrt{2}\). This proves that using two diffusers instead of one improves the contrast of the GI image. The definition of the contrast is connected to that of the \(\mathrm{SNR}\), which can be rewritten as \(\mathrm{SNR}=\mathrm{C^{2}}/\sigma[G_{\mathrm{OUT}}(I)]\). Indeed, the SNR can be defined in terms of different quantities, as remarked in [10]. According to our definition in Equation (11), the signal-to-noise ratio is also connected to the variance of the distribution of the background, which appears in the denominator. In the case of thermal light, it has been demonstrated [43] that \(\mathrm{SNR_{th}}\) is proportional to \(1/\sqrt{\mu}\). For what concerns super-thermal light, in analogy with pseudo-thermal light, we can argue that, since \(G(I)\) depends on the numbers of modes \(\mu_{f}\) and \(\mu_{s}\) in the same way as it depends on \(\mu\) in the case of thermal light, by fixing \(\mu_{f}=1\), \(\mathrm{SNR_{sth}}\) is proportional to \(1/\sqrt{\mu_{s}}\). Of course, a difference between the two cases is expected in the proportionality coefficient, which is connected to the statistical properties of the two light fields [41]. As we will show in the next Section, our results support this statement. As remarked in the Introduction, the GI setup can also be exploited to perform DGI, since the difference between the two techniques lies in the data processing. In the case of DGI, the correlation matrix is obtained by subtracting from the GI correlation matrix the analogous correlation computed with the non-correlated part of the image. Therefore, Equation (8) is modified as follows \[G_{\mathrm{DGI}}(I_{k,l})=\frac{\langle(\sum_{i,j=1}^{M,N}I_{i,j})I_{k,l}\rangle}{\langle(\sum_{i,j=1}^{M,N}I_{i,j})\rangle\langle I_{k,l}\rangle}-\frac{\langle(\sum_{p,q=1}^{R,S}I_{p,q})I_{k,l}\rangle}{\langle(\sum_{p,q=1}^{R,S}I_{p,q})\rangle\langle I_{k,l}\rangle}, \tag{12}\] where \(\sum_{p,q=1}^{R,S}I_{p,q}\) is the sum of the intensities detected in a portion corresponding to the reference arm.
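To make Equations (8) and (12) operational, here is a short numpy sketch (ours, for illustration only) of the two estimators applied to a stack of acquisitions, where `test` and `ref` are hypothetical arrays holding the test-arm and reference-arm pixels of each frame:

```python
import numpy as np

def gi_dgi(test, ref):
    """GI (Eq. 8) and DGI (Eq. 12) correlation images from K acquisitions.

    test : array (K, M, N), intensities of the test (bucket) portion
    ref  : array (K, H, W), intensities of the reference portion
    """
    bucket = test.sum(axis=(1, 2))                 # bucket value per frame
    rtot = ref.sum(axis=(1, 2))                    # total reference intensity
    mean_ref = ref.mean(axis=0)
    gi = np.einsum('k,khw->hw', bucket, ref) / len(bucket)
    gi /= bucket.mean() * mean_ref                 # normalized GI, Eq. (8)
    corr_r = np.einsum('k,khw->hw', rtot, ref) / len(rtot)
    corr_r /= rtot.mean() * mean_ref               # reference self-correlation
    return gi, gi - corr_r                         # DGI subtracts it, Eq. (12)
```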
Concerning the figures of merit, we notice that, as discussed in Ref. [21], the use of DGI instead of conventional GI can give better results in terms of SNR in the case of weakly absorbing objects. In the next Sections we will show that for super-thermal light this advantage is preserved also in the non-absorbing case. ## III Implementation In order to appreciate the possible advantages given by the use of super-thermal light instead of pseudo-thermal light for imaging applications, we produced this type of light state in two different ways. First, we implemented a LabVIEW-based simulation of speckled-speckle fields and calculated all the quantities introduced in the previous Section. Second, we compared the obtained results with those achieved with an experimental realization of super-thermal light. In both cases, a direct comparison with pseudo-thermal light was performed. ### Numerical simulation The simulation was built with LabVIEW. The program generates a speckle field using \(\delta\)-correlated random matrices, which are then convolved with a Gaussian distribution to obtain a Gaussian field [42]. The resulting matrices represent different realizations of the first speckle field. A portion of the field is then selected through a virtual PH and used as the source of a second random process, that is, the speckled-speckle field. To generate it, we used \(N=100\) scattering centers randomly moving in a three-dimensional space to scatter the light exiting the PH. The speckled-speckle field can be expressed as \[E_{2}(x,y)=E_{1}(x,y)\exp[i\phi(r)]\exp[i\phi_{n}(z)], \tag{13}\] where \(E_{1}(x,y)\) is the first speckle field, \(\phi(r)\) is the phase of the speckled-speckle field, and the phase term \(\phi_{n}(z)\) takes into account the thickness of the diffuser. The phase of the speckled-speckle field is related to the distance between each scattering center on the second disk and the position of a virtual spatial-resolving detector, such as a CCD camera: \[\phi(r)=kr=k\sqrt{(x_{\rm CCD}-x_{\rm D2})^{2}+(y_{\rm CCD}-y_{\rm D2})^{2}+(z_{\rm CCD}-z_{\rm D2})^{2}}, \tag{14}\] where \(\{x_{\rm D2},y_{\rm D2},z_{\rm D2}\}\) and \(\{x_{\rm CCD},y_{\rm CCD},z_{\rm CCD}\}\) are the coordinates of the scattering center and of the CCD pixel, respectively. The virtual CCD consists of \(200\times 200\) pixels with a size of 2 mm \(\times\) 2 mm. To replicate the real behavior of light, in the simulation we also take into account the thickness of the second disk by inserting in Equation (13) the phase term \[\phi_{n}(z)=k_{n}z_{\rm D2}, \tag{15}\] in which \(k_{n}=2\pi n/\lambda_{0}\) is the wavevector of light in the second diffuser with refractive index \(n\), \(\lambda_{0}\) is the wavelength of light in vacuum, and \(z_{\rm D2}\) is the \(z\)-coordinate of each particle on the second diffuser, which is equal to the thickness of the disk. With this strategy we generated 100,000 realizations of the speckled-speckle field. Some simulated realizations are shown in panel (a) of Figure 2. By looking at the different intensities of the patterns we can appreciate the intensity fluctuations of the source, which is expected to be multi-mode thermal. This behavior can be quantified by calculating the probability distribution of the mean intensity of each image.
The resulting statistics is shown in panel (b) of Figure 2, where the simulated data are presented together with the theoretical fitting function according to a multi-mode thermal distribution, in which the number of modes \(\mu_{f}\) is the only fitting parameter [44]. The obtained value, \(\mu_{f}=1.06\pm 0.02\), means that the field selected by the first pin-hole contains approximately a single mode, \(i.e.\) a single speckle.

### Experimental implementation

As sketched in Figure 3(a), the second-harmonic pulses (at 523 nm, 5-ps pulse duration) of a Nd:YLF laser re-generatively amplified at 500 Hz were focused on the surface of a rotating ground-glass disk, GD1. We selected a portion of the speckle field in the far field by means of a pin-hole, PH1, having a diameter of 2.5 mm. This choice roughly corresponds to selecting a single speckle, as better discussed in the next Section. The light passing through PH1 was then focused on the surface of a second rotating ground-glass disk, GD2. The far-field condition of the speckled-speckle field was achieved by placing a 100-mm focal-length lens 100 mm behind GD2, so that the speckle field propagated with negligible divergence. The pattern was then split into two parts by a system composed of a half-wave plate (HWP) and a polarizing cube beam splitter (PBS), used to finely tune the balance between the intensities in the two output arms. The reflected output was used as the test arm. An object consisting of a single slit with a 0.8-mm diameter was placed at 3.5 cm from the PBS, and an imaging system with a magnification approximately equal to 1/3 was built using a 100-mm focal-length lens. The image was formed on a portion of a CCD camera (DCU223M, Thorlabs, 1024 \(\times\) 768 square pixels, 4.65-\(\mu\)m pixel pitch). On the transmitted arm, a 1:1 image of the speckle field at 3.5 cm from the PBS was built using another 100-mm focal-length lens. The image was formed on a different portion of the same CCD camera. Some typical single-shot images of the reference arm are shown in panel (a) of Figure 4. Note that the speckles appear larger (roughly by a factor of 3) than those in the equivalent sequence of simulated images shown in Figure 2. This is due to the fact that the size of the simulated speckles was chosen similar to that of the experimental speckles on the bucket side, which were demagnified by a factor of approximately \(M=1/3\). In panel (b) we also present the probability distribution of the mean intensity of each experimental image together with the theoretical fitting function according to a multi-mode thermal distribution, in which the number of modes \(\mu_{f}\) is the only fitting parameter. The obtained value, \(\mu_{f}=1.25\pm 0.02\), is slightly larger than the analogous value obtained from simulations. Nevertheless, as shown in the next Section, this small discrepancy does not prevent a direct comparison between simulated data and experimental ones.

Figure 2: Panel (a): simulated realizations of the speckled-speckle field at different mean intensities. Panel (b): probability distribution of the mean intensity of each image. Blue dots: simulated data; red dashed line: theoretical fitting function according to a multi-mode thermal distribution, in which the number of modes \(\mu_{f}\) is the only fitting parameter. The obtained value is equal to \(\mu_{f}=1.06\pm 0.02\).

Figure 4: Panel (a): experimental realizations of the speckled-speckle field at different mean intensities. Panel (b): probability distribution of the mean intensity of each image. Blue dots: experimental data; red dashed line: theoretical fitting function according to a multi-mode thermal distribution, in which the number of modes \(\mu_{f}\) is the only fitting parameter. The obtained value is equal to \(\mu_{f}=1.25\pm 0.02\).

Figure 3: Panel (a): sketch of the experimental setup. M: mirror; L\({}_{1}\): 200-mm-focal length lens; L\({}_{2}\): 100-mm-focal length lens; GD1 and GD2: rotating ground-glass disks; PH1: 2.5-mm-diameter pin-hole; PH2: 4-mm-diameter pin-hole; HWP: half-wave plate; PBS: polarizing cube beam splitter; L: 100-mm-focal length lens; O: object; CCD: camera. Panel (b): DGI image consisting of two parts: the autocorrelation image on the bucket side on the right and the cross-correlation one corresponding to the reference arm on the left.

To investigate the minimum number of images required to obtain either a GI image or a DGI one, and to evaluate their quality in terms of the already-mentioned figures of merit, we saved 100,000 realizations of the speckled-speckle field. As explained in the previous Section, the correlation matrix (see Equation (12)) calculated over this number of realizations contains the autocorrelation image on the bucket side (see the right side of Figure 3(b)) and the cross-correlation image on the reference arm (see the left side of the same Figure). For a direct comparison, we repeated the experiment with pseudo-thermal light by removing GD1 and adjusting the divergence of the beam impinging on GD2 in order to obtain a speckle field with speckles having roughly the same size as those obtained with speckled-speckle light. Also in this case, we saved 100,000 images.

## IV Results and Discussion

In order to characterize the speckled-speckle field produced by both the simulation and the experiment, we first calculated the spatial autocorrelation function of each image and then averaged over the total number of images. In Figure 5, we show the section of the averaged autocorrelation image from simulations in the case of super-thermal light (panel (a)) and of pseudo-thermal light (panel (b)), and the analogous horizontal sections of the autocorrelation function obtained from the bucket portion of the experimental images (panels (c) and (d)). It is worth noting that from the spatial autocorrelation function some relevant information can be extracted, such as the type of employed light, the typical speckle size, and the number of modes selected by the first pin-hole in the case of super-thermal light. According to Equation (7), for super-thermal light the value of the maximum of the autocorrelation function depends on \(\mu_{f}\) and \(\mu_{s}\) as \(G(I)=(1+1/\mu_{f})(1+1/\mu_{s})\). Thus, if \(\mu_{f}=\mu_{s}=1\), it is possible to reach the maximum value, that is 4. On the contrary, the minimum of the autocorrelation function is related to the background, which is given by \(g^{1,1}(I_{i},I_{j})=(1+1/\mu_{f})\). Thus, if \(\mu_{f}=1\), the expected value of the minimum is equal to 2. It is also worth noting that values smaller than 2 can be reached if \(\mu_{f}>1\); in that case, \(G(I)\) will also attain values smaller than 4. For what concerns pseudo-thermal light, the maximum value of the autocorrelation function is 2, corresponding to the case \(\mu=1\), while the background is 1.
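These limiting values can be checked numerically. The intensity of a speckled-speckle field with \(\mu_{f}\) and \(\mu_{s}\) modes can be modelled as the product of two independent unit-mean gamma variables, the compound representation underlying Equation (1), so a short Monte Carlo estimate (a sketch of ours, not part of the original analysis) reproduces Equation (4):

```python
import numpy as np

rng = np.random.default_rng(1)

def g2_speckled_speckle(mu_f, mu_s, n=1_000_000):
    """Monte Carlo estimate of g2 for a speckled-speckle intensity.

    I is modelled as the product of two independent unit-mean gamma
    variables with shapes mu_f and mu_s, the compound behind Eq. (1).
    """
    I = rng.gamma(mu_f, 1.0 / mu_f, n) * rng.gamma(mu_s, 1.0 / mu_s, n)
    return float((I**2).mean() / I.mean()**2)

for mu_f, mu_s in [(1, 1), (1, 10), (2, 2)]:
    print(mu_f, mu_s, round(g2_speckled_speckle(mu_f, mu_s), 2),
          (1 + 1/mu_f) * (1 + 1/mu_s))             # Eq. (4) prediction
```

For \(\mu_{f}=\mu_{s}=1\) the estimate approaches 4, and it tends to the thermal value 2 as \(\mu_{s}\) grows with \(\mu_{f}=1\), consistent with the discussion above.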
According to Figure 5(a) and (c), for super-thermal light the peak of the autocorrelation function is equal to \(3.81\pm 0.03\) in the case of simulation and \(3.76\pm 0.04\) in the case of experiment, while the background is equal to \(1.92\pm 0.01\) and \(1.79\pm 0.02\), respectively. The direct comparison between simulation and experiment proves that the obtained results are compatible with each other, and in good agreement with theory. In particular, we note that in both panels ((a) and (c)) the value of the peak is slightly smaller than 4, thus corresponding to a number of modes \(\mu_{f}\) slightly larger than 1. More precisely, we obtained \(\mu_{f}=1.10\pm 0.02\) and \(1.14\pm 0.03\) from the peak of the autocorrelation function referred to simulation and experiment, respectively, while we obtained \(\mu_{f}=1.09\pm 0.01\) and \(1.27\pm 0.03\) from the values of the background. Note that these values are compatible with the analysis of the data in Figures 2 and 4. Moreover, from the full width at half maximum of the function we can extract the typical size of the speckles of the speckled-speckle field, namely \(d_{\rm{sp}}=4\pm 1\) pixels for both simulation and the bucket side of the experimental images. If we repeat the same procedure on the reference arm of the experimental images, we get a width of the spatial autocorrelation function equal to \(11\pm 1\) pixels, in agreement with the different magnification existing between the two arms. The good correspondence between the values facilitates further comparisons in imaging applications, as will become more evident in the next Section. As for pseudo-thermal light, according to Figure 5(b) and (d), the peaks of the autocorrelation functions are equal to \(1.99998\pm 0.00001\) and \(2.08\pm 0.02\), respectively, while the background is equal to 1 in both cases, corresponding to a single-mode thermal state. Moreover, from the width of the function, we can extract the typical size of the speckles of the speckle field, that is \(d_{\rm{sp}}=3\pm 1\) pixels in both cases.

The number of modes \(\mu_{f}\) selected by the first pin-hole can also be investigated by calculating the temporal autocorrelation function at different values of \(\mu_{s}\), obtained by selecting a given number of pixels in the portion of the CCD camera illuminated by the light coming from the reference arm. First of all, we built a bucket detector by choosing 1 or more pixels at relative distances larger than the typical width of a speckle, and summed their intensities. Then, for each image, we correlated this sum with each pixel in the reference arm. In the resulting autocorrelation image we can recognize the presence of 1 or more speckles, depending on the number of distinct selected pixels, as shown in Figure 6(a). The maxima of the autocorrelation image are found at the coordinates of the chosen pixels. The value of \(\mu_{f}\) can be evaluated from Equation (7) by setting \(\mu_{s}\) equal to the number of selected pixels. In Figure 6(b) we show the value of \(\mu_{f}\) as a function of the number of modes \(\mu_{s}\) for super-thermal light. For a direct comparison, in the same Figure we also show the value of \(\mu_{f}\) extracted from the value of the background, that is, by inverting \(g^{1,1}(I_{i},I_{j})=(1+1/\mu_{f})\). As observed in the case of the spatial autocorrelation function, we can notice that the two methods are not completely equivalent, at least for small values of \(\mu_{s}\), even if they are compatible within \(1\sigma\).
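A minimal sketch of this extraction (with the reference-arm images stacked in a NumPy array and the selected pixels chosen farther apart than a speckle; all names are illustrative) could read:

```python
import numpy as np

def mu_f_from_bucket_peak(ref_stack, pixels):
    """ref_stack: (N, H, W) reference-arm images; pixels: list of (row, col)
    chosen farther apart than a speckle, so that mu_s = len(pixels)."""
    bucket = sum(ref_stack[:, r, c] for r, c in pixels)             # (N,)
    g = np.einsum("n,nhw->hw", bucket, ref_stack) / len(ref_stack)  # <B I(p)>
    g /= bucket.mean() * ref_stack.mean(axis=0)                     # normalize
    mu_s = len(pixels)
    # invert G = (1 + 1/mu_f)(1 + 1/mu_s) at the correlation peak
    return 1.0 / (g.max() / (1.0 + 1.0 / mu_s) - 1.0)
```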
This discrepancy can be ascribed to the fact that the calculation of the number of modes \(\mu_{f}\) from the maximum was obtained by repeating the procedure for a limited number of choices, randomly selecting the pixels inside the used portion of the CCD camera. On the contrary, the calculation of \(\mu_{f}\) from the background was based on an average over several pixels. The difference between the two procedures is less evident for large values of \(\mu_{s}\), since the effect of possible fluctuations in the choice of the pixels to be correlated cancels out.

Figure 5: Upper panels: section of the spatial autocorrelation function from simulated data in the case of super-thermal light (panel (a)) and pseudo-thermal light (panel (b)). Lower panels: the same as in the upper panels obtained from experimental data by considering the bucket side.

Figure 6: Super-thermal light. Panel (a): autocorrelation images obtained by correlating 1, 2, 3, 4, and 5 pixels with all the pixels corresponding to the reference arm of the experimental images. Panel (b): \(\mu_{f}\) as a function of the number of modes \(\mu_{s}\) for super-thermal light. Blue dots + line: \(\mu_{f}\) extracted from the maximum of \(G(I)\); red dots + line: \(\mu_{f}\) extracted from the minimum, \(i.e.\) the background, of \(G(I)\).

We are now ready to present the results of the implementation of the GI and DGI protocols, and to prove the advantages offered by super-thermal light by evaluating C and SNR. We investigate the quality of the GI and DGI images as a function of the number of speckles illuminating the object, and as a function of the number of images, to determine the minimum number required to saturate the figures of merit. In Figure 7 we show the GI and DGI images obtained by selecting an object 20 pixels \(\times\) 50 pixels in size on the bucket detector, both in the case of simulation (panels (a) and (b)) and of experiment (panels (c) and (d)). We can notice that there is a good agreement between the results obtained from simulation and experiment for both strategies. This agreement can be ascribed to the fact that the values of \(\mu_{f}\) and \(\mu_{s}\) are compatible. Moreover, DGI images are sharper than the corresponding GI ones, especially in the case of experimental data. As anticipated in the Introduction, this is due to the fact that the DGI technique removes the effect of noise due to the non-correlated part of the images. This can be quantified by the figures of merit. For the images in Figure 7 we obtain a higher SNR for DGI images than for GI ones, as explicitly indicated in the caption, for both simulated images and experimental ones. On the contrary, C is independent of the chosen technique. This fact can be better appreciated by considering the values of C and SNR shown in Figure 8 as a function of the ratio between the area of the object selected on the bucket side, \(A_{b}\), and that of a typical speckle, \(A_{sp}\), roughly corresponding to the number of modes \(\mu_{s}\) illuminating the object. From the plot, we can notice that SNR varies much more markedly than C as the size of the object changes. As a final investigation, we consider the behavior of C and SNR as a function of the number of images for a fixed choice of the object size, that is, 20 \(\times\) 50 pixels.
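For completeness, the two reconstructions can be sketched with the standard estimators (the differential term follows the DGI proposal of Ref. [21]; array names are illustrative):

```python
import numpy as np

def gi_and_dgi(ref_stack, bucket):
    """ref_stack: (N, H, W) reference images; bucket: (N,) bucket signal."""
    corr = np.einsum("n,nhw->hw", bucket, ref_stack) / len(bucket)   # <B I(p)>
    gi = corr - bucket.mean() * ref_stack.mean(axis=0)               # GI image
    R = ref_stack.sum(axis=(1, 2))                    # total reference intensity
    corr_R = np.einsum("n,nhw->hw", R, ref_stack) / len(R)           # <R I(p)>
    dgi = corr - (bucket.mean() / R.mean()) * corr_R  # removes uncorrelated part
    return gi, dgi
```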
We can clearly see from Figure 9 that, while the contrast attains a constant value with more than \(10^{3}\) images in the case of both GI and DGI, the SNR is still an increasing function even with 100,000 images, thus not reaching a saturation value. Nevertheless, we emphasize that the maximum number of data, \(i.e.\)\(N=100,000\), is sufficient to reach good values of SNR in the case of DGI. On the contrary, it seems that more data are necessary to obtain good-quality images in the case of GI, thus proving that this last technique is not particularly convenient. Moreover, concerning the contrast, we notice that in the case of experimental data the values are quite noisy for small numbers of images (\(N<40,000\)), while they agree with simulations for larger data samples.

The results shown above suggest different considerations. First of all, we notice that the performed simulation and the experimental data lead to very similar outcomes, which are also in good agreement with the predictions of the developed theoretical model. In particular, we verified the dependence of the correlation functions on the numbers of modes \(\mu_{f}\) and \(\mu_{s}\). Concerning the applications to imaging, we proved that in the case of super-thermal light there is always an advantage in using DGI instead of GI in terms of SNR. This result is different from what was obtained with pseudo-thermal light, as discussed in Ref. [21]. In that case, Ferri et al. proved that DGI is better than GI only in the case of faint and weakly absorbing objects. On the contrary, our analysis demonstrates that for super-thermal light the values of SNR for DGI are always larger than those for GI, regardless of the size of the object. This is not the case for contrast, whose value as a function of the size of the object does not depend on the employed technique.

Figure 7: Super-thermal light. Upper panels: GI (panel (a)) and DGI (panel (b)) images obtained by selecting an object 20 \(\times\) 50 pixels large on the bucket detector from simulated data. Lower panels: the same as in the upper panels from experimental data. The values of C are 0.150 in panel (a), 0.151 in panel (b), \(0.13\pm 0.03\) in panel (c), and \(0.148\pm 0.004\) in panel (d), while those of SNR are 2.74 in panel (a), 15.34 in panel (b), \(2.3\pm 0.9\) in panel (c), and \(14.3\pm 2.4\) in panel (d).

To better investigate advantages and limitations of super-thermal light with respect to pseudo-thermal light, in Figure 10 we compare the values of C and SNR as functions of the size of the object for the two kinds of light states. In both cases the results were produced by numerical simulation. First of all, we can notice that the main advantage of using super-thermal light is given in terms of contrast. Indeed, this is always larger than the corresponding value obtained with pseudo-thermal light. However, as already remarked in the discussion of Figure 8, the contrast of DGI is equivalent to that of GI. Concerning SNR, we can clearly notice that the values are larger in the case of pseudo-thermal light than in that of super-thermal light. We also notice that for super-thermal light there is a strong difference between the values of SNR achieved with the DGI and GI techniques. This is not the case for pseudo-thermal light, where they are different only for large objects or, equivalently, for weakly absorbing objects [10; 21]. In particular, we emphasize that the values of SNR obtained with DGI for super-thermal light are definitely larger than 10 for a good range of object sizes.
This is actually a sufficient condition to obtain good-quality images, especially in connection with a high value of contrast. The results obtained from simulations prove that a \(1/\sqrt{\mu}\) dependence on the number of modes also holds in the case of super-thermal light. Indeed, once the value of \(\mu_{f}\) is fixed, all the calculated quantities are simply functions of \(\mu_{s}\). The offset existing for a given quantity between pseudo-thermal and super-thermal light reflects the different statistical distributions. Indeed, as already remarked in the Introduction, super-thermal light is endowed with higher intensity fluctuations. Further investigations are now in progress to develop a theoretical model that can exactly describe the figures of merit.

Figure 8: Super-thermal light. C (panel (a)) and SNR (panel (b)) as functions of the ratio between the area of the object, \(A_{b}\), selected on the bucket side and that of a typical speckle, \(A_{sp}\). Open symbols + dotted lines: results from simulation; full symbols + solid lines: results from experimental data. Red color refers to DGI, while blue color to GI. The error bars corresponding to the experimental results were calculated by considering different areas of the correlation images for the evaluation of the background.

Finally, we focus our attention again on the results shown in Figure 9, where C and SNR are plotted as functions of the number of images. As already noticed in the previous Section, the contrast is essentially independent of the number of images for the DGI technique. Some fluctuations are evident in the case of the GI technique: a number of images larger than \(10^{4}\) is required to obtain a constant behavior, especially for the experimental realization, which is definitely more sensitive to nonidealities and optical distortions. Concerning SNR, we note that in no case is a saturated value reached. However, we can observe that, while the values of SNR in the case of GI are too low, thus demonstrating that more than 100,000 images are required, in the case of DGI SNR values larger than 10 can be obtained with a minimum number of 40,000 images.

Figure 9: Super-thermal light. C (panel (a)) and SNR (panel (b)) as functions of the number of images. Dotted lines: results from simulation; solid lines: results from experimental data. Red color refers to DGI, while blue color to GI.

## V Conclusion

In this work we investigated the usefulness for imaging applications of super-thermal light obtained by passing a laser beam through a sequence of two diffusers. We performed our analysis exploiting the model discussed in Ref. [41] and realizing both a numerical simulation and a real experiment. In particular, we proved that in both cases there is a good agreement with the theoretical expectations by investigating the role played by the numbers of modes selected at the exit of both the first rotating ground-glass disk and the second one. We studied the quality of the reconstructed images in terms of contrast and signal-to-noise ratio by employing both GI and DGI techniques. In general, both the simulation and the experimental realization prove that DGI offers many advantages with respect to GI, such as higher values of SNR and the requirement of a smaller number of images. From the direct comparison with pseudo-thermal light, we also demonstrated that super-thermal light yields higher values of C, together with reasonable values of SNR.
Figure 10: C (panel (a)) and SNR (panel (b)) as functions of the ratio between the area of the object, \(A_{b}\), selected on the bucket side and that of a typical speckle, \(A_{sp}\), in the case of simulated images. Circles: results in the case of super-thermal light; squares: results in the case of pseudo-thermal light. Red color refers to DGI, while blue color to GI. Colored lines: fitting functions \(y=ax^{b}\), with \(a\) and \(b\) as free fitting parameters. Solid lines correspond to full symbols, while dashed lines to open ones. The color choice is the same as for the symbols. The values of the fitting parameters in the case of C are: \(a=0.5621\pm 0.0004\) and \(b=-0.5001\pm 0.0002\) for pseudo-thermal light in the case of DGI, \(a=0.5616\pm 0.0004\) and \(b=-0.4997\pm 0.0002\) for pseudo-thermal light in the case of GI, \(a=0.846\pm 0.004\) and \(b=-0.483\pm 0.001\) for super-thermal light in the case of DGI, and \(a=0.815\pm 0.004\) and \(b=-0.474\pm 0.002\) for super-thermal light in the case of GI. The values of the fitting parameters in the case of SNR are: \(a=193\pm 4\) and \(b=-0.513\pm 0.008\) for pseudo-thermal light in the case of DGI, \(a=198\pm 4\) and \(b=-0.523\pm 0.007\) for pseudo-thermal light in the case of GI, \(a=101\pm 1\) and \(b=-0.525\pm 0.005\) for super-thermal light in the case of DGI, and \(a=72\pm 1\) and \(b=-0.916\pm 0.006\) for super-thermal light in the case of GI.

The good quality of all these results suggests a more practical exploitation of this kind of light, and encourages the use of longer sequences of diffusers or more complex systems capable of generating new types of super-thermal light.

**Acknowledgements**

The authors thank Fabio Ferri, Alberto Parola and Camilla Bianciardi (University of Insubria) for fruitful discussions. S.C. and A.A. acknowledge the support by PNRR D.D.M.M. 351/2022. G.C. acknowledges the financial support of the INFN through the project QUANTUM. This work was financially supported by PNRR MUR Project PE0000023-NQSTI.

**Conflict of interest**

The authors declare no conflict of interest.

**Author contribution**

S.C. and G.C. contributed equally to this work. Conceptualization, A.A. and M.B.; methodology, A.A. and M.B.; validation, S.C., G.C. and A.A.; experimental investigation, A.A. and S.C.; simulation, G.C.; writing--original draft preparation, S.C., G.C., A.A. and M.B. All authors have read and agreed to the submitted version of the manuscript.

**Data availability statement**

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2309.10657
Learning Adaptive Safety for Multi-Agent Systems
Ensuring safety in dynamic multi-agent systems is challenging due to limited information about the other agents. Control Barrier Functions (CBFs) are showing promise for safety assurance but current methods make strong assumptions about other agents and often rely on manual tuning to balance safety, feasibility, and performance. In this work, we delve into the problem of adaptive safe learning for multi-agent systems with CBF. We show how emergent behavior can be profoundly influenced by the CBF configuration, highlighting the necessity for a responsive and dynamic approach to CBF design. We present ASRL, a novel adaptive safe RL framework, to fully automate the optimization of policy and CBF coefficients, to enhance safety and long-term performance through reinforcement learning. By directly interacting with the other agents, ASRL learns to cope with diverse agent behaviours and maintains the cost violations below a desired limit. We evaluate ASRL in a multi-robot system and a competitive multi-agent racing scenario, against learning-based and control-theoretic approaches. We empirically demonstrate the efficacy and flexibility of ASRL, and assess generalization and scalability to out-of-distribution scenarios. Code and supplementary material are public online.
Luigi Berducci, Shuo Yang, Rahul Mangharam, Radu Grosu
2023-09-19T14:39:39Z
http://arxiv.org/abs/2309.10657v2
# Learning Adaptive Safety for Multi-Agent Systems

###### Abstract

Ensuring safety in dynamic multi-agent systems is challenging due to limited information about the other agents. Control Barrier Functions (CBFs) are showing promise for safety assurance, but current methods make strong assumptions about other agents and often rely on manual tuning to balance safety, feasibility, and performance. In this work, we delve into the problem of adaptive safe learning for multi-agent systems with CBF. We show how emergent behavior can be profoundly influenced by the CBF configuration, highlighting the necessity for a responsive and dynamic approach to CBF design. We present ASRL, a novel adaptive safe RL framework, to fully automate the optimization of policy and CBF coefficients, to enhance safety and long-term performance through reinforcement learning. By directly interacting with the other agents, ASRL learns to cope with diverse agent behaviours and maintains the cost violations below a desired limit. We evaluate ASRL in a multi-robot system and a competitive multi-agent racing scenario, against learning-based and control-theoretic approaches. We empirically demonstrate the efficacy and flexibility of ASRL, and assess generalization and scalability to out-of-distribution scenarios. Code and supplementary material are public online1.

Footnote 1: All code and supplementary material: [https://github.com/luigiberbducci/learning_adaptive_safety](https://github.com/luigiberbducci/learning_adaptive_safety)

## I Introduction

Safety is an outstanding concern in the design of learning algorithms, especially for safety-critical applications. Control barrier functions (CBFs) have emerged in this context as a very powerful formal approach to ensuring safety [1, 2, 3]. Moreover, the integration of CBFs in reinforcement learning (RL) holds a huge potential for safe exploration [4, 5, 6, 7, 8]. However, the success of CBFs in RL is often confined to simple settings, such as single agents or cooperative multi-agent systems with very limited interaction. This is because in multi-agent scenarios, the intricate interplay among agents poses unique challenges to the design of the CBFs and their associated _extended class-\(\mathcal{K}_{\infty}\) functions_. These functions, in the following abbreviated as _class-\(\mathcal{K}\) functions_, control the rate with which the agent can approach the safe-set boundary. While manual tuning of the class-\(\mathcal{K}\) functions is feasible in simple tasks, it becomes challenging in multi-agent environments due to the unpredictable effects of small changes in their parameters. The richness of interactions and the limited information about other agents' policies make it difficult to trade off safety and long-term performance by adjusting just a few parameters. Previous work on Adaptive CBFs has primarily concentrated on enhancing the feasibility of the optimization problem by introducing time-varying coefficients within the CBF condition [9]. However, it is worth noting that these approaches have been predominantly applied in single-agent or cooperative environments, often relying on the availability of substantial historical data for optimization [10, 11]. These methods face two key challenges:

* _Overlooking long-term objectives_: the narrow focus on feasibility and short-term performance within the adaptive CBF framework struggles to capture long-term objectives and potentially leads to sub-par solutions.
* _Prior-data scarcity_: the assumption that historical data is available does not always hold, especially in non-cooperative settings where other agents may be reluctant to reveal their strategies, thereby hindering the achievement of sufficient coverage of diverse strategies.

To address these two challenges in interactive multi-agent environments, we propose a novel approach based on RL and adaptive CBFs. In order to account for the lack of knowledge of the other agents' strategies, we exploit direct interactions with these agents to uncover their intentions. Our **main contributions** in this paper are the following:

1. _An adaptive safe-RL framework (ASRL)_, where a low-level CBF controller ensures safety and a high-level one optimizes the policy and state-dependent CBF coefficients.
2. _A model-free learning approach_, which is based on RL for efficient adaptation to different agents and scenarios through direct interaction with these agents.
3. _A comprehensive evaluation of ASRL_ in multi-agent environments, in order to assess the adaptation to different types of agents and degrees of cooperation.

As shown in Figure 1, by combining a model-based low-level control layer with model-free RL, ASRL enhances adaptiveness to diverse behaviors exhibited by other agents, relieving engineers from the burden of manually tuning the CBF, in favour of a systematic approach to optimize general long-term objectives and trade off safety and performance.

Fig. 1: The proposed hierarchical adaptive framework for multi-agent systems, where a policy \(\pi_{\xi}\) and a safety module \(\gamma_{\psi}\) are jointly optimized for safe and adaptive interaction.

_Motivating Example:_ Consider a navigation task where multiple robots have distinct starting positions and specific goals. The ego robot needs to reach its goal while avoiding collisions with other robots, without any knowledge of their parameterization. To ensure safe navigation, we equip the ego robot with a CBF, acting as a protective safety shield. However, the emergent behavior of the ego robot can vary significantly when adjusting the coefficient \(\gamma\) of the CBF class-\(\mathcal{K}\) function, as shown in Figure 2 (left). In some simulations, the ego fails to reach its goal due to cautious maneuvers dictated by the CBF condition, while the same controller successfully completes the task in other configurations. Two factors contribute to this: (1) the ego robot adapts its maneuvers based on the CBF condition, varying its assertiveness level; (2) other robots react to the ego's actions, leading to configurations that can aid or hinder task completion. Figure 2 (right) supports this hypothesis by showing how diverse CBF coefficients influence the long-term performance of the ego agent, measured by success and collision rates. The optimal coefficients depend on scenario-specific characteristics, such as the number of agents and their parameters. This underscores the importance of an adaptive approach, which will be detailed in the following sections of this work.
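To make the example concrete, the coefficient sweep behind Figure 2 (right) can be sketched as follows; the environment factory and its `step`/`info` interface are hypothetical placeholders for the actual simulator, and the sketch only illustrates the grid evaluation:

```python
import numpy as np

def sweep_gamma(make_env, gammas, episodes=25):
    """Evaluate fixed CBF coefficients by their long-term success/collision rates."""
    stats = {}
    for gamma in gammas:
        success, collision = 0, 0
        for _ in range(episodes):
            env = make_env()                      # hypothetical factory
            env.reset()
            done, info = False, {}
            while not done:
                _, done, info = env.step(gamma)   # the CBF filter uses this gamma
            success += int(info.get("reached_goal", False))
            collision += int(info.get("collision", False))
        stats[gamma] = (success / episodes, collision / episodes)
    return stats
```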
## II Background

Consider the stochastic game \((\mathcal{I},S,\bar{A},f,r,\rho_{0},T,\alpha)\), where \(\mathcal{I}=\{1,2,\cdots,q\}\) denotes the set of \(q\) agents, \(S\) and \(\bar{A}\) are the sets of states and joint actions, \(f:S\times A\to S\) is the deterministic transition function, \(r:S\times A\times S\rightarrow\mathbb{R}\) is the reward function, \(\rho_{0}\) represents the distribution of initial conditions, \(T\in\mathbb{N}\) denotes the time horizon, and \(\alpha\in[0,1]\) is the discount factor (denoted by \(\alpha\) to avoid confusion with the \(\gamma\) adopted in the CBF). At timestep \(t\), each agent \(i\) picks an action \(a_{t}^{i}\) according to its policy \(\pi_{i}\), and the system state evolves according to the joint action \(a_{t}=\times_{i\in\mathcal{I}}\)\(a_{t}^{i}\) by the discrete-time dynamics \[s_{t+1}=f(s_{t},a_{t}). \tag{1}\] We consider the problem of finding an optimal policy for the _ego_ agent \(\pi_{1}\), assuming the non-controlled agents \(\pi_{i},i>1,\) have unobservable parameters distributed according to \(\rho_{0}.\) In the following, we denote the ego with \(\pi\) and formulate the problem as a constrained partially observable Markov decision process (CPOMDP) \((S,A,\Omega,O,f,r,h,\rho_{0},T,\alpha)\), where the actions \(A\) refer to \(\pi\), the observations \(\Omega\) consist of the observable states without the other agents' parameters, obtained by \(O:S\rightarrow\Omega\), and \(h\) is a continuously differentiable function delimiting the safe set of states for the ego agent. We define the safe set, \(\mathcal{C}\), by the superlevel set of \(h\): \[\mathcal{C}=\{s\in S:h(s)\geq 0\}. \tag{2}\] **Definition 1**.: (Forward invariance and safety) The set \(\mathcal{C}\) is _forward invariant_ if for every \(s_{0}\in\mathcal{C}\), \(s_{t}\in\mathcal{C}\) holds for all \(t\). If \(\mathcal{C}\) is forward invariant, we say the system (1) is safe. **Definition 2**.: (CBF [12]) Given a set \(\mathcal{C}\subset\mathbb{R}^{n}\) defined by (2), the continuously differentiable function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a _discrete-time control barrier function_ (CBF) for the dynamical system (1) if there exists \(\gamma\in[0,1]\) such that for all \(s_{t}\in\mathcal{C}\), \[\sup_{a_{t}\in A}\left[h\Big{(}f(s_{t},a_{t})\Big{)}+(\gamma-1)h(s_{t})\right] \geq 0. \tag{3}\] Note that the parameter \(\gamma\) influences the conservativeness of the agent's behaviour: the agent becomes less conservative (i.e., it may approach the safe boundary faster) as \(\gamma\) goes to 1. However, \(\gamma\) is fixed in the above vanilla CBF definition, which implies a fixed degree of conservativeness. To overcome this limitation, we introduce the adaptive version of the discrete-time CBF. **Definition 3**.: (Adaptive control barrier function) Given a set \(\mathcal{C}\subset\mathbb{R}^{n}\) defined by (2), the continuously differentiable function \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a _discrete-time adaptive control barrier function_ (Adaptive CBF) for the dynamical system (1) if for all \(s_{t}\in\mathcal{C}\), there exists \(\gamma(s_{t})\in[0,1]\) such that \[\sup_{a_{t}\in A}\left[h\Big{(}f(s_{t},a_{t})\Big{)}+(\gamma(s_{t})-1)h(s_{t} )\right]\geq 0. \tag{4}\] The Adaptive CBF differs from (3) by the state-dependent function \(\gamma:S\rightarrow[0,1]\). We now demonstrate that state-dependent coefficients do not hinder the safety guarantees of the CBF.
For any potentially unsafe nominal action \(a_{t}^{nom}\), we can obtain a safe action by solving the quadratic program (QP): \[\begin{array}{rl}a_{t}=\underset{a_{t}\in A}{\text{argmin}}&\|a_{t}-a_{t}^{ nom}\|_{2}^{2}\\ \text{s.t.}&h(f(s_{t},a_{t}))+(\gamma(s_{t})-1)h(s_{t})\geq 0.\end{array} \tag{5}\] **Lemma 1**.: _For the dynamical system (1), if the QP problem in (5) is feasible for all \(s\in\mathcal{C}\), then the controller derived from (5) renders the set \(\mathcal{C}\) forward invariant, i.e., safety is preserved._ Proof.: For any initial state \(s_{0}\in\mathcal{C}\), we can derive that: \[h(s_{t}) =h(f(s_{t-1},a_{t-1}))\geq\left(1-\gamma(s_{t-1})\right)h(s_{t-1})\] \[\geq\left(1-\gamma(s_{t-1})\right)\left(1-\gamma(s_{t-2})\right) \cdot h(s_{t-2})\] \[\cdots\] \[\geq\prod_{i=0}^{t-1}(1-\gamma(s_{i}))\,h(s_{0})\geq 0 \tag{6}\] which implies that safety is preserved at any time \(t\).

The practical usability of CBFs in multi-agent systems comes with a few important remarks. Problem (5) is typically non-convex, and prior work focused on linear CBFs and convex formulations for improved efficiency and optimal solutions [4]. In contrast, ASRL does not assume such structures or the ability to optimally solve the QP. Instead, ASRL achieves adaptiveness through the class-\(\mathcal{K}\) coefficients, allowing users to provide any CBF. Moreover, handling multiple CBF constraints and input limits introduces feasibility issues [13, 14]. This is further exacerbated in multi-agent systems, where limited information about the other agents precludes achieving full safety guarantees. These uncertainties can be quantified and integrated _explicitly_ in robust CBF formulations, with probabilistic guarantees [15]. In this work, the uncertainty is _implicitly_ learned in the adaptive model and the safety requirement is relaxed into a chance constraint, as formulated in the next section.

## III Problem Statement

We consider the problem of learning a policy \(\pi_{\theta}\) with parameters \(\theta\) for an agent operating in a multi-agent environment, with partial observability of the other agents' parameters. We formulate it as a CPOMDP with a cost function and aim to find a solution that keeps the occurrence of safety violations below a desired level of tolerance \(d\in[0,1]\). Formally, the policy optimization problem is defined as: \[\max_{\theta}\ \mathcal{J}_{R}(\theta)=\mathbb{E}_{\tau\sim\pi_{ \theta}}\big{[}\sum_{t}^{H}\alpha^{t}\,r(s_{t},a_{t},s_{t+1})\big{]} \tag{7}\] \[\text{s.t.}\ \mathcal{J}_{C}(\theta)=\mathbb{P}_{\tau\sim\pi_{ \theta}}(h(s_{t})<0)\leq d\] where the trajectory \(\tau=(s_{0},a_{0},s_{1},a_{1},...)\) results from the interaction of \(\pi\) with the agents \(\pi_{i>1}\), given the initial distribution \(s_{0},\pi_{i>1}\sim\rho_{0}\), and the dynamics \(s_{t+1}=f(s_{t},a_{t})\).

## IV Adaptive Safe Reinforcement Learning

We present ASRL, the main contribution of this work: an adaptive framework for multi-agent systems, which combines low-level model-based control and model-free RL in a hierarchical fashion, together with the associated optimization algorithm.

**Hierarchical Model Architecture.** We structure the autonomous agent \(\pi\) into a high-level model, which drives the system towards the desired goal and provides an adaptive class-\(\mathcal{K}\) function, and a low-level layer, which enforces the system safety using the barrier function \(h\), the actions, and the coefficients from the high-level model.
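The low-level layer amounts to the safety filter of Eq. (5). A minimal sketch (assuming user-supplied callables for the dynamics \(f\), barrier \(h\), and coefficient function \(\gamma\); a generic local solver is used, since the problem is non-convex in general):

```python
import numpy as np
from scipy.optimize import minimize

def safety_filter(a_nom, s, f, h, gamma, bounds):
    """Project the nominal action onto the set satisfying the discrete-time
    CBF condition h(f(s, a)) + (gamma(s) - 1) h(s) >= 0, as in the QP (5)."""
    cbf = {"type": "ineq",
           "fun": lambda a: h(f(s, a)) + (gamma(s) - 1.0) * h(s)}
    res = minimize(lambda a: np.sum((a - a_nom) ** 2), x0=a_nom,
                   bounds=bounds, constraints=[cbf], method="SLSQP")
    # fall back to the nominal action if the solver fails; Lemma 1 assumes
    # feasibility, so this branch signals a modeling problem, not safety
    return res.x if res.success else a_nom
```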
To address the partial observability of other agents, we adopt a novel multi-head actor with the following components: \[\text{Representation model:}\qquad\quad z_{t}=\phi_{\eta}(o_{t-k:t}) \tag{8}\] \[\text{Policy model:}\qquad\qquad\qquad\qquad\quad a_{t}\sim\pi_{ \xi}(\ \cdot\ |\ z_{t})\] (9) \[\text{Safety model:}\qquad\qquad\qquad\gamma_{t}\sim\gamma_{\psi}( \ \cdot\ |\ z_{t}) \tag{10}\] It consists of a representation model \(\phi_{\eta}\), which encodes the past \(k\) observations \(o_{t-k:t}\) into an embedding \(z\), a policy head \(\pi_{\xi}\), which produces the action \(a_{t}\), and an adaptive-safety head \(\gamma_{\psi}\), which outputs the CBF coefficient \(\gamma_{t}\). Our multi-head model is constructed with a specific emphasis on modularity, thereby enforcing a separation of concerns in the design. The joint training of these components is carried out as a single integrated model, with the parameters denoted as \(\theta=(\eta,\xi,\psi)\) and the details described in the next section.

**Learning Adaptive Behaviors.** We solve the Optimization Problem (7) by considering its unconstrained relaxation: \[\min_{\lambda\geq 0}\ \max_{\pi_{\theta}\in\Pi}\ \mathcal{J}(\theta,\lambda)= \min_{\lambda\geq 0}\ \max_{\pi_{\theta}\in\Pi}\ \mathcal{J}_{R}(\theta)-\lambda\,\mathcal{J}_{C}(\theta) \tag{11}\] where \(\mathcal{J}\) is the Lagrangian, and \(\lambda\geq 0\) is the Lagrange multiplier, which acts as a penalty term. The two optimization steps are interleaved until convergence, seeking a saddle point of the original problem, which is a feasible solution.

_Policy Update._ In the model-free setting, due to the lack of knowledge of the other agents, the true return and cost distributions are induced by the policy rollouts and are unknown. We use a policy-gradient algorithm [16], jointly optimizing an actor model \(\pi_{\theta}\) and a critic model \(v_{\zeta}\). The critic simply regresses the value estimates \(v_{target}(z_{t})\), minimizing the loss: \[\mathcal{L}_{v}(\zeta)=\mathbb{E}_{t}\big{[}(v_{\zeta}(z_{t})-v_{target}(z_{t}) )^{2}\big{]} \tag{12}\] The actor is updated by maximizing the following loss \[\mathcal{L}_{\pi}(\theta)=\mathcal{L}_{R}(\theta)-\lambda_{k}\mathcal{L}_{C}( \theta)+\beta\mathcal{L}_{ent}(\theta) \tag{13}\] where \(\lambda_{k}\) is the Lagrange multiplier introduced in Eq. (11) at the \(k\)-th update, \(\mathcal{L}_{R},\mathcal{L}_{C}\) denote the surrogate clipped losses for cumulative rewards and costs [16], and \(\mathcal{L}_{ent}\) denotes the entropy bonus for exploration. We use generalized advantage estimation (GAE) [17] to trade off bias and variance in the advantage estimates \(\hat{A}_{R,t},\hat{A}_{C,t}\) for return and cost, respectively. The surrogate clipped losses are defined as: \[\mathcal{L}_{R}(\theta)=\mathbb{E}_{t}\big{[}min(r_{t}(\theta)\hat {A}_{R,t},clip(r_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{R,t})\big{]}\] \[\mathcal{L}_{C}(\theta)=\mathbb{E}_{t}\big{[}min(r_{t}(\theta)\hat {A}_{C,t},clip(r_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{C,t})\big{]}\] \[r_{t}(\theta)=\frac{\pi_{\theta}(a_{t}|z_{t})}{\pi_{\theta_{old}} (a_{t}|z_{t})}\]

_Lagrange Multiplier Update._ The Lagrange multiplier acts as an adaptive penalty in the unconstrained problem, to make infeasible solutions sub-optimal.

Fig. 2: **Left:** Simulations with non-cooperative (left) and cooperative agents (right) under different CBF coefficients \(\gamma\). Time of Arrival (_ToA_) and Minimum Distance to Collision (_DtC_) are reported for the ego agent (_blue_). The long-term effects diverge based on the coefficients, showing the importance of adaptation. **Right:** Average performance of CBF coefficients under different numbers of agents (_top_) and safety distances \(D_{s,others}\) (_bottom_) of other agents' controllers (\(n=25\)).
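Before turning to the multiplier update, the multi-head actor of Eqs. (8)-(10) can be made concrete with a minimal PyTorch sketch (the MLP encoder and the layer sizes are illustrative assumptions, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class MultiHeadActor(nn.Module):
    """Shared encoder phi_eta over the last k observations, a Gaussian policy
    head pi_xi, and a safety head gamma_psi producing the CBF coefficient."""
    def __init__(self, obs_dim, k, act_dim, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(k * obs_dim, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, act_dim)             # policy head
        self.log_std = nn.Parameter(torch.zeros(act_dim))
        self.gamma_head = nn.Linear(hidden, 1)           # adaptive-safety head

    def forward(self, obs_history):                      # (batch, k * obs_dim)
        z = self.encoder(obs_history)                    # embedding z_t
        pi = torch.distributions.Normal(self.mu(z), self.log_std.exp())
        gamma = torch.sigmoid(self.gamma_head(z))        # gamma_t in [0, 1]
        return pi, gamma
```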
We update the Lagrange multiplier with the PID update rule [18], because of its effectiveness and simplicity of implementation. This update rule resembles the tuning of PID controllers to correct the oscillations and overshooting of traditional Lagrangian methods [19]. The update rule at iteration \(k\) is as follows: \[\lambda_{k}\leftarrow(K_{P}\Delta_{k}+K_{I}I_{k}+K_{D}\delta_{k})_{+} \tag{14}\] where \(K_{P},K_{I},K_{D}\in\mathcal{R}_{+}\) are hyperparameters for the proportional, integral and derivative errors, defined as: \[\Delta_{k}=\mathcal{J}_{C,k}-d \tag{15}\] \[I_{k}=(I_{k-1}+\Delta_{k})_{+}\] (16) \[\delta_{k}=\mathcal{J}_{C,k}-\mathcal{J}_{C,k-1} \tag{17}\]

**Low-level Control Design.** This section presents the design of CBFs for multi-agent systems, which involves two steps:

1. _State and dynamics identification_: we model the underlying system's dynamics. This step can follow first principles, employing physical laws and motion equations, data-driven approaches, or a combination of these.
2. _CBF design_: we design CBFs to enforce safety, mapping states to numerical values. This step presents several challenges in formalizing safety, defining barriers, and efficient modeling and parameter selection to ensure feasibility while balancing safety and performance.

To exemplify this methodology, we show the low-level control design of our motivating example.

_Multi-robot system._ Consider a system with \(n\) agents [20], where each agent \(i\) has control-affine dynamics: \[s_{i,t+1}=\begin{bmatrix}p_{i,t+1}\\ v_{i,t+1}\end{bmatrix}=\begin{bmatrix}I&\Delta t\\ 0&I\end{bmatrix}\begin{bmatrix}p_{i,t}\\ v_{i,t}\end{bmatrix}+\begin{bmatrix}0\\ \Delta t\end{bmatrix}a_{i,t} \tag{18}\] where \(\Delta t\) is the discrete time-step, and \(p_{i}\in\mathcal{R}^{2}\), \(v_{i}\in\mathcal{R}^{2}\), \(a_{i}\in\mathcal{R}^{2}\) denote the position, velocity and acceleration of robot \(i\), respectively. We can write the joint multi-agent system as: \[s_{t+1}=\begin{bmatrix}p_{t+1}\\ v_{t+1}\\ z_{t+1}\end{bmatrix}=\begin{bmatrix}f_{p}(s_{t})\\ f_{v}(s_{t})\\ f_{z}(s_{t})\end{bmatrix}+\begin{bmatrix}g_{p}(s_{t})\\ g_{v}(s_{t})\\ g_{z}(s_{t})\end{bmatrix}a \tag{19}\] where \(p\in\mathcal{R}^{2}\) and \(v\in\mathcal{R}^{2}\) denote the position and velocity of the ego agent, \(a\in\mathcal{R}^{2}\) denotes the ego action, and \(z\in\mathcal{R}^{n-4}\) denotes the other agents' states. The real-valued functions \(f_{o},g_{o}\) are known for \(o\in\{p,v\}\). However, the other agents' actions are unknown to the ego (i.e., \(g_{z}=0\)). Without loss of generality, they can be assumed to be a function of the joint state \(s_{t}\) and part of the dynamics \(f_{z}\). We consider the CBFs as pairwise safety constraints between the ego agent \(i\) and any other agent \(j\neq i\): \[h(x)=\frac{\Delta p_{ij}^{T}}{||\Delta p_{ij}||}\Delta v_{ij}+\sqrt{a_{max}(|| \Delta p_{ij}||-D_{s})} \tag{20}\] where \(a_{\text{max}}\) denotes the maximum braking that the ego agent can apply to avoid a collision, \(\Delta p_{ij}\) represents the relative position \(p_{i}-p_{j}\), and \(\Delta v_{ij}\) the relative velocity \(v_{i}-v_{j}\).
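A direct transcription of the pairwise barrier (20), assuming \(||\Delta p_{ij}||\geq D_{s}\) so that the square root is real:

```python
import numpy as np

def h_pairwise(p_i, v_i, p_j, v_j, a_max, D_s):
    """Pairwise collision CBF of Eq. (20) between ego i and agent j."""
    dp, dv = p_i - p_j, v_i - v_j
    dist = np.linalg.norm(dp)
    return dp @ dv / dist + np.sqrt(a_max * (dist - D_s))
```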
_Multi-agent Racing._ We consider a second use case of competitive multi-agent racing. The dynamics are modeled using an Euler discretization of the kinematic bicycle model, as in [21]. We consider two safety specifications, for collisions with walls and with opponents, and model them using distances in Frenet and Cartesian coordinates, respectively. For conciseness, we describe the dynamics and CBFs in the Appendix.

## V Experiments

In this section, we describe the experiments to evaluate our adaptive safe-learning approach in multi-agent systems.

**Simulation.** We conducted our experiments in the multi-agent environments presented in the previous section. For the multi-robot system, we use the simulator and CBF from [15] with its simplest collision-avoidance formulation. For the multi-agent racing system, we use the F1tenth simulator [22], which provides simulation of multiple vehicles and sensory inputs. In both environments, the CBF uses a constant-velocity model (CVM) for the other agents' behaviors.

**Training.** We implemented the ASRL algorithm with the omnisafe library [23]. During training, we randomize the starting conditions and collect episodes of 15 seconds. The agent observes the last \(5\) states and learns with progress-based reward and sparse cost signals: for the multi-robot system, the cumulative reward is \(1\) for reaching the goal location and the cost is \(1\) for a collision with any opponent; for the racing system, the reward is proportional to the relative distance in front of the other vehicles and the cost is \(1\) for collisions. We evaluate the agent by averaging the reward and cost with a moving average over the last \(100\) episodes and train the agents for \(1\) million steps. More details on the environments and training are reported in the Appendix.

**Agents' Randomization:** To create the conditions for adaptiveness, we randomize the number of agents and their policies in each environment. In the multi-robot system, we select between \(3\) and \(7\) agents at each episode and randomize their policies through the safety distance \(D_{s}\) used to avoid obstacles. In the multi-agent racing system, we simulate \(2\) vehicles starting in front of the ego vehicle and tracking a reference line with a velocity profile randomly scaled by a factor normally distributed with \(\mu=0.60\) and \(\sigma=0.05\).

**Comparison End-to-End.** In Figure 3, we compare ASRL with the following state-of-the-art safe RL baselines: PPO Lagrangian, DDPG Lagrangian and TD3 Lagrangian, which are safe versions of the on-policy PPO [16] and the off-policy DDPG [24] and TD3 [25], respectively; CPO, a trust-region method with near-satisfaction guarantees [26]; IPO, an interior-point policy optimization method [27]; and PPO Saute, a state-augmented PPO on the Saute MDP [28]. In both environments, a noticeable performance gap emerges with on-policy algorithms, which struggle to achieve safe solutions within the desired limit. Their on-policy nature may require extended training periods to attain safe and optimal results. Meanwhile, in the multi-robot environment, the off-policy algorithms DDPG-Lag and TD3-Lag achieve safe solutions but at the expense of poor overall performance. Similarly, in the multi-agent racing environment, on-policy algorithms continue to fail while, notably, TD3-Lag exhibits slow but consistent progress, reaching a near-optimal level of performance by the end of training.
In contrast, our ASRL approach, incorporating trainable policies and CBF coefficients, quickly converges to high returns while consistently staying around the desired cost limit. Notably, in the multi-robot task, it starts above the cost limit and gradually reaches it, whereas in the racing task, it starts below the limit and gradually adjusts its safety level to approach the threshold. Our results suggest that our integration of the CBF into the agent model improves exploration during training, resulting in a reduction in cumulative violation costs comparable to or better than off-policy methods, all while achieving significantly higher performance. This trend is graphically depicted in Figure 3 (bottom).

**Ablation Study on Learning Components.** In this experiment, we evaluate the impact of adaptive safety and demonstrate its domain-adaptation skills. To assess adaptive safety in isolation, we replace the policy module with a non-trainable controller. During training, we use a Perturbed Gaussian policy [29] to foster exploration. Details on training and controllers are reported in the supplementary material. We evaluate the performance of the ablated model against traditional control-theoretic approaches: Standard CBF (S-CBF), which uses a CBF with fixed class-\(\mathcal{K}\) coefficients, and Optimal-Decay Adaptive CBF (OD-CBF), which adapts the coefficients to ensure point-wise feasibility [12]. To account for the fact that these methods' performance is highly sensitive to the coefficient initialization, we discretize the range of CBF parameters into \(10\) values to cover most of the possible configurations. We collect \(100\) episodes for each configuration and report the performance in Figure 4. We observe comparable performance of OD-CBF and S-CBF in both environments, confirming that optimal-decay adaptiveness might improve the feasibility of the QP problem but cannot capture long-term objectives, such as sparse and delayed success or collision events. Conversely, our ablated model outperforms the baselines by simply adapting the CBF coefficients based on interactions with the other agents. This strongly suggests that learning adaptive safety can substantially enhance the performance of existing controllers.

**Generalization and Scalability.** To evaluate the generalization capabilities of our trained agent in multi-agent systems, we consider diverse racing scenarios, including: (1) varying the number of agents, (2) in-distribution planners with varying velocity profiles, and (3) out-of-distribution planners with new strategies and varying velocity profiles. We focus on in- and out-of-distribution opponents, deliberately excluding cross-track generalization because competitive high-speed racing demands specialized strategies, even to the extent of overfitting to track conditions. For each scenario, we sample \(10\) starting positions with the ego behind (Figure 5, top left) and run simulations for \(60\) seconds. We consider collision or lap completion as termination conditions and measure the final positioning (_rank_), as is common in racing competitions. As shown in Figure 5, the trained agent exhibits competitive performance, often reaching 1st and 2nd place despite the initial positional disadvantage. Training with \(2\) opponents proves sufficient for generalization to races with many agents. Up to \(8\) opponents, the agent consistently secures a podium spot with an average rank below 3rd place.
However, with \(9\) agents or more, the average rank exceeds the 4th position due to the limited time horizon for race completion. For _in-distribution_ planners, the agent shows good adaptation and remains robust to faster velocity profiles. Notably, the rank increase appears directly linked to the training distribution. For _out-of-distribution_ planners, we consider reactive (FTG [30]) and sampling-based planners (Lattice [31]) with different velocities. Our agent outperforms FTG, as expected due to its reactive nature, which lacks any global raceline. Moreover, we observe a competitive racing style against the Lattice planner, a robust baseline for comparison. Notably, our agent maintains a high level of performance even against high-velocity profiles (_right-most bars_), suggesting the ability to learn characteristic racing behaviors and effectively reuse them against previously unseen opponents.

Fig. 3: Learning curves of our approach and safe-RL baselines. Return, cost, and total cost averaged over \(3\) runs.

Fig. 4: Comparison of the ablated model with control-theoretic approaches using the same nominal controller. The bars show the mean rate with min/max delimiters for the same method. Performance for trained models is averaged over \(3\) runs.

**Summary of Results.** We assess our adaptive approach in two multi-agent systems with a range of diverse agents. The experiments revealed that our hierarchical integration of the CBF facilitated convergence to near-optimal agents, outperforming a variety of safe RL baselines. Moreover, our ablation study isolated the impact of the Adaptive CBF, showcasing superior adaptiveness compared to traditional control-theoretic methods. Finally, empirical evidence demonstrated our agent's ability to adapt and generalize across various racing scenarios, including unseen opponents and high-velocity profiles.

## VI Related Work

**Safe RL via CBF.** Safe RL has drawn much attention as a means to prevent visiting unsafe states in safety-critical systems [32, 33, 34, 35]. The application of CBFs in safe RL was proposed in [4] and is getting popular because of its safety guarantees and computational efficiency [6, 36]. Existing works train an RL agent to propose actions and use a vanilla CBF to enforce safety. However, to the best of our knowledge, we are the first to jointly train state-dependent CBF coefficients with an RL policy to ensure a bounded chance of violations.

**Adaptive CBFs.** Several works focus on improving the feasibility and performance of CBF controllers [9, 11, 12, 37, 38], mostly in single-agent or cooperative systems. In these settings, the CBF coefficients are optimized through gradient-based methods [37] or by policy distillation of a network with a differentiable CBF layer, under the assumption of available expert demonstrations [11]. None of these works focus on multi-agent environments and adaptation to changing policies. In contrast, our approach leverages the discrete-time CBF, which better fits MDP theory, to train a model tailored for multi-agent adaptation with online RL. CBF coefficients are updated in [38] based on the level of cooperation of the other agents towards the ego. However, they assume the ego agent knows the other agents' actions beforehand. Moreover, they do not offer a direct way to update the CBF coefficients, only mentioning that their derivative is monotonically increasing w.r.t. the level of cooperation, which still requires user intervention.
In contrast, we do not rely on such an assumption and leverage RL to train the coefficients, thus replacing any manual effort with a systematic methodology. Moreover, we demonstrate our approach in a challenging multi-agent racing scenario.

**Multi-agent CBF.** In multi-agent systems, prior research proposed CBFs to ensure collision-free behavior [39, 20]. Among these, [40] proposes a scalable decentralized approach to control multiple agents, [15] presents a robust CBF with an uncertainty model learned from data, and [10] focuses on the joint optimization of the control policy and CBFs. However, these works do not operate within an RL setting and do not consider adaptation to many agents' policies. Moreover, they mostly rely on a fixed class-\(\mathcal{K}\) function.

## VII Conclusions

We present Adaptive Safe RL (ASRL) for multi-agent systems with partial observability, which learns from interactions with the other agents. Our novel ASRL combines model-free RL and adaptive CBFs to optimize long-term objectives under diverse agent strategies while adhering to the desired cost constraint. ASRL surpasses traditional learning-based and control-theoretic approaches, demonstrating adaptiveness and generalization across various multi-agent conditions, such as the number of agents and their parameters. Thus, ASRL enables safe autonomy in dynamic multi-agent settings.

**Why ASRL in multi-agent systems?** CBFs are a valuable tool, but their design and tuning are challenging with multiple agents. We enhance CBFs with adaptive coefficients, integrating them into a trainable architecture and optimizing them for diverse behaviors and long-term objectives.

**How does ASRL compare to existing approaches?** ASRL retains the benefits of CBFs over learning methods, enabling efficient exploration while consistently adhering to cost limits. Compared to control-theoretic methods, our trainable model achieves superior performance and adaptability.

**What are the limitations of ASRL?** We primarily focus on systems with a relative degree of \(1\). To consider higher-order systems and CBFs [41], it would be possible to introduce multiple coefficients in our approach. However, expanding the action space can make high-dimensional continuous control and exploration challenging. To address this, careful modeling of the action space is essential for tractability. Also, ASRL does not assume full observability of the other agents or explicit uncertainty quantification, thereby limiting its ability to guarantee safety in all scenarios. While CBFs typically assume perfect knowledge of the system, this assumption rarely holds in practical scenarios. To address this limitation, we adopt a chance-constraint formulation and ensure safety within certain bounds. Ongoing research is exploring alternative methods and uncertainty quantification in pursuit of robust solutions.

Fig. 5: Generalization in multi-agent racing (_top left_). Performance measures the ego rank (_lower is better_) under previously unseen numbers of agents, velocity profiles \(v_{opp}\), and planners. The training distribution is overlaid (_blue_).

## VIII Acknowledgements

L.B. was supported by the Doctoral College in Resilient Embedded Systems (DCRES).

### Nominal planners for Ablation study

In the ablation study, we train the adaptive safety module and control the ego with a built-in controller. In this section, we describe the controllers adopted in each use case.

#### Multi-Robot System

We use model predictive control (MPC) [42] to generate (potentially unsafe) actions for the ego robot.
The MPC steers the robot towards the goal; it is unaware of the other agents and relies on the CBF to avoid collisions. Specifically, our nominal controller solves the following constrained finite-horizon optimal control problem at each time step \(t\): \[a_{t:t+N-1} =\operatorname*{arg\,min}_{a_{t:t+N-1|t}}p(s_{t+N|t})+\sum_{k=0}^ {N-1}q(s_{t+k|t},a_{t+k|t})\] (21a) s.t. \[s_{t+k+1|t}=f(s_{t+k|t})+g(s_{t+k|t})a_{t+k|t}, \tag{21b}\] \[s_{t+k|t}\in S,a_{t+k|t}\in A\] (21c) \[s_{t|t}=s_{t},\] (21d) \[s_{t+N|t}\in S_{f}, \tag{21e}\] where \(N\) is the horizon, \(p\) is the final state cost, \(q\) is the state and control cost for each time step, \(k\) ranges from \(0\) to \(N-1\), (21b) is the joint dynamics of the multi-agent system shown in (19), and \(S_{f}\) is the desired final state set. Note that the system is evolved by applying \(a_{t}\) at each time step; the above MPC is then solved again to obtain the next action to apply. In our case, we choose \[p(s_{t})=-s_{t}^{T}Qs_{goal}\] and \[q(s_{t},a_{t})=s_{t}^{T}Qs_{t}+a_{t}^{T}Ra_{t},\] where \(s_{goal}\) is the goal state and \[Q=\begin{bmatrix}10&0&0&0\\ 0&10&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},\ R=0.02\,I_{4}. \tag{22}\]

#### Multi-Agent Racing

We use a sampling-based planner known as a lattice planner. It observes the current poses and velocities of the ego vehicle and the two closest opponents, and it knows the raceline as a sequence of waypoints, generated as in [43]. We sample local goals in a grid around the optimal raceline and generate corresponding dynamically feasible trajectories from the current pose to each local goal (see, e.g., [44, 45]). Each local plan is evaluated based on the following cost terms:

* Cost based on the arc length, to prefer shorter trajectories;
* Cost based on the maximum curvature, to avoid sharp steering;
* Cost based on the similarity to the previous plan;
* Cost based on the tracking error to the raceline;
* Cost for collision with opponents and track boundaries;
* Cost for low speed, to encourage fast racing;
* Cost for the co-occurrence of high speed and high curvature.

### Modeling of Multi-agent Racing System

The state of each racing agent \(i\) at time \(t+1\) is described by the following variables: \[\begin{bmatrix}x_{i,t+1}\\ y_{i,t+1}\\ \psi_{i,t+1}\\ v_{i,t+1}\\ s_{i,t+1}\\ e_{i,t+1}\end{bmatrix}=\begin{bmatrix}x_{i,t}+v_{i,t}\,cos(\psi_{i,t}+\beta_{i })\,dt\\ y_{i,t}+v_{i,t}\,sin(\psi_{i,t}+\beta_{i})\,dt\\ \psi_{i,t}+\frac{v_{i,t}}{l_{r}}sin\beta_{i}\,dt\\ v_{i,t}+a_{i}\,dt\\ s_{i,t}+\frac{v_{i,t}\,cos(\psi_{i,t}+\beta_{i}-\psi_{c}(s_{i,t}))}{1-e_{i,t}\,k _{c}(s_{i,t})}\,dt\\ e_{i,t}+v_{i,t}\,sin(\psi_{i,t}+\beta_{i}-\psi_{c}(s_{i,t}))\,dt\end{bmatrix} \tag{23}\] where, for agent \(i\) and time \(t\):

* \(x_{i,t},y_{i,t},\psi_{i,t}\) represent the Cartesian position and orientation,
* \(v_{i,t}\) denotes the velocity,
* \(s_{i,t},e_{i,t}\) represent the longitudinal and lateral Frenet coordinates,
* \(\beta_{i}\) denotes the slip angle, computed from the steering angle \(\delta_{i}\) using the relation \[\beta_{i}=\tan^{-1}\left(\frac{l_{r}}{l_{f}+l_{r}}\tan(\delta_{i})\right),\] with \(l_{r}\) and \(l_{f}\) representing the distances from the center of gravity to the rear and front axles, respectively.

The model also considers the heading of the track along the center-line, denoted by \(\psi_{c}\), and the curvature of the center-line, denoted by \(k_{c}\). These parameters are necessary to compute the evolution of \(s_{i},e_{i}\).
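For concreteness, one Euler step of (23) translates into the following sketch (the `track` helper returning \(\psi_{c}(s)\) and \(k_{c}(s)\) is a hypothetical interface, not part of the simulator's actual API):

```python
import numpy as np

def bicycle_step(state, a, beta, dt, track, l_r):
    """One Euler step of the kinematic bicycle model of Eq. (23).
    state = (x, y, psi, v, s, e); inputs are acceleration a and slip angle beta."""
    x, y, psi, v, s, e = state
    psi_c, k_c = track.heading(s), track.curvature(s)
    return np.array([
        x + v * np.cos(psi + beta) * dt,
        y + v * np.sin(psi + beta) * dt,
        psi + v / l_r * np.sin(beta) * dt,
        v + a * dt,
        s + v * np.cos(psi + beta - psi_c) / (1.0 - e * k_c) * dt,
        e + v * np.sin(psi + beta - psi_c) * dt,
    ])
```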
We rewrite the control inputs as \(a_{i},\beta_{i}\) instead of the usual \(a_{i},\delta_{i}\) to use a more tractable model with relative degree \(1\).

#### Control Barrier Function

In the context of racing, we consider two different types of collisions, respectively with walls and with other vehicles.

* To handle collisions with the walls, we use the Frenet coordinates and define safety as keeping the lateral coordinate within a safe margin \(d_{margin}\) from the track boundaries. To do so, for each agent \(i\) at time \(t\), we consider the lateral coordinate \(e_{i,t}\), its velocity \(\dot{e}_{i,t}\), and assume the agent can brake with deceleration \(a_{\text{brake}}\). Formally, the CBF for wall collision is defined as: \[h_{wall}(x)=|e_{wall}-e_{i,t}|-\frac{\dot{e}_{i,t}^{2}}{a_{brake}}-d_{margin}.\] (24)
* For collisions with other opponents, we use the same formulation as in the multi-robot system (e.g., with opponent \(j\)): \[h_{opp}(x)\!=\!\frac{\Delta p_{ij}^{T}}{||\Delta p_{ij}||}\Delta v_{ij}\!+\! \sqrt{a_{max}(v_{i,t})(||\Delta p_{ij}||\!-\!D_{s})}. \tag{25}\] However, we introduce a velocity-dependent braking acceleration \(a_{max}(v_{i,t})=a_{max}\left(1-\frac{v_{i,t}}{v_{max}}\right)\) to discourage driving at high speed in the direction of an opponent, as this would increase the chance of collision.
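The two racing barriers can be sketched as follows (a minimal transcription of Eqs. (24) and (25); the opponent barrier reuses the pairwise form of Eq. (20) with the velocity-dependent braking term, and both assume their square-root and distance arguments are valid):

```python
import numpy as np

def h_wall(e, e_dot, e_wall, a_brake, d_margin):
    """Wall-collision CBF of Eq. (24) in Frenet coordinates."""
    return abs(e_wall - e) - e_dot**2 / a_brake - d_margin

def h_opponent(p_i, v_i, p_j, v_j, v, a_max, v_max, D_s):
    """Opponent-collision CBF of Eq. (25), with the velocity-dependent
    braking acceleration a_max(v) = a_max * (1 - v / v_max)."""
    dp, dv = p_i - p_j, v_i - v_j
    dist = np.linalg.norm(dp)                 # requires dist >= D_s
    a_v = a_max * (1.0 - v / v_max)
    return dp @ dv / dist + np.sqrt(a_v * (dist - D_s))
```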
### Environments In this section, we describe the details of the observations, actions, and training signals used in each environment. #### Multi-Robot System. The agents' positions and controllers are randomized at the beginning of each episode. All non-controllable agents drive towards their goal avoiding obstacles within a distance \(D_{s}\), so sampling \(D_{s}\) for each agent gives a diverse set of policies to cope with. As an example, an agent with \(D_{s}=0\) performs aggressive maneuvers to reach the goal position as soon as possible, while one with \(D_{s}=6\) proceeds while avoiding other agents closer than \(6\) units. In our experiments, we sample \(D_{s}\) among \(0,5,6\) with about \(50\%\) chance of \(0\) and \(50\%\) chance of \(5\) or \(6\). The ego agent observes the states of the \(k\) closest agents, including their position \(x,y\), velocity \(v_{x},v_{y}\) and goal \(x_{g},y_{g}\). We do not expose the policy parametrization of the other agents. The ego actions are intermediate waypoints \(x_{wp},y_{wp}\) relative to the ego coordinate frame and bounded within \(1.0\) unit from it. The min-time controller presented in the previous section is used to compute the accelerations to reach the goal. We use a progress-based reward defined as \[r(s,a,s^{\prime})=\frac{d_{eu}(s)-d_{eu}(s^{\prime})}{d_{eu,0}}\] where \(d_{eu}\) denotes the Euclidean distance to the goal position, and \(d_{eu,0}\) is the distance from the initial state, used as a normalization constant. The cost is a sparse signal for the collision event: it is \(1\) if the ego agent is in collision with any other agent. In case of collision, the episode terminates. #### Multi-Agent Racing. The vehicles' positions are randomized at the beginning of each episode by sampling a position in the first half of the track. The non-controllable agents track a reference trajectory, precomputed offline as in [43]. We vary the agents' behavior by scaling the velocity profile of the reference trajectory by a random factor around \(0.6\). The ego vehicle starts at the end of the batch of vehicles and drives without any velocity scaling, so that it can catch up with the other vehicles and overtake them. In the observation, we include the agents' poses \(x,y,\theta\) and positions in Frenet coordinates \(s,e\) with respect to the centerline, and some track features, including \(10\) raceline waypoints with curvature for a lookahead distance of \(10\) meters. The actions consist of local plans as cubic splines. We characterize them in the Frenet frame by controlling the lateral displacement \(e_{f}\) and target velocity \(v_{f}\). From the current position of the vehicle, we use an adaptive lookahead distance based on the velocity \[l(v)=l_{min}+\frac{v\,(l_{max}-l_{min})}{l_{scale}}\] and derive the target waypoint corresponding to \(l(v),e_{f}\); a short sketch of this lookahead and of the reward below is given after the training details. Then, we fit a cubic spline from the current position to the target waypoint and use the target velocity as reference. We reward the agent based on its relative progress in front of the other agents at the end of the episode, saturated to \(+1\) when at least \(5\) meters ahead and \(-1\) when at least \(5\) meters behind. In particular, we use the following signal: \[r(s,a,s^{\prime})=\sum_{agent\neq ego}\tanh\left(\frac{d_{fr}(s_{ego},s_{agent})}{5.0}\right)\] where \(d_{fr}\) denotes the distance in the longitudinal Frenet coordinate, and \(5.0\) serves as a coefficient to account for the car length and to cap the reward beyond a margin sufficient to count as an overtake. The cost is a sparse signal for collision with the wall or any opponent: it is \(1\) if the ego agent crashes into either. In either case, whether or not the ego is at fault, the episode terminates. We do not terminate the episode if the other agents collide among themselves. ### Training Details In this section, we discuss the agent training and the specific settings we used. Table I provides detailed information on these settings for each experiment. In our Ablation Study, we identified two key decisions that significantly impact the performance: (1) changing the actor distribution to a Perturbed Gaussian; (2) reducing the frame stacking to a single observation. We observe that this setting makes the learning process harder because the degrees of freedom of the agent are restricted to the adaptive safety only, due to the use of built-in controllers. The resulting reward and cost signals become noisy, making the training signal difficult to extract. For this reason, we introduce the Perturbed Gaussian to help exploration and avoid early convergence to sub-optimal distributions in favour of high-entropy ones. Moreover, smaller frame stacking significantly reduces the observation dimensions, especially in the racing task, where the observations include many redundant track features. In future work, this approach could benefit from better model architectures to capture the task features in a more effective way.
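For completeness, a minimal sketch of the racing lookahead and overtaking reward defined in the Environments section above; \(l_{min}\), \(l_{max}\), \(l_{scale}\) are not reported in the text, so the values below are purely illustrative.

```python
import numpy as np

L_MIN, L_MAX, L_SCALE = 0.5, 3.0, 8.0   # illustrative, not from the paper

def lookahead(v):
    """Velocity-adaptive lookahead distance l(v) for placing the target waypoint."""
    return L_MIN + v * (L_MAX - L_MIN) / L_SCALE

def overtake_reward(s_ego, s_opponents):
    """Terminal racing reward: tanh-saturated longitudinal progress per opponent."""
    return sum(np.tanh((s_ego - s_j) / 5.0) for s_j in s_opponents)
```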
2309.11549
Large Synthetic Data from the arXiv for OCR Post Correction of Historic Scientific Articles
Scientific articles published prior to the "age of digitization" (~1997) require Optical Character Recognition (OCR) to transform scanned documents into machine-readable text, a process that often produces errors. We develop a pipeline for the generation of a synthetic ground truth/OCR dataset to correct the OCR results of the astrophysics literature holdings of the NASA Astrophysics Data System (ADS). By mining the arXiv we create, to the authors' knowledge, the largest scientific synthetic ground truth/OCR post correction dataset of 203,354,393 character pairs. We provide baseline models trained with this dataset and find the mean improvement in character and word error rates of 7.71% and 18.82% for historical OCR text, respectively. When used to classify parts of sentences as inline math, we find a classification F1 score of 77.82%. Interactive dashboards to explore the dataset are available online: https://readingtimemachine.github.io/projects/1-ocr-groundtruth-may2023, and data and code, within the limitations of our agreement with the arXiv, are hosted on GitHub: https://github.com/ReadingTimeMachine/ocr_post_correction.
Jill P. Naiman, Morgan G. Cosillo, Peter K. G. Williams, Alyssa Goodman
2023-09-20T18:00:02Z
http://arxiv.org/abs/2309.11549v1
# Large Synthetic Data from the ar\(\chi\)iv for OCR Post Correction of Historic Scientific Articles ###### Abstract Historical scientific articles often require Optical Character Recognition (OCR) to transform scanned documents into machine-readable text, a process that often produces errors. We present a pipeline for the generation of a synthetic ground truth/OCR dataset to correct the OCR results of the astrophysics literature holdings of the NASA Astrophysics Data System (ADS). By mining the ar\(\chi\)iv we create, to the authors' knowledge, the largest scientific synthetic ground truth/OCR post correction dataset of 203,354,393 character pairs. Baseline models trained with this dataset find the mean improvement in character and word error rates of 7.71% and 18.82% for historical OCR text, respectively. Interactive dashboards to explore the dataset are available online: [https://readingtimemachine.github.io/projects/1-ocr-groundtruth-may2023](https://readingtimemachine.github.io/projects/1-ocr-groundtruth-may2023), and data and code are hosted on GitHub: [https://github.com/ReadingTimeMachine/ocr_post_correction](https://github.com/ReadingTimeMachine/ocr_post_correction). Keywords: scholarly document processing, optical character recognition, astronomy. ## 1 Introduction The ability to digitally store and parse scientific literature is vital to ensure access and proliferation of scientific ideas [40, 28, 44]. While digital storage is supported for much of contemporary scientific literature, the text of many historical documents is "trapped" within scanned pages of paper journals and theses. Recently, various deep learning methods have been employed to extract page objects (e.g., figures) from scans [10, 11, 32, 20]. An obstacle to the extraction of information from historical articles is the accuracy of these extracted materials. This is especially of concern for text objects, which contain the bulk of the information in an article. A typical solution is to extract text with Optical Character Recognition (OCR) engines [48]. However, the generated text is often noisy, which is not only an issue for comprehension by humans and screen readers [41], but can also affect "downstream" natural language processing tasks such as topic modeling, sentence segmentation and named entity recognition [12], often causing significant errors in these processes [47]. Here, we discuss a new method for addressing OCR noise in the context of the extraction of text from a subset of \(\sim\)56k articles from the pre-digital holdings of the Astrophysics Data System (ADS)3 from \(\sim\)1850-1997 [29]. While our ultimate goal is to correct all historical text within the ADS holdings, our initial focus is on the correction of "plain text" in the main portions of articles (i.e., not text within tables or captions). Our method relies on generating synthetic data by mining the ar\(\chi\)iv source files (LaTeX/TeX files which compile to PDFs [49]) for "post correction" models which are applied to previously extracted OCR text. Footnote 3: [https://ui.adsabs.harvard.edu/](https://ui.adsabs.harvard.edu/) Post correction methods are vital to the extraction of text from the historical holdings of ADS, as only a small portion of the articles can be mined with PDF-parsing software [29, 30]. Additionally, in many large historical corpora it is not computationally feasible to re-OCR holdings each time an OCR engine is upgraded [51], making post correction the only option to reduce errors.
While the work presented here focuses on the literature of the "big-data" science of astronomy and astrophysics [42, 46], our methods of synthetic data generation can be generalized to other scientific fields. To aid in future generalizability, we use the open-source OCR engine Tesseract[43] and provide all code in Python. Because the dataset is large, we provide interactive visualizations to assist any user of our resource in their investigation of the dataset. ## 2 OCR Noise Reduction Techniques & Mining the ar\(\chi\)iv OCR noise is prevalent in the majority of OCR datasets used in the fields of digital humanities and cultural analytics [19]. OCR errors do not follow patterns of typical misspellings, thus their correction generally relies on different tools than spell-checking software [31]. OCR post correction, a method of error mitigation in which OCR'd text is de-noised, is a field covering a wide range of digitization applications [36], and models have historically taken several forms [53]. More recently, deep learning models have been developed to tackle post correction [27]; these typically make use of sequence-to-sequence models [26, 34, 50]. Such deep learning methods require large training datasets, so they are predominantly tested with well-known OCR post correction datasets from the community [13, 16, 37]. As manual annotations can be time consuming at scale [27, 45], synthetic datasets are often used [24, 25, 52]. In particular, mining the ar\(\chi\)iv is a popular method to generate synthetic machine learning training datasets [24, 25, 33]. Given the variety of journals represented in the ar\(\chi\)iv database, its mining represents a vital opportunity to create domain-specific synthetic data [21, 22, 35], which is necessary as models trained on one type of document will often fail on documents dissimilar to the training data [15]. ## 3 Methods In what follows, we make use of two decades of the oldest articles available through the ar\(\chi\)iv Bulk Downloads [1] (1991-2011), for a total of 712,975 articles. ### Compiling the Astrophysics ar\(\chi\)iv Bulk Downloads Once downloaded, all article files are checked for corrupt decompression and for the presence of a main TeX file (one containing \(\backslash\)documentclass or \(\backslash\)documentstyle), for a total of 318,033 articles. To construct an "astronomy article" list, class/style commands are parsed with regex and those which denote typical astrophysical journal names (e.g., "aastex", "apj", "mn") are kept. These names correspond to the three journals which have the most complete scanned historical corpus (The Astrophysical Journal, Astronomy & Astrophysics, and Monthly Notices of the Royal Astronomical Society) [14]. This results in a total of 65,132 articles. This set of \(\sim\)65k files is tested for PDF-compilation errors, for a total of 26,578 successfully compiled astronomy articles. The main sources of error are missing files (e.g., missing figure files) and an inability to distinguish which TeX file in a directory is the main article document.
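The journal-name filter in Section 3.1 can be sketched with a short regex; the exact pattern below is our illustrative reconstruction, not the authors' code.

```python
import re

# Match \documentclass/\documentstyle declarations naming typical
# astrophysics classes ("aastex", "apj", "mn"), per Section 3.1.
ASTRO_CLASS = re.compile(
    r"\\document(?:class|style)\s*(?:\[[^\]]*\])?\s*\{[^}]*(?:aastex|apj|mn)[^}]*\}"
)

def is_astronomy_article(tex_source: str) -> bool:
    """Return True if the main TeX file declares an astrophysics journal class."""
    return ASTRO_CLASS.search(tex_source) is not None
```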
### Segmentation of TeX Documents Many parsers exist for TeX files, with output formats such as plain text (e.g., opendetex[6]), XML (e.g., LaTeXML[17], unarXive[39]) or document trees (e.g., TexSoup[8]). With all methods, this parsing tends to be non-trivial [38]. Because the documents are compiled only after marking modifications are applied to the TeX to track synthetic ground truth (SGT) locations, any parser must account for errors that could occur in the compilation process. Additionally, checks for incorrect splitting of TeX source into trees are required. This excludes "off the shelf" parsers, which only run a subset of these checks4. Thus this work makes use of a custom-built TeX parser. Footnote 4: For example, following the process in Section 3.3, TeXSoup finds errors in only 46.2% of files, while our method finds errors in 70.4%. Figure 1 diagrams the segmentation process, which uses regex to break TeX files into document trees. A raw TeX document ("Raw LaTeX" snippet shown in upper left gray panel) is parsed to find the locations of special characters denoting commands, variables, and environments ("Splits with regex" blue upper middle panel). A hierarchy is then constructed with checks for closing and opening statements of commands (closing {}), inline math formulas (paired $'s) and environments (\(\backslash\)begin, \(\backslash\)end), and stored in a tree ("Tree" purple upper right panel). Commands which reside within plain text sentences such as inline math, citations, and references ("\(\backslash\)ref{}" commands) are stored with special tags. ### Marking the "ground truth" words in LaTeX & OCR'ing Pages Many methods for marking TeX documents to generate synthetic data for page objects (e.g., figures) modify the LaTeX to add bounding boxes in specific colors around objects and use image processing techniques to extract object locations after the PDF is rendered [24, 25]. Rendered PDFs can potentially be mined for SGT text; however, this can lead to errors in the extracted SGT text [25]. To avoid SGT-text parsing errors, this work adopts a different approach by modifying the TeX source documents with markers denoting every word, inline equation, citation, and reference using the tikzmark[9] package, as shown within the green outlined "Marked LaTeX" box of Figure 1. Inline math, citations, and references are included as they are frequently interspersed with the plain text. After storing the locations of each SGT object ("Tree" purple box in Figure 1), all text within the "plain text" sections is split into words using white space and starting (ending) \tikzmark commands are placed at the word/citation/reference/inline math start (end). Once the TeX document is compiled, the marks are stored in the auxiliary (.aux) file produced during compilation, which is then parsed to match each word to its location on the final, rendered PDF page. At this stage, documents which contain the \input command are ignored as these can include text external to the document being parsed. Once the marked files are compiled, each page of each article is OCR'd with Tesseract, following methods used with articles from the historical holdings of the ADS [29, 30]. Examples of these bounding boxes and words are shown in the orange "OCR with boxes" panel of Figure 1.
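A much-simplified sketch of the word-marking step: each plain-text word is wrapped in a numbered start/end \tikzmark pair. The real pipeline operates on the parse tree and also tags inline math, citations, and \ref{} commands; the function below is ours and only illustrates the idea.

```python
def mark_words(plain_text: str, start_id: int = 0) -> str:
    """Wrap each whitespace-separated word in \\tikzmark start/end markers."""
    marked, k = [], start_id
    for word in plain_text.split():
        marked.append(rf"\tikzmark{{s{k}}}{word}\tikzmark{{e{k}}}")
        k += 1
    return " ".join(marked)

print(mark_words("historic scientific articles"))
# \tikzmark{s0}historic\tikzmark{e0} \tikzmark{s1}scientific\tikzmark{e1} ...
```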
### The OCR-SGT Alignment Algorithm & Dataset Characteristics The final step in creating our SGT-OCR dataset is to align the OCR and SGT words. In what follows, "element" is defined as a plain text word, inline math formula, citation, or reference. Our alignment routine is as follows: * Step 1: Locations of the bottom left and right bounds of each marked element are found from the .aux files. These locations are shown as solid magenta lines in the magenta "Marked PDF" panel of Figure 1. * Step 2: As \tikzmark gives only the lower y-position of each element, a bounding box is created by assuming 11pt font for each element (11pt font is an average value; font size is not always specified explicitly in the TeX file), shown by the dashed magenta lines in the "Marked PDF" panel of Figure 1. * Step 3: If the bounding box is found to span more than one line, the SGT element is assumed to be hyphenated and each part is marked as a separate word. Alignment operates page-by-page, therefore hyphenated elements which span multiple pages are ignored. * Step 4: The "raw" SGT element is extracted from the source TeX. * Step 5: All OCR bounding boxes which overlap with a SGT box are associated with that SGT element. If an OCR bounding box is associated with more than one SGT element, the OCR element is associated with the SGT element with which it has the largest intersection-over-union (IOU). * Step 6: All OCR elements associated with a SGT element are ordered by increasing horizontal position and combined into a single OCR element for that SGT element. This is shown by the data structure in the yellow "Output in SGT: OCR-words" box of Figure 1. * Step 7: The SGT word "type" is stored along with the SGT word (plain text, inline math, citation, reference, and whether the word is hyphenated). * Step 8: Elements are ordered by their tikzmarks and aligned with edit distance operations [5]. spaCy[18] is used to tokenize aligned pages as sentences [7]. Figure 1: Diagram of TeX parsed into its attributes ("Raw LaTeX", "Splits with regex"), and the tree structure built from the positions of these splits within the document ("Tree"), as outlined in Section 3.2. TeX is then marked with the \tikzmark package and OCR'd (section from three top lines in "Tree" shown in "Marked LaTeX", Section 3.3). Once the TeX is compiled into a PDF, the auxiliary files are parsed to locate the SGT word locations on the rendered PDF page ("Marked PDF", Section 3.4), OCR words are collected ("OCR with boxes"), and SGT-OCR boxes are aligned ("Output data SGT: OCR-word(s)", Section 3.4). See text for more details. While the majority of articles are aligned without error, Tesseract errors are possible on single pages. From the corpus of 7,850 articles which contain successfully aligned pages, our algorithm produces a total of 71,735 pages of 1,527,118 SGT/OCR sentence pairs, which contain a total of 203,354,393 character pairs. The relationships between SGT and OCR aligned characters closely follow other popular datasets, with the majority of Levenshtein edit distance [23] operations in our dataset (other datasets) being replacements \(\sim\)61.5% (\(\sim\)40-60%), followed by deletions \(\sim\)19.6% (\(\sim\)10-18%) and insertions \(\sim\)18.9% (\(\sim\)5-24%) [31]. Interactive versions of large confusion matrices for alphabetic characters, digits, punctuation marks and frequent words are hosted on this project's webpage5. Footnote 5: [https://readingtimemachine.github.io/projects/1-ocr-groundtruth-may2023](https://readingtimemachine.github.io/projects/1-ocr-groundtruth-may2023)
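Steps 5-6 amount to a greedy IOU-based assignment; a minimal sketch (boxes as (x0, y0, x1, y1) tuples, helper names ours):

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes (Step 5)."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def assign_ocr_to_sgt(ocr_boxes, sgt_boxes):
    """Give each overlapping OCR box to the SGT box of largest IOU (Step 5),
    then order each group left-to-right for concatenation (Step 6)."""
    groups = {}
    for i, ob in enumerate(ocr_boxes):
        best, j_best = max((iou(ob, sb), j) for j, sb in enumerate(sgt_boxes))
        if best > 0:
            groups.setdefault(j_best, []).append(i)
    for j in groups:
        groups[j].sort(key=lambda i: ocr_boxes[i][0])
    return groups
```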
## 4 Post Correction Model Baseline Tests To test the post correction effectiveness of our dataset, we train a baseline transformer model - byt5[50] - with the dataset. This model is effective for datasets such as ours which contain many out-of-vocabulary OCR words [27]. The model's initial training uses 100k aligned sentences for training, and 5k in the validation and test datasets. Here, transfer learning from the google/byt5-small model on HuggingFace [3] is used, and, for all models, training occurs on an NVIDIA V100 for \(\sim\)87,000 iterations over \(\sim\)24 hours, by which point the model converges. The entry above the first thick line of Table 1 ("byt5,words") shows the ability of the model to correct only the parts of each aligned SGT-OCR text which have been tagged as plain text in the test datasets. Here, byt5 improves the character error rate (CER) by 67.35% and the word error rate (WER) by 60.18%. While the focus of this work is on correcting the plain text within our corpus, historical ADS articles also contain inline math and citations. Here, we simplify the problem by testing the accuracy of the model on _detecting_ these elements in the text. To proceed, we modify the input and output text by marking these environments with characters that do not appear in the plain text corpus. For example, we replace each instance of a SGT or post corrected OCR inline math formula with a single character ($) and determine how often these characters align in the SGT and predicted OCR. The "byt5,full,fixed" row in Table 1 lists the results of this "fixed" model, trained on 500k "fixed" sentences (10k in the validation and test sets). Here, the CER and WER improvements have both increased to their highest rates of 85.51% and 84.44%, respectively. To test the model's accuracy on pre-digital OCR, we apply the "byt5,full,fixed" model to 202 hand-annotated sentences from the main text of articles in the historical ADS corpus [10, 29, 30, 32]. When applied to this dataset, the mean improvements, \(\langle\)I\(\rangle\), in CER and WER from correction with the fixed-byt5 model (i.e. "byt5,full,fixed" for the ar\(\chi\)iv data) are 7.71% and 18.82%, respectively, as shown in the "historical,full,fixed" row of Table 1. While the improvements in CER and WER are more modest than the improvement on the ar\(\chi\)iv dataset, they are nonetheless significantly larger than those from a generic post correction model [4] (\(\langle\)I\(\rangle_{\text{CER}}\)=-2499.35%, \(\langle\)I\(\rangle_{\text{WER}}\)=-499.26%) or from when byt5 is trained on the words from the historical dataset alone (\(\langle\)I\(\rangle_{\text{CER}}\)=-443.18%, \(\langle\)I\(\rangle_{\text{WER}}\)=-209.74%), both of which result in a large _negative_ improvement.
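For reference, a minimal inference sketch using the HuggingFace transformers API; "path-to-finetuned-byt5" is a placeholder for a checkpoint fine-tuned on the aligned sentences as above, starting from google/byt5-small.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("path-to-finetuned-byt5")

noisy = "Tbe spectrum of tle galaxy sbows strong emissiom lines."
inputs = tokenizer(noisy, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```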
While the accuracy of the "byt5,full,fixed" model applied to the historical dataset ("historical,full,fixed") is lower overall, because there is no associated TeX with these historical documents, some ambiguity in the "ground truth" is expected (e.g., the phrase "\(\leq\)90%" can be written as $\(\downarrow\)le$90\(\rangle\)%, $\(\downarrow\)le 90$\(\rangle\)% or $\(\downarrow\)le 90 %% and the meaning of the phrase is unchanged). Post correction with consideration for these nuances is relegated to future work. Finally, a larger historical dataset would undeniably enhance our post correction accuracy. A discussion of the methods used to generate a larger manual dataset is relegated to future work. AcknowledgmentsThis work is supported by a NASA Astrophysics Data Analysis Program Grant (20-ADAP20-0225). \begin{table} \begin{tabular}{|c|c c c c|c c c|} \hline & \multicolumn{6}{c|}{CER in \%} & \multicolumn{3}{c|}{WER in \%} \\ Model & \(\langle\)B\(\rangle\) & \(\langle\)A\(\rangle\) & \(\langle\)I\(\rangle\) & \% Improved & \(\langle\)B\(\rangle\) & \(\langle\)A\(\rangle\) & \(\langle\)I\(\rangle\) & \% Improved \\ \hline byt5,words & 5.50 & 2.37 & 67.35 & 93.00 & 15.34 & 6.46 & 60.18 & 90.38 \\ \hline byt5,full,fixed & 12.53 & 2.47 & 85.51 & 98.22 & 19.81 & 3.84 & 84.44 & 99.24 \\ \hline historical,full,fixed & 5.53 & 3.94 & 7.71 & 82.67 & 8.98 & 8.20 & 18.82 & 82.67 \\ \hline \end{tabular} \end{table} Table 1: Mean CER and WER in percent for original datasets, \(\langle\)B\(\rangle\), after post correction with listed models, \(\langle\)A\(\rangle\), and the improvement percent, \(\langle\)I\(\rangle\). Also shown are the percent of test instances with improvement (\(\langle\)A\(\rangle\)\(<\)\(\langle\)B\(\rangle\)) as “% Improved”. All calculations use the ar\(\chi\)iv dataset except for the last row which uses the historical dataset.
2302.14743
The Classification of Short and Long-term Driving Behavior for an Advanced Driver Assistance System by Analyzing Bidirectional Driving Features
Insight into individual driving behavior and habits is essential in traffic operation, safety, and energy management. With Connected Vehicle (CV) technology aiming to address all three of these, the identification of driving patterns is a necessary component in the design of personalized Advanced Driver Assistance Systems (ADAS) for CVs. Our study aims to address this need by taking a unique approach to analyzing bidirectional (i.e. longitudinal and lateral) control features of drivers, using a simple rule-based classification process to group their driving behaviors and habits. We have analyzed high resolution driving data from the real-world CV testbed, Safety Pilot Model Deployment, in Ann Arbor, Michigan, to identify diverse driving behavior on freeway, arterial, and ramp road types. Using three vehicular features known as jerk, leading headway, and yaw rate, driving characteristics are classified into two groups (Safe Driving and Hostile Driving) in short-term classification, and drivers' habits are categorized into three classes (Calm Driver, Rational Driver, and Aggressive Driver). The proposed classification models are tested on unclassified datasets to validate the models' conviction regarding speeding and steep acceleration. Through the proposed method, behavior classification has successfully identified about 90 percent of speeding instances and a similar level of acute acceleration instances. In addition, our study advances an ADAS interface that interacts with drivers in real time in order to transform information about driving behaviors and habits into feedback to individual drivers. We propose an adaptive and flexible classification approach to identify both short-term and long-term driving behavior from naturalistic driving data and, eventually, to communicate adverse driving behavioral patterns.
Mudasser Seraj
2023-02-28T16:43:54Z
http://arxiv.org/abs/2302.14743v1
The Classification of Short and Long-term Driving Behavior for an Advanced Driver Assistance System by Analyzing Bidirectional Driving Features ###### Abstract Insight into individual driving behavior and habits is essential in traffic operation, safety, and energy management. With Connected Vehicle (CV) technology aiming to address all three of these, the identification of driving patterns is a necessary component in the design of personalized Advanced Driver Assistance Systems (ADAS) for CVs. Our study aims to address this need by taking a unique approach to analyzing bidirectional (i.e. longitudinal and lateral) control features of drivers, using a simple rule-based classification process to group their driving behaviors and habits. We have analyzed high resolution driving data from the real-world CV testbed, Safety Pilot Model Deployment, in Ann Arbor, Michigan, to identify diverse driving behavior on freeway, arterial, and ramp road types. Using three vehicular features known as jerk, leading headway, and yaw rate, driving characteristics are classified into two groups (Safe Driving and Hostile Driving) in short-term classification, and drivers' habits are categorized into three classes (Calm Driver, Rational Driver, and Aggressive Driver). The proposed classification models are tested on unclassified datasets to validate the models' conviction regarding speeding and acute acceleration. Through the proposed method, hostile behavior has been successfully identified in \(86.31\pm 9.84\%\) of speeding and \(87.92\pm 10.04\%\) of acute acceleration instances. In addition, our study advances an ADAS interface that interacts with drivers in real time in order to transform information about driving behaviors and habits into feedback to individual drivers. We propose an adaptive and flexible classification approach to identify both short-term and long-term driving behavior from naturalistic driving data and, eventually, to communicate adverse driving behavioral patterns. Advanced Driver Assistance System, Aggressive Driving, Connected Vehicle, Driving Behavior, Safe Driving. ## I Introduction The classification of individual driving behavior has played a vital role in identifying hazardous driving patterns, vehicle fuel consumption optimization, individualized vehicle control system design, and power management system design. Gradual expansion and integration of connected and autonomous vehicle (CAV)-based transportation systems have amplified the need to understand drivers' individual behaviors. Recognition and classification of driving behavior is now seen as intrinsic to the proper design and assessment of an Advanced Driver Assistance System (ADAS) as well as the enhancement of traffic safety through CAVs [1, 2, 3, 4, 5, 6]. However, observations of real-world driving indicate that driving behavior is the result of instantaneous decisions made in response to the exogenous environment, including elements such as road type, surrounding traffic, and the physical and mental state of the driver. Assuming that these instantaneous driving decisions result from a complex fusion of different factors, this study aims to dynamically identify distinct types of driving behavior by analyzing drivers' bidirectional control decisions. Developing a flexible yet accurate classifying process to identify both driving behavior and habits of individual drivers is our primary objective here.
Driving behavior is a complex concept, and the common association of 'Driving behavior' with 'Driving Habit/Style' complicates its definition and identification further. The correlation between the terms, as understood from the related literature, offers clarification on the distinct levels of classification. Driving behavior focuses exclusively on drivers' instantaneous decisions and correlates with the driving conditions experienced by drivers. Therefore, a precise understanding of the environment can provide better insight into driving behavior [7, 8, 9]. Furthermore, we can expect variations in decisions by the same driver at different times for the same driving conditions because of transformed habitual influence. On the other hand, an individual driver's preferential driving behavior accumulates over time and develops into a driving habit or driving style [10, 11, 12, 13]. While driving behavior varies with external factors, often erratically, driving habits change steadily in the longer term. The concepts of driving behavior and driving habit are necessary to distinguish between observed driving behavior on any given trip and developed driving habit from an accumulated driving history. In this paper, we present a simplified approach to dynamically identifying driving behavior by analyzing drivers' jerk, yaw rate, and leading headway profiles on different roadways. Jerk, yaw rate, and leading headway profiles are regarded as indicators of individual drivers' longitudinal and lateral control decisions. Using these indicators, our research aims to decisively classify the behavior of any given driver. In so doing, we aim to contribute to driving behavior research in two ways: 1) we can generate more accurate representations that better identify hazardous driving behavior by analyzing bidirectional driving features for classification, and 2) we can establish and distinguish between the two different behavioral classes for individual trip behavior and accumulated driving history. Additionally, this paper presents our model for a convenient and cohesive ADAS interface that warns drivers in real time of unsafe driving behavior. This interface will also allow both drivers and regulatory organizations to review driving habits based on an accumulation of previous driving behavior. The greater demand for understanding CAVs justifies the need for comprehensive research on driving behavior. Traffic management authorities that apply the recommended approach derived from the results of this study could provide an efficient ADAS application of this promising technology to improve traffic mobility and safety. As such, the key contributions of this research are listed below: 1. The first contribution of this research is our account of both longitudinal and lateral driving features to detect adverse driving patterns from real-world driving data 2. Another contribution of this research is the capture of the behavioral evolution of a driver's instantaneous responses (i.e. short-term behavior) into driving habit (i.e. long-term behavior) 3. Finally, the resulting classification models are intended to propose a simple yet informative ADAS system to communicate detected behavioral information to drivers. In order to best present our findings, the paper is organized as follows: Section II summarizes the leading literature on driver behavior classification, identifies the gaps in current knowledge, and outlines the contributions this research will make in order to address those gaps.
Following on from this, Section III describes the proposed classification method in detail. Then, Section IV evaluates the proposed method's performance in identifying behavioral patterns, followed by a description of the plans to extend the current research. Finally, a synopsis of the research findings concludes this paper. ## II Literature Review Identification of driving behavior and habit has long been of interest among researchers, especially with respect to enhancing road safety. Gibson and Crooks [14] conducted an early study on driving psychology and concluded that driving predominantly depends on drivers' perceptions of their surrounding environment. In particular, safe driving depends on a driver's psychological safe spatial zone. In other studies, different physical measures have been identified to capture drivers' perceptual decisions during driving [15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Some specific driving measures included speeding and/or hard braking [25, 26, 27, 28, 29, 30, 31], jerky driving [1, 32, 33, 34], tailgating [35, 36], lane choice [37, 38], steering angle [35, 38, 29, 39], lateral acceleration [26], and the passing gap during overtaking [40]. These identified measures included both longitudinal and lateral features of driving to help categorize driving behavior. While speed and acceleration are frequently used measures of driving behavior, the jerk profile is found to be more sensitive to safety-critical driving behavior [1]. With regard to longitudinal control, decision time and/or space headway are found to be more specific than speed, acceleration, or jerk profiles in reflecting hostile driving [35]. Since consistent headway during driving is the socially accepted norm of a safe driver, the choice of short and erratic headways could be explained in part by aggressive intentions. On the other hand, lateral control behavior is often associated with steering angle, lateral acceleration, or lane choice. Increased variations in these features can differentiate between safe and unsafe driving. While both longitudinal and lateral driving features play a role in defining driving behavior, the collective impact of both aspects remains uncharted. A major motivation for the study of driving behavior identification is the development of techniques to modify driving behavior [41, 42, 43, 44, 45, 46, 47]. Recently, personalized communication through a connected vehicle (CV)-based ADAS system has been applied to driving behavior modification [48, 49, 50, 51]. Driver identification that incorporates both driving behavior and driving habit is necessary to design an ADAS system that accounts for drivers' requirements, acceptability, and preferences [52]. However, labelling driving behavior from collected driving features varies widely in the literature. Major labelling techniques include rule-based [1, 9, 19, 53, 54], fuzzy logic-based [55, 56, 57, 3, 5], and machine learning-based methods [2, 8, 12, 19, 28, 58, 59]. Due to their computational simplicity, robustness, and clear explanation, rule-based techniques of driving behavior identification are adopted by numerous studies. Larger variable sets can create complex classification processes in rule-based methods, which can then be resolved by fuzzy logic-based methods. Due to the availability of large, multivariate datasets, machine learning methods have recently become prevalent among practitioners.
While machine learning methods can identify driving patterns from big data with larger sets of variables, these labelling techniques often contain complex and delicate structures as well as inexplicable solutions. To avoid these pitfalls, we have used a small number of variables with relatively large datasets, and we have explored both rule-based and machine learning-based labelling techniques in order to choose an ideal technique for labelling unlabeled training data. Although the dataset chosen for this study may initially appear small, we feel that it is large enough to represent the behavioral variations of drivers as well as to demonstrate the proposed method of classification. Furthermore, the chosen dataset included the trips with higher numbers of records than the remaining trips in the database, which potentially accounts for most possible variations. Reviewing the literature related to driving behavior identification, recognition, and classification, we noted that the integration of bidirectional control decisions in classification could improve the odds of precise categorization, since the combination of both features can capture greater diversity potentially overlooked by one-dimensional feature-based classifications. Another key contribution of this study is to demonstrate the gradual development of and changes in driving habits from driving behavior in both short-term and long-term driving behavior classifications. Finally, we have designed a user-friendly, real-time warning system for a driving behavior interface that includes the capability to provide long-term driving habit information and is a future extension of the current study. ## III Methodology In this study, we attempt to categorize both short-term and long-term driving behavior. While the short-term classification represents a driver's individual trip behavior, the long-term classification stands for an individual driver's driving habit, formulated from previous driving experiences. Both classifications of driving behavior are based on a fixed-duration (5 sec) moving window along the classification period. Short-term driving behavior is classified into two distinct classes, Safe Driving and Hostile Driving, as defined below: * Safe Driving: driving instances within a trip when the driver anticipates the surrounding roadway environment and executes composed control decisions. * Hostile Driving: driving instances within a trip when the driver fails to assess the surrounding roadway environment and compensates by performing impulsive and hazardous control decisions. The continuous accumulation of short-term classifications, gathered from previous trips in the driving history, facilitates long-term driver behavior classification. In this classification process, individual drivers are grouped into three categories: Calm Driver, Rational Driver, and Aggressive Driver. * Calm Driver: their share of cumulative hostile driving instances over the analysis period is below the specified lower threshold value * Rational Driver: their share of cumulative hostile driving instances over the analysis period is within the lower and upper threshold values * Aggressive Driver: their share of cumulative hostile driving instances over the analysis period is above the upper threshold value ### _Data Preparation_ The data used in this study was adopted from the Safety Pilot Model Deployment (SPMD) Project [60] database [61].
In this project, real-world driving data were collected from roadways of Ann Arbor, Michigan, through integrated safety devices and a radar-based data acquisition system developed by the Virginia Tech Transportation Institute [62]. These data were obtained via the Research Data Exchange website [63]. Sixty-three sensor-equipped vehicles were used to collect information from 13,792 trips containing 78.43 million data points at a frequency of 10 Hz. Sensors attached to these CVs continuously collected information, including vehicle ID, trip ID, GPS longitude, GPS latitude, GPS UTC time, in-vehicle brake status, in-vehicle headlight status, in-vehicle speed, in-vehicle acceleration, in-vehicle steering position, in-vehicle throttle position, and in-vehicle yaw rate, amongst other types of information. From this large dataset, the top 550 trips (\(\sim\)4%), containing 7.94 million records in total (10.12%), were selected for our research by sorting the trips in descending order of available data records of each trip. Amongst the different operational data collected through the equipped vehicles' data acquisition systems, we chose three features for driving behavior classification: jerk, yaw rate, and leading headway. In our analysis, these three features represented longitudinal and lateral control decisions undertaken by individual drivers. As the first derivative of acceleration/deceleration and second derivative of velocity, jerk is a more effective feature than velocity or acceleration in driving behavior classification [1]. Also, longitudinal and lateral decisions of individual drivers are incorporated within this single feature. The jerk data for this study were derived from the vehicles' data acquisition system (i.e. Integrated Safety Device (ISD)) that recorded in-vehicle acceleration data at 10 Hz frequency. This main data file contained several fields detailing elements such as vehicle position and speed, fidelity measures of GPS-based data, and vehicle operation data (steering, throttle position etc.). The authors extracted the acceleration data from this large dataset to calculate the jerk data (a minimal sketch of this step follows below). Yaw rate measures the vehicle's lateral movement rate and characterizes a driver's lateral behavior. In contrast to the earlier feature calculation, we obtained measurements of this feature directly from the vehicles' ISD system at the same frequency. Measurements of the leading headway stand for the driver's longitudinal control decisions, since the gap between vehicles often dictates car-following behavior. We collected leading headway data from the radar units that were installed as a part of the vehicles' ISD units. These radars recorded the distance between the radar and the forward vehicle in the cases where there was another vehicle within a 200 m distance in the same lane. The combination of these three mutually inclusive features - jerk, yaw rate, and leading headway - is capable of capturing instantaneous variations of drivers' bidirectional control decisions and, hence, assists in classifying drivers' behavior in real-time. Fig. 1: (a) Road type-based segmentation of a sample trip; contrast of studied driving features on (b) arterial and (c) freeway.
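Since jerk is not logged directly, it has to be differenced from the 10 Hz acceleration channel; a minimal sketch of this preprocessing step (the function name is ours):

```python
import numpy as np

def jerk_from_acceleration(accel, freq_hz=10.0):
    """Jerk (m/s^3) as the first difference of the 10 Hz acceleration signal."""
    return np.diff(accel) * freq_hz

accel = np.array([0.0, 0.2, 0.5, 0.4, 0.1])   # m/s^2, illustrative samples
print(jerk_from_acceleration(accel))          # [ 2.  3. -1. -3.] m/s^3
```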
### _Driver Behavior Classification Algorithm_ The three features selected for classification were extracted from the chosen 550 trips. In addition to those three features, vehicle ID, trip ID, latitude, longitude, and time stamps were also included in the dataset, which we used to geographically locate the trip route and split the route based on road type [Figure 1(a)]. Figure 1(a) presents the segmentation of a sample trip from the dataset. Data points were placed on the map from the longitude and latitude information of the trip. The same information was used to split the trip into different segments based on the road type (i.e. arterial, ramp, freeway) traversed during the trip. Figure 1(a) shows three shades of blue that represent the three different classes of roads considered in this study, as well as details of the different segments. Two pie charts within the figure illustrate the proportion of trip duration and trip length for each class of road, out of the whole trip. The assumption that one can observe substantial diversity in the driving environment between freeway and arterial roads motivated our road type-based splitting of trips. Since driving behavior is directly influenced by the surrounding environment, classifying driving behavior in relation to different road types using the same standards would lead to erroneous categorization. Additionally, visual observations of the classifying features showed significant disparity between the road classes [Figure 1(b, c)]. Figure 1(b, c) highlights the distinctions between the different road type features for the trip that was plotted in Figure 1(a). Plotted feature profiles on arterial roads [Figure 1(b)] showed greater fluctuations of feature values in comparison to feature profiles on the freeway [Figure 1(c)]. All three features showed relatively higher ranges of variability for arterials than for the freeway. To emphasize driving behavior contrasts, each trip was divided into three road types, based on GPS location (i.e. longitude, latitude): Freeway, Arterial, and Ramp. Features of the same road types were grouped together to classify short-term and long-term driving behavior. Of the training dataset (550 trips), 66.20% (5.26 million data points), 31.76% (2.52 million data points), and 2.04% (0.16 million data points) are labeled as freeway, arterial, and ramp, respectively. Once the features were sorted based on road types using the geolocation of each time stamp, their distribution was plotted (**Figure 2**). We compared the datasets of each road type by an unpaired, two-sample t-test to justify our assumption of substantial feature disparity between road types. Comparison results of each pair (i.e. freeway vs arterial, arterial vs ramp, freeway vs ramp) presented significant differences (i.e. p-value \(<\) 0.0001) in the mean of each feature, with a 99% confidence level, while also assuming unequal variance of the tested samples. Upon confirmation of the attributional difference among road types, we calculated the absolute mean of each feature for the three road types, which was stored in a database. Next, we calculated the standard deviations of each feature for all trips with a moving window of \(t_{c}\) = 5 sec (50 data points). The coefficient of variation (CoV) was then calculated by dividing the measured standard deviations by the absolute mean of the current road type within the time window (Equation 1). Since the CoV is the measure of relative variability, this statistical attribute of each driving feature was exerted when identifying hostile driving behavior for classification.
Finally, we scaled the CoV datasets of each feature to the [0, 1] range for each of the road types (Equation 2). Since the absolute values of the studied features were significantly different, the authors refrained from using the absolute values of these features and instead used scaled (i.e. standardized) coefficients of variation to perform the classification. \[\text{CoV}_{f}(t)=\frac{\text{SD}_{f}(t-t_{c},\,t)}{\bar{f}_{R}} \tag{1}\] \[\text{CoV}^{\prime}_{f}(t)=\frac{\text{CoV}_{f}(t)-\text{CoV}^{min}_{f,R}}{\text{CoV}^{max}_{f,R}-\text{CoV}^{min}_{f,R}} \tag{2}\] Here, \(\text{CoV}_{f}(t)\) is the coefficient of variation of feature \(f\) (i.e. jerk, leading headway, yaw rate) at time \(t\); \(\text{SD}_{f}(t-t_{c},\,t)\) is the standard deviation of feature \(f\) within time \(t-t_{c}\) and \(t\) (\(t_{c}=5\,sec\)); \(\bar{f}_{R}\) is the mean of the absolute values of feature \(f\) on road type \(R\); \(\text{CoV}^{\prime}_{f}(t)\) is the scaled coefficient of variation of feature \(f\) at time \(t\); and \(\text{CoV}^{min}_{f,R}\) and \(\text{CoV}^{max}_{f,R}\) are the minimum and maximum coefficients of variation for feature \(f\) on the current road type \(R\). Once the scaled, unlabeled CoVs of the features were available, we were able to explore labeling methods for short-term driving behavior, using K-nearest neighbor (KNN), hierarchical clustering, and neural-network self-organizing maps as viable partitioned clustering options for classifying behavioral features by unsupervised machine learning. Among these methods, several researchers have used KNN to classify driving behavior [2, 28]. The efficiency of KNN in dealing with large datasets makes this method a perfect candidate for labeling unlabeled feature data. However, the output of KNN clustering failed to provide a reasonable classification **[Figure 3(a)]**. The clusters that resulted from KNN were unable to represent explicit differences between two clusters. Increasing the cluster size led to increased complexity in classification without a proper explanation of individual cluster characteristics. Additionally, the clusters, specifically for highways and arterials, were incapable of addressing the impact of all three features in the classification process. The irrational division of traffic features resulting from KNN led us to examine the much simpler rule-based classification approach. Using the rule-based classification process, we chose a threshold value of scaled CoV to label driving decisions. If the scaled CoV value of any of our three features was higher than the threshold value, the driving behavior for that time window was labeled as 'Hostile Driving'. In the process of labelling traffic behavior, we explored different threshold values of CoV to identify the sensitivity of the threshold value. The results indicated that reducing the threshold value of scaled CoV would lead to a higher share of 'Hostile Driving'. Therefore, to remain on the conservative spectrum of behavior identification, we chose a small threshold value of scaled CoV (0.3) **[Figure 3(b)]**. Fig. 2: Distribution of studied features on different road types. Fig. 3: Labelling scaled CoV values of studied features using (a) unsupervised learning and (b) rule-based classification methods.
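A compact sketch of Equations (1)-(2) and the labeling rule; for brevity the min-max scaling here is over the series passed in, whereas the paper scales per road-type dataset, and the helper names are ours.

```python
import numpy as np

WINDOW = 50      # 5 s at 10 Hz
THRESHOLD = 0.3  # on scaled CoV, as chosen above

def scaled_cov(feature, abs_mean_R, window=WINDOW):
    """Eq. (1)-(2): moving-window CoV of one feature, min-max scaled to [0, 1]."""
    sd = np.array([feature[max(0, t - window):t + 1].std()
                   for t in range(len(feature))])
    cov = sd / abs_mean_R
    return (cov - cov.min()) / (cov.max() - cov.min())

def short_term_labels(jerk, headway, yaw_rate, abs_means_R):
    """Rule-based labels: hostile if any feature's scaled CoV exceeds the threshold."""
    covs = [scaled_cov(f, m) for f, m in zip((jerk, headway, yaw_rate), abs_means_R)]
    hostile = np.any(np.vstack(covs) > THRESHOLD, axis=0)
    return np.where(hostile, "Hostile Driving", "Safe Driving")
```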
Following the labeling process, we executed several supervised classification learner methods (i.e. logistic regression, discriminant analysis, support vector machine, decision tree) over the labeled training data to identify the best classifying model. Among the explored models with 10-fold cross validation, the decision tree model provided the highest accuracy (\(\sim\)100%) in correctly classifying the training data for all road types and, hence, was used as the short-term classifier. Table I summarizes the steps involved in labelling the training datasets to enable subsequent classification. While the selected threshold for classifying behavior was the same for all types of roads (i.e. freeway, arterial, ramp), the threshold value was applied to scaled CoV values of the studied features, derived by balancing different ranges of feature values into a common unit. As illustrated earlier in Figure 2, the ranges of these features were significantly different with respect to different road classes. Hence, the same threshold value on the scaled parameters resulted in different CoV values for different road classes. In the end, the behavior-classifying limit remains the same for a specific feature on a specific road class and demonstrates a dynamic quality with changing road types as well as features. We then used the measured road type-specific shares of 'Hostile Driving' among total driving instances to categorize long-term driving behavior. For instance, 9.40% of samples from the total training data demonstrated 'Hostile Driving' behavior while driving through arterial roads. To recognize long-term driving behavior on arterials for a specific driver, we considered the accumulated classified (i.e. safe, hostile) driving history and compared the share of cumulative hostile driving decisions along arterial roads with the training 'Hostile Driving' shares. For this analysis, we regarded 0.5 as the lower threshold and 1.0 as the upper threshold to classify long-term driving behavior into Calm, Rational, and Aggressive driving behavior. So, if the cumulative 'Hostile Driving' share along arterials of a driver was less than 4.7% (\(=0.5\times 9.4\%\)), then that driver was classified as a 'Calm Driver' on arterial roads. On the other hand, if the same share rose above 9.4%, then that driver was classified as an 'Aggressive Driver' on arterials. We followed a similar process to classify the long-term behavior of drivers on other road types and the total travel history. Table II describes the process of long-term behavior classification based on road types and overall driving history.
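This thresholding rule is simple enough to state in a few lines of Python; the training hostile-driving shares below are the values reported in this section, and the hypothetical scenario that follows can be traced through it.

```python
# Hostile-driving shares (%) of the training data, per road type and overall.
TRAIN_SHARE = {"freeway": 3.60, "arterial": 9.40, "ramp": 16.96, "total": 5.79}

def long_term_class(hostile_share_pct, road_type):
    """Classify accumulated hostile share against 0.5x / 1.0x training shares."""
    ref = TRAIN_SHARE[road_type]
    if hostile_share_pct < 0.5 * ref:
        return "Calm Driver"
    if hostile_share_pct <= 1.0 * ref:
        return "Rational Driver"
    return "Aggressive Driver"

print(long_term_class(5.17, "freeway"))   # Aggressive Driver
print(long_term_class(4.27, "arterial"))  # Calm Driver
```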
To provide further clarification of the long-term behavior classification process, a hypothetical scenario is presented here as an illustration. Suppose a specific driver had made 30 trips, and the three feature values (i.e. jerk, yaw rate, leading headway) were collected, scaled, and stored according to the short-term behavior classification process. Then, the average hostile driving proportion of these 30 trips was measured for long-term behavior classification, using the three specified road types (i.e. freeway, arterial, ramp) as well as the overall trips. By analyzing this road user's driving history of 30 trips, let us imagine that they showed average hostile driving behavior on freeways, arterials and ramps for 5.17%, 4.27% and 11.84% of the total driving time, respectively. We would find the average hostile driving share for the total trips to be 2.95% when the total number of trips was evaluated for driving behavior. Once these values were obtained from the driver's history, they would be compared with the stored road-specific and overall-average hostile driving shares of the training dataset. The average hostile driving shares of the training dataset would be 3.60%, 9.40%, 16.96%, and 5.79% for freeway, arterial, ramp, and total trip, respectively. Once calculated, these values would form the basis of road type-specific classification by comparing the driver's hostile share with the training dataset's hostile share. In our example, this driver's hostile share on freeways (5.17%) was found to be more than 1.0 \(\times\) the hostile share of the training data on freeways (3.60%); therefore, the driver's long-term behavior, based on their driving history of 30 trips, classified them as an 'Aggressive Driver' on freeways. Similarly, road type-specific, long-term classification would label this driver's behavior on arterials, ramps, and total trips as a 'Calm driver' [\(4.27\%<0.5\times 9.40\%\)], a 'Rational driver' [\(0.5\times 16.96\%<11.84\%<1.0\times 16.96\%\)], and a 'Rational driver' [\(0.5\times 5.79\%<2.95\%<1.0\times 5.79\%\)], respectively. **Figure 4** presents the implemented classification algorithm in a flow chart in order to detail the progression of the behavior classification process. ## IV Performance Evaluation The classification models generated from the training data were executed on 'test trips' to classify driving behavior. To qualify as a 'test trip', we selected the trips with the highest numbers of datapoints from the remaining trips in the database (excluding trips used for training), amounting to 110 trips (20% of the training trips); this suggested that they were long and thus expected to contain the most diverse behavioral variations. We maintained the same time window of 5 sec (50 data points) to reshape the classification feature data. The proposed classifying model categorized the selected test trips for both the short and long term. The obtained hostility instances for the total trip of test trips varied between 1.45% and 18.53% with a mean of 5.67%. The short-term classification of the total trip for a sample test trip is shown in **Figure 5**, which displays driving road types, the classification features' CoV profiles, and hostile driving instances during a 28 min 23.7 sec long trip (341 time stamps). All 110 trips were categorized, with the short-term driving behavior classifier following the same process for specific road types and total trips. Although the classifying model identified hostile driving behavior through longitudinal and lateral feature recognition, we had yet to test the precision of the identified behavior. To do so, we took the velocity and acceleration profiles of each trip as explicit identifiers of hostile behavior. Then, mean velocities within a predetermined time window were measured and compared with the corresponding road type's speed limit. Subsequently, the time stamps with mean velocities more than 10 miles per hour above the speed limits were labeled as 'Hostile Driving' instances. As a result, this classification method only used the speeding behavior of the driver. Several studies have selected speeding as the controlling feature of unusual driving instance identification [64, 65, 66]. A second process, classification by acute acceleration change, measured the acceleration range of each time window. Time stamps with an acceleration range higher than 2.5 m/s\({}^{2}\) were labeled as 'Hostile Driving' behavior.
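The two reference labelings reduce to two window-level rules; a minimal sketch, assuming 10 Hz signals with speed in mph and acceleration in m/s^2, and non-overlapping windows for brevity:

```python
import numpy as np

def explicit_hostile_flags(speed_mph, accel, speed_limit_mph, window=50):
    """Validation labels: speeding (window mean > limit + 10 mph) and
    acute acceleration change (window range > 2.5 m/s^2)."""
    n_windows = len(speed_mph) // window
    speeding, acute = [], []
    for w in range(n_windows):
        seg = slice(w * window, (w + 1) * window)
        speeding.append(speed_mph[seg].mean() > speed_limit_mph + 10.0)
        acute.append(np.ptp(accel[seg]) > 2.5)
    return np.array(speeding), np.array(acute)
```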
A similar approach to identifying unique driving events through acceleration variations has been used previously in numerous studies [67, 68, 65, 28, 69]. Both explicit classification measures (i.e. classification by speeding, classification by acute acceleration change) were compared with the model classification output (i.e. short-term driving behavior classification) to evaluate the behavioral disparity identification capability of the proposed method. **Figure 6** presents a sample trip behavior classification using the aforementioned methods. For the sample trip, a comparison of the short-term behavior classification from the generated classifying model with the speeding-based classification provided 87% accuracy. A similar comparison, with the acute acceleration-based classification, produced behavioral identification with 84% accuracy. Another analysis of speeding identification revealed that the proposed short-term classifying model accurately identified 19 out of 23 speeding instances as hostile driving behavior for the sample trip. Similarly, the short-term classification identified 17 out of 25 instances when compared to the acute acceleration change-based classification. The identification accuracy for all 110 test trips in comparison to the speeding-based classification was, on average, 86.31%, with a standard deviation of 9.84%. Likewise, the comparison with the acute acceleration change-based classification presented 87.92% average accuracy with 10.04% standard deviation. Fig. 5: Short-term driving behavior classification of a test trip. The short-term classification based on multiple driving features was further compared with the classification process proposed by Murphey et al. [1] to demonstrate the aptitude of the proposed methods in identifying behavioral extremity. Murphey et al. [1] proposed a single feature-based (i.e. jerk) classification of driving behavior into three groups (i.e. calm, normal, aggressive). The division of the groups was founded on threshold values of the jerk profile's CoV (e.g. if the CoV of a time window \(<0.5\) then driving behavior = calm, if \(0.5<\) CoV of a time window \(<1.0\) then driving behavior = normal, if \(1.0<\) CoV of a time window then driving behavior = aggressive). To measure the CoV, the average jerk value was measured on different road types and at different levels of service from 11 standard drive cycles. **Figure 7(a)** shows the classification of the sample trip by the method in [1], and **Figure 7(b)** shows the classification of the same trip by the method proposed in this paper. The average jerk values for level of service C on freeways and C-D on arterials and ramps were chosen for the jerk-based classification, as these levels of service are usually expected on these road classes. Classification of the sample trip by the proposed method identified 13.09% of driving as hostile driving instances during the trip by analyzing three features, whereas classification by the method of Murphey et al. [1] identified 6.92% of driving as aggressive driving instances. Therefore, the additional features were capable of increasing the identification of hostile driving instances by just under 47%. Notably, the average jerk value used for calculating the CoV was different for the two methods, resulting in different jerk profile scales. Additionally, in contrast to the method in [1], the proposed method had a different threshold for different road types, generated by analyzing the training dataset.
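For reference, the three-class jerk-only baseline rule of [1], as described above, in a few lines (the function name is ours):

```python
def jerk_only_class(cov_jerk):
    """Baseline rule of Murphey et al. [1]: CoV of the jerk profile within a
    time window -> calm (< 0.5), normal (0.5-1.0), or aggressive (> 1.0)."""
    if cov_jerk < 0.5:
        return "calm"
    if cov_jerk <= 1.0:
        return "normal"
    return "aggressive"
```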
To illustrate the long-term behavior classification functionality of the proposed classifying process, the previously classified 110 test trips were presumed to be driven by the same driver at separate times. We found this assumption to be necessary since the demographic information in the dataset about the drivers making the trips was inaccessible. As such, it was impossible to link the dataset with a specific driver. Under this scenario, hostility shares on both specific road types and total trips were measured from the short-term classification. The hostility proportions of each trip were also compared with the training data's hostility proportions and classified into Calm, Rational, and Aggressive driving behavior by scaling the training hostility shares with the lower threshold (0.5) and upper threshold (1.0). Long-term categorization was performed by measuring the moving averages of hostility shares (including all previous trips) and by matching that measurement with the hostility limits (\(<\)0.5: Calm, 0.5-1.0: Rational, \(>\)1.0: Aggressive) of the three groups (i.e. calm, rational, aggressive). **Figure 8** illustrates both types of test trip classification for specific road types as well as for the total trip.

Fig. 6: Evaluation of (a) the proposed classification method in comparison to (b) speeding-based classification and (c) acute acceleration change-based classification.

Each blue dot on the plots of **Figure 8** represents the hostility proportion of a trip that could be utilized to perform short-term classification. The red curve on the plots portrays the progression of driver behavior by taking all previous trips into account (moving averages of the blue dots). Different color patches (i.e. yellow, green, red) on the plots illustrate the boundary regions of specific behavioral classes (i.e. calm driver, rational driver, aggressive driver). While individual trip hostility fluctuated frequently, the long-term behavioral progression was relatively stable and changed gradually over time. The proposed method of long-term classification was capable of identifying the changing patterns of driving habits for total trips as well as road-type specific habits. In **Figure 8**, the total trip hostilities of the accumulated trips were highly weighted towards the freeway hostility, which suggests that the largest portion of the trips was traversed through freeways. Moreover, the comparison between freeway and arterial hostility shares demonstrated higher long-term behavioral variability on arterial roads (standard deviation = 1.48%) than on freeways (standard deviation = 0.84%). The paired sample t-test on long-term freeway and arterial hostility showed significantly lower hostility on freeways at a 95% confidence level (t-score = 29.557, p-value \(<\) 0.001). The obtained comparison result did not necessarily mean that the driver was more aggressive on arterials than on freeways, because the classifying threshold for freeways was different. As a result, the long-term behavior on arterials transitioned from 'aggressive' to 'rational', even with higher hostility than on freeways. Since ramp road-types had a relatively low share (2.5% on average) of total test trips, the influence of long-term ramp hostility on total trip hostility was discarded for comparison. We further analyzed the identified hostility of the 110 test trips to reveal the short-term behavioral distribution on different road types. As shown in **Figure 9**, the hostility behavior was different from one road type to another.
For instance, freeway hostility was skewed towards the origin, with the highest proportion lying between 2.5-5.0%. This skewness towards lower hostility could be explained by the fact that drivers, in general, tend to operate with fewer variations in control while driving on freeways (Figure 1(c)). In contrast, the probability distribution of arterial hostility was relatively balanced over a larger range of hostility (0-27.5%). Drivers had to experience more frequent disruptions, due to geometry, traffic control measures, etc., while driving through arterials, which could result in such diverse hostility patterns on arterials.

Fig. 7: Behavioral classification of a sample trip by (a) analyzing the jerk feature and (b) analyzing multiple driving features.

Similarly, ramp hostility showed a central tendency towards the median. Since ramps perform as connecting links between freeways and arterials, we expected the hostility pattern in this transitional phase to be influenced by both road types' distributions. We performed a paired, two-sample t-test between the measured hostility ranges to identify significant dissimilarity in behavior on distinct road types. The results of the t-test showed that the hostility behavior on a specific road type was significantly different from that on other road types at a 99% confidence level.

## V Future Extensions

Since the major motivation of behavior classification lies in persuading drivers to maintain safe driving patterns, providing real-time feedback on driving style is imperative in order to harness its benefits. With the assistance of CV technology and smartphones, driving behavior information can be conveyed to drivers through a user-friendly ADAS interface, designed to easily communicate both short-term and long-term behavior classification information (**Figure 10**). Verbal and visual warnings on the ADAS interface can announce detected hostile behavior through short-term classification. **Figure 10(a)** provides an interface design for this purpose. The yellow circle in the middle starts to blink once hostile driving behavior is detected, thus presenting the driver with a visual warning. The system can deliver an auditory warning (indicated by the alarm sign in the picture) in tandem. In addition, the same interface can provide other information as part of the system design. This real-time warning system is expected to induce cautiousness in drivers and, hence, promote safe driving behavior. At the same time, options to personalize the hostility threshold for different road types and the overall trip can be provided in the developed application, giving users the freedom to define their own hostility perception. In addition to real-time response, the driving habits of individual drivers can be tracked through long-term classification, enabled by storing classified trip characteristics in a database. Previously classified trip history could be analyzed through the long-term classifier and displayed on a convenient interface to identify both road-type specific and overall driving habits (**Figure 10(b)**). The leftmost dial in **Figure 10(b)** shows the overall long-term behavior classification from the trips within the time range, where the yellow region indicates 'Calm', green 'Rational', and red 'Aggressive'. The indicator arm of the dial gauge lies within the yellow and green regions, which suggests the driver's behavior falls between Calm and Rational driving behavior.
The other three gauges in **Figure 10(b)** show road-type specific long-term driving behavior and the trip share on each road type (value at the bottom right corner of each gauge). Detected long-term driving behavior can assist road traffic operation and safety authorities, insurance companies, and other associated organizations to offer incentives for 'Rational Drivers' as well as to impose penalties on 'Aggressive Drivers' as a means to promote safe driving on roadways. As part of this continued research, we plan to develop a smartphone application to detect and communicate driving behavior information to drivers in real time. Furthermore, the application will store both short-term and long-term driving history as well as analyze the effects of ADAS on drivers' behavior and habits. The goal of the analysis will be to determine the capability of ADAS in bringing paradigmatic shifts in driving behavior.

## VI Concluding Remarks

This paper presents a simple, efficient, and adaptable driving behavior classification technique developed by analyzing both longitudinal and lateral driving features collected through CV technology from real-world trips.

Fig. 9: Hostility distribution for test trips on different road types.

The thresholds of the proposed classification method can be modified to accommodate transportation, motoring, and roadway authorities' purposes and requirements. By considering bidirectional features of driving, the proposed method has greater aptitude in sensing unsafe driving behavior compared to singular feature-based classification methods. This paper has taken a unique approach by distinguishing between driving behavior and driving habit as well as classifying drivers' behavior from both behavioral and habitual contexts. We worked with the concept of instantaneous behavior classification and used that information to categorize drivers' driving habits. Authorities considering the uses of behavior classification are not only interested in current responses but also in driving style, with the aim of recognizing safety hazards caused by those drivers and the extent of safety risk taken by allowing them to drive. Our study, given its scope, would help facilitate their decision-making concerning rewards and penalties for driving behavior. While we have limited our research to three distinct features in the form of continuous variables, in order to illustrate longitudinal and lateral decisions, other features could also serve as characteristic identifiers. Furthermore, we analyzed partial datasets of a larger SPMD database to demonstrate the classification technique. Since the primary aim of the study is to propose and present a simplified classification technique, we have set aside the potential bias of the analyzed datasets. In brief, this study is an attempt to gain insight into driving behavior and habits through a simple categorization process that considers bidirectional control decisions. Furthermore, our study offers the possibility for extension through the development of ADAS and through the identification of its impact on modifying driving behavior and habits.

## Acknowledgment

This research work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, the City of Edmonton, and Transport Canada. The contents of this paper reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein.
The data that supports the findings of this study is available from the corresponding author upon reasonable request. The contents do not necessarily reflect the official views or policies of the City of Edmonton or Transport Canada. This paper does not constitute a standard, specification, or regulation. The authors declare that there is no conflict of interest regarding the publication of this paper.

Fig. 10: Abstract ADAS interface for communicating (a) real-time warnings and (b) long-term behavioral information to drivers.
2309.16226
Anomalous function of a Lorentz-violating QED effective action and the relation between compact bulk scalar propagator and path integral duality
In this paper, we consider a compact five dimensional spacetime with the structure $\mathcal{M}^{1,3}\times S^{1}$. Generally speaking, motion on such a structure will break Lorentz invariance, allowing for causal bulk signals to propagate superluminally. Based on recent articles, we calculate the anomalous function of a gauge invariant but Lorentz-violating term in the $4D$ QED effective action by using the path integral. Finally, we find that the compact bulk scalar propagator and path integral duality are consistent; this result brings a new perspective: the breaking of Lorentz invariance caused by dimensional compactification can be seen as path integral duality.
Huangcheng Yin
2023-09-28T07:59:33Z
http://arxiv.org/abs/2309.16226v1
Anomalous function of a Lorentz-violating QED effective action and the relation between compact bulk scalar propagator and path integral duality ###### Abstract In this paper, we consider a compact five dimensional spacetime with the structure \(\mathcal{M}^{1,3}\times S^{1}\). Generally speaking, motion on such a structure will break Lorentz invariance, allowing for causal bulk signals to propagate superluminally. Based on recent articles, we calculate the anomalous function of a gauge invariant but Lorentz-violating term in the \(4D\) QED effective action by using the path integral. Finally, we find that the compact bulk scalar propagator and path integral duality are consistent; this result brings a new perspective: the breaking of Lorentz invariance caused by dimensional compactification can be seen as path integral duality.

## I Introduction

The compactification of a single dimension with a radius of \(R\) (i.e., \(\mathcal{M}^{1,3}\times S^{1}\)) is the simplest form of braneworld scenario to consider; the Minkowski metric can be induced on the braneworld's volume. In past papers, superluminal propagation has been considered on a moving braneworld[1][2][3], together with its causality[4]. Faster-than-light propagation causes a violation of Lorentz invariance: although there is no local indication of the violation, worldvolume Lorentz invariance is broken globally by the compactification. However, causality remains intact, inherited from the causality of the underlying \(5D\) spacetime. In [5], the authors consider the effective action on the brane induced by loops of bulk fields, including a variety of self-energy and vertex corrections due to bulk scalars and gravitons, and show that bulk loops with non-zero winding generate UV-finite Lorentz-violating terms in the 4-D effective action. These terms are part of the Standard Model extension, a general framework for Lorentz-violating effective field theories developed in[6][7]. There are stringent experimental bounds on the Lorentz-violating coefficients, which have been tabulated in [8]. From the fundamental constants \(G\), \(\hbar\) and \(c\) one can build a length, \(L_{0}=\sqrt{G\hbar/c^{3}}\), also named the zero-point length, which plays a critical role in quantum gravity[9]. Experiments so far indicate that it is not possible to devise experimental procedures that measure lengths with an accuracy better than about \(L_{0}\). This result suggests that one could think of the Planck length as some kind of "zero-point length" of spacetime. In 1997[10], T. Padmanabhan used an idea coming from T-duality (i.e., \(R\leftrightarrow\alpha^{\prime}/R,\,n\leftrightarrow w\)) in string theory. The basic idea is the assumption that the path integral amplitude is invariant under the 'duality' transformation \(ds\to L_{0}^{2}/ds\); this modifies Feynman's propagator, and he showed that this propagator is the same as the one obtained by assuming the modification of the spacetime interval \((x-y)^{2}\) to \((x-y)^{2}+L_{0}^{2}\). One advantage of this theory is that the propagator becomes UV finite[11]. For example, in the recent paper[12], the authors calculate the electric field strength of a point charged particle in path integral duality and find no singularity, whereas a point charge in standard Maxwell's equations gives a singularity. A similar approach has been used to obtain generalized uncertainty relations[13][14][15]. Usually, the approach to QFT is based on the canonical formalism, in which a field is an operator-valued distribution.
In this approach, the derivation of the generating functional from which we obtained the perturbative rules was arguably rather cumbersome, even in the simple setting of a scalar field theory. Functional quantization, also known as the path integral formalism, is an alternative quantization procedure that considerably simplifies the algebraic manipulations, and provides some rather intuitive insights into what makes a theory quantum. A quantum anomaly is the phenomenon that some symmetries of the Lagrangian are broken by quantum corrections. In the path integral formulation, if a transformation changes the path integral measure, it produces an anomaly. Therefore, it is worthwhile to study quantum anomalies within the path integral. In this paper, I first introduce, in Section II, the basic theory established in previous papers: dimensional compactification applied to propagators, and a gauge-invariant but Lorentz-violating QED effective action which can be obtained at first order in the external momentum \(p^{\mu}\). In Section III, I use the path integral to calculate the anomalous term of the gauge-invariant but Lorentz-violating QED effective action. In Section IV, I give the relation between the compact bulk scalar propagator and path integral duality by using some mathematical tricks. I think this relation is interesting because it gives a unification: the compact bulk scalar propagator is consistent with path integral duality.

## II Basic theory

### Compactification of extra dimensions

Consider a \(5D\) spacetime \(\mathcal{M}^{1,3}\times S^{1}\). To describe this, we begin from \(5D\) Minkowski space \(\mathcal{M}^{1,4}\) with coordinates \[X^{M}=\left(X^{\mu},X^{4}\right),\quad M=0,...,4,\quad\mu=0,...,3, \tag{1}\] with the metric \(\eta_{MN}=\)diag\((+,-,-,-,-)\), and obtain an \(S^{1}\) by periodically identifying the \(X^{4}\) coordinate, \(X^{4}\sim X^{4}+2\pi R\). It's convenient to describe this identification as \[X^{M}\sim X^{M}+A^{M},\quad A^{M}=(0,0,0,0,2\pi R). \tag{2}\] These coordinates define the frame for the compactification, with an exact \(SO(1,3)\) symmetry that acts on the coordinates \(X^{\mu}\). Now, to describe a braneworld that moves in the \(X^{4}\) direction or is rotated relative to it, we transform to a new frame with lower-case coordinates \(x^{M}\) through \[x^{M}=L_{N}^{M}X^{N}, \tag{3}\] where \(L_{N}^{M}\in SO(1,4)\). The identification, expressed in the boosted/rotated \(x^{M}\) coordinates, can be written as \[x^{M}\sim x^{M}+a^{M},\quad a^{M}=L_{N}^{M}A^{N}. \tag{4}\] Decompose \(a^{M}\) into components tangent and normal to the brane, \[a^{M}=(a^{\mu},2\pi r), \tag{5}\] where \(2\pi r\) is the fifth component, a scalar on the brane; \(r\) is related to \(R\) but also depends on the motion. Explicit forms of \(r\) in the timelike/spacelike/lightlike situations have been given in [5]. It is convenient to work with the newly defined quantity \(b^{\mu}=\frac{a^{\mu}}{2\pi r}\).

### Compact bulk scalar propagator

In a general \((d+1)\)-dimensional space, coordinates can be split as \(x^{n}=(x,\xi)\), where \(\xi\) is the compact dimension and \(n=0,1,...,d\), and the momenta as \(k^{n}=(k,q)\), where \(q\) is the momentum in the \(\xi\) direction. Consider a bulk scalar field of mass \(\mu\) and denote the retarded Green's function \(G_{R}^{(d+1)}(x,\xi)\), where \(d+1\) is the number of spacetime dimensions and \(R\) is the radius of the circle.
For the standard scalar field, the Green's function is \[(\Box-\mu^{2})G_{\infty}^{(d+1)}(x,\xi)=\delta^{d}(x)\delta(\xi), \tag{6}\] where \(\mu\) is the mass of the bulk scalar. This Green's function can be represented in the form \[G_{\infty}^{(d+1)}(x,\xi)=\int\frac{d^{d}k}{(2\pi)^{d}}\frac{dq}{2\pi}\frac{ie^{-ik\cdot x+iq\xi}}{k^{2}-q^{2}-\mu^{2}+i\epsilon}. \tag{7}\] A sum over winding numbers compactifies the \(\xi\) direction, \[G_{R}^{(d+1)}(x,\xi)=\sum_{w\in\mathbb{Z}}G_{\infty}^{(d+1)}(x,\xi-2\pi Rw). \tag{8}\] If the winding sum can be approximated as continuous (i.e., \(\sum_{w\in\mathbb{Z}}\rightarrow\int dw\)), then it leads to \[\int dw\,e^{-iq2\pi Rw}=\frac{1}{R}\delta(q), \tag{9}\] which means \[G_{R}^{(d+1)}\rightarrow\frac{1}{2\pi R}G_{\infty}^{(d)}=\frac{1}{2\pi R}\int\frac{d^{d}k}{(2\pi)^{d}}\frac{ie^{-ik\cdot x}}{k^{2}-\mu^{2}+i\epsilon}. \tag{10}\] This equation nicely illustrates the relation between the compactification of \((d+1)\) dimensions and the noncompact \(d\)-dimensional theory. Let's go back to bulk propagation; it is convenient to set \(\xi=0\), and we only consider the winding number integral (i.e., we ignore \(\int\frac{d^{d}k}{(2\pi)^{d}}e^{-ik\cdot x}\)): \[\Delta=\sum_{w=-\infty}^{\infty}\int\frac{dq}{2\pi}\frac{i}{k^{2}-q^{2}-\mu^{2}+i\epsilon}e^{-iq2\pi Rw}. \tag{11}\] Switching from a sum over windings to a sum over Kaluza-Klein momenta using the Poisson resummation identity[5] \[\sum_{w=-\infty}^{\infty}\int\frac{dq}{2\pi}f(q)e^{-iq2\pi Rw}=\frac{1}{2\pi R}\sum_{n=-\infty}^{\infty}f\left(\frac{n}{R}\right), \tag{12}\] this identity puts (11) into another form: \[\Delta=\frac{1}{2\pi R}\sum_{n=-\infty}^{\infty}\frac{i}{k^{2}-(n/R)^{2}-\mu^{2}+i\epsilon}. \tag{13}\] Now consider a special case: set \(d=4\) and consider the identification \((x^{\mu},\xi)\sim(x^{\mu}+a^{\mu},\xi+2\pi r)\). The bulk propagator in this case takes the form \[\Delta=\frac{1}{2\pi r}\sum_{n=-\infty}^{\infty}\frac{i}{k^{2}-(k\cdot b+\frac{n}{r})^{2}-\mu^{2}+i\epsilon}. \tag{14}\] A nonzero \(b^{\mu}\) breaks Lorentz invariance.
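As an aside, the Poisson resummation identity (12), and hence the equivalence of the winding-sum form (11) and the Kaluza-Klein form (13), can be verified numerically. The following is a minimal sketch using the test function \(f(q)=e^{-q^{2}}\), chosen purely for illustration; since \(f\) is even, only the cosine part of each Fourier integral survives.

```python
import numpy as np
from scipy.integrate import quad

# Check of Eq. (12) for f(q) = exp(-q^2):
#   sum_w int dq/(2pi) f(q) e^{-i q 2pi R w}  =  (1/(2pi R)) sum_n f(n/R).

R = 1.0
f = lambda q: np.exp(-q**2)

# Left-hand side: winding sum (terms decay extremely fast, |w| <= 3 suffices).
lhs = sum(
    quad(lambda q, w=w: f(q) * np.cos(2 * np.pi * R * w * q), -10, 10)[0]
    for w in range(-3, 4)
) / (2 * np.pi)

# Right-hand side: Kaluza-Klein momentum sum.
rhs = sum(f(n / R) for n in range(-20, 21)) / (2 * np.pi * R)

print(lhs, rhs)  # both ~0.28212 for R = 1
```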
### A gauge-invariant but Lorentz-violating QED effective action

Consider the action \[S= \int d^{5}x\left[\frac{1}{2}\partial_{M}\chi\partial^{M}\chi- \frac{1}{2}\mu^{2}\chi^{2}\right] \tag{15}\] \[+\int d^{4}x\left[\overline{\psi}(i\not{\partial}-m)\psi-\lambda \overline{\psi}\psi\chi\big{|}_{\xi=0}\right].\] This action means there is a real bulk scalar field \(\chi\) of mass \(\mu\) that has a Yukawa coupling to the electron, with coupling constant \(\lambda\). The one-loop electron self-energy arising from the Yukawa coupling to the bulk scalar, with its bulk propagator, is[5] \[i\Sigma= \frac{\lambda^{2}}{2\pi r}\sum_{n=-\infty}^{\infty}\int\frac{d^{4} k}{(2\pi)^{4}}\frac{\not{k}+m}{k^{2}-m^{2}+i\epsilon}\times \tag{16}\] \[\frac{1}{(k-p)^{2}-((k-p)\cdot b+\frac{n}{r})^{2}-\mu^{2}+i\epsilon}.\] Expanding this self-energy in powers of the external momentum \(p\), the first order in \(p^{\mu}\) produces a gauge-invariant but Lorentz-violating term in the effective action, \[{\cal L}=ic_{\mu\nu}\overline{\psi}\gamma^{\mu}D^{\nu}\psi, \tag{17}\] where \(D^{\nu}=\partial^{\nu}-ieA^{\nu}\) preserves gauge invariance, and the coefficient \(c_{\mu\nu}\) is \[c_{\mu\nu}=-\frac{1}{16\pi^{2}}\frac{\lambda^{2}}{\pi r}\left(b_{\mu}b_{\nu}- \frac{1}{4}\eta_{\mu\nu}b^{2}\right)I_{1}, \tag{18}\] \[I_{n}= \frac{1}{\sqrt{\pi}}\sum_{w=-\infty}^{\infty}\int_{0}^{\infty}ds \int_{0}^{\infty}dt\frac{s^{n}w^{2}}{\sqrt{t}(s+t)^{4}}\times\] \[\exp\left\{-s(\pi mr)^{2}-t(\pi\mu r)^{2}-\frac{s+t(1-b^{2})}{t(s+ t)}w^{2}\right\}. \tag{19}\]

## III Anomalous function of a 4-dimensional Lorentz-violating QED effective action

### General considerations

Before we calculate the anomaly term of the Lagrangian (17), let's do some preparation. Without loss of generality, consider a set of fermion fields \(\psi_{n}(x)\), which we encapsulate into a multiplet denoted \(\mathbf{\psi}(x)\). Consider now the following transformation of the fermion fields: \[\mathbf{\psi}(x)\to U(x)\mathbf{\psi}(x). \tag{20}\] The Hermitian conjugate of \(\mathbf{\psi}(x)\) transforms as \(\mathbf{\psi}^{\dagger}(x)\rightarrow\mathbf{\psi}^{\dagger}(x)U^{\dagger}(x)\), so that we have \[\overline{\mathbf{\psi}}(x)=\mathbf{\psi}^{\dagger}\gamma^{0}\rightarrow\mathbf{\psi}^{ \dagger}(x)U^{\dagger}(x)\gamma^{0}=\overline{\mathbf{\psi}}(x)\gamma^{0}U^{ \dagger}(x)\gamma^{0}. \tag{21}\] The measure is transformed with the inverse of the determinant of the transformation, \[{\cal D}[\mathbf{\psi}]{\cal D}[\overline{\mathbf{\psi}}]\rightarrow\frac{1}{det(U) det(\overline{U})}{\cal D}[\mathbf{\psi}]{\cal D}[\overline{\mathbf{\psi}}], \tag{22}\] where the matrices \(U\) and \(\overline{U}\) carry both indices for the fermion species and space-time indices: \[U_{xm,yn}=U_{m,n}(x)\delta(x-y), \tag{23}\] \[\overline{U}_{xm,yn}=\left(\gamma^{0}U^{\dagger}(x)\gamma^{0}\right)_{m,n} \delta(x-y). \tag{24}\] Split the spinor into right-handed and left-handed projections, \[\mathbf{\psi}_{R}=\left(\frac{1+\gamma^{5}}{2}\right)\mathbf{\psi},\quad\mathbf{\psi}_{L} =\left(\frac{1-\gamma^{5}}{2}\right)\mathbf{\psi}, \tag{25}\] and consider a chiral transformation \[U(x)=e^{i\alpha(x)\gamma^{5}t}, \tag{26}\] where \(t\) is a Hermitian matrix that does not contain Dirac matrices and \(\alpha(x)\) is a pseudoscalar function. Then \[\gamma^{0}U^{\dagger}(x)\gamma^{0}=\gamma^{0}e^{-i\alpha(x)\gamma^{5}t}\gamma^ {0}=e^{i\alpha(x)\gamma^{5}t}=U(x). \tag{27}\] Thus, \(\overline{U}=U\) and \(detU=det\overline{U}\). For a nontrivial anomaly function we need \(detU\neq 1\); then the measure is not invariant and transforms according to \[{\cal D}[\mathbf{\psi}]{\cal D}[\overline{\mathbf{\psi}}]\rightarrow\frac{1}{(detU)^{ 2}}{\cal D}[\mathbf{\psi}]{\cal D}[\overline{\mathbf{\psi}}]. \tag{28}\] And \[\frac{1}{(detU)^{2}}=\exp\left(-2\text{Tr}\ln U\right)=\exp\left(i\int d^{4}x\, \alpha(x){\cal A}(x)\right), \tag{29}\] where \({\cal A}(x)\) is called the anomaly function and \({\cal A}(x)=-2\delta(x-x)\,\mathrm{tr}(\gamma^{5}t)\).
In terms of this function, the measure transforms as \[{\cal D}[\mathbf{\psi}]{\cal D}[\overline{\mathbf{\psi}}]\to e^{i\int d^{4}x \alpha(x){\cal A}(x)}{\cal D}[\mathbf{\psi}]{\cal D}[\overline{\mathbf{\psi}}]. \tag{30}\] This factor can be absorbed into a redefinition of the Lagrangian, \[{\cal L}(x)\rightarrow{\cal L}(x)+i\alpha(x){\cal A}(x). \tag{31}\]

### Calculation of \({\cal A}(x)\)

To manipulate finite expressions, we must regularize the delta function. This can be done by \[{\cal A}(x)=-2\lim_{y\rightarrow x,M\rightarrow+\infty}\text{Tr}\left\{ \gamma^{5}t{\cal F}\left(-\frac{(c_{\mu\nu}\gamma^{\mu}D_{x}^{\nu})^{2}}{M^{2} }\right)\right\}\delta(x-y), \tag{32}\] where \(D_{x\mu}=\partial_{\mu}-ieA_{\mu}(x)\) and \({\cal F}(s)\) is called the regulator and satisfies \[{\cal F}(0)=1,\quad{\cal F}(+\infty)=0,\quad s{\cal F}^{\prime}(s)=0\quad at \quad s=0,+\infty. \tag{33}\] Then, we replace the delta function by its Fourier representation, \[\delta(x-y)=\int\frac{d^{4}k}{(2\pi)^{4}}e^{ik\cdot(x-y)}, \tag{34}\] which leads to \[\mathcal{A}(x)=-2\int\frac{d^{4}k}{(2\pi)^{4}}\lim_{M\rightarrow+\infty} \mathrm{Tr}\left\{\gamma^{5}t\mathcal{F}\left(-\frac{(ic_{\mu\nu}\gamma^{\mu}k^{\nu}+c_{\mu\nu}\gamma^{\mu}D_{x}^{\nu})^{2}}{M^{2}}\right)\right\}, \tag{35}\] where we have used the identity[16] \[\lim_{y\to x}\mathcal{F}(\partial_{x})e^{ik(x-y)}=\mathcal{F}(ik+ \partial_{x}). \tag{36}\] Redefining the integration variable, \(k\to Mk\): \[\mathcal{A}(x)=-2\lim_{M\rightarrow+\infty}M^{4}\int\frac{d^{4}k}{(2\pi)^{4}}\, \mathrm{Tr}\left\{\gamma^{5}t\mathcal{F}\left(B\right)\right\}, \tag{37}\] \[B=c_{\mu\nu}c_{\sigma}^{\mu}k^{\nu}k^{\sigma}-2i\frac{c_{\mu\nu}c_{\sigma}^{ \mu}k^{\nu}D_{x}^{\sigma}}{M}-\frac{(c_{\mu\nu}\gamma^{\mu}D_{x}^{\nu})^{2}}{ M^{2}}, \tag{38}\] where we have used the anticommutation relation \(\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}\). Expand the function \(\mathcal{F}(\cdot)\) in powers of \(1/M\). Terms with positive powers of \(M\) contain fewer than four Dirac matrices and therefore vanish inside the trace with \(\gamma^{5}\); the surviving finite term is \[\mathcal{A}(x)=-2\int\frac{d^{4}k}{(2\pi)^{4}}\mathcal{F}^{\prime\prime}(c_{ \mu\nu}c_{\sigma}^{\mu}k^{\nu}k^{\sigma})\mathrm{Tr}\left\{\gamma^{5}t\left(c _{\mu\nu}\gamma^{\mu}D_{x}^{\nu}\right)^{4}\right\}. \tag{39}\] Define \(l_{\nu}=c_{\mu\nu}k^{\mu}\) and note that \(det(c_{\mu\nu})\) is a constant. Then, by the Wick rotation \(l^{0}=i\kappa\), we obtain \[\int d^{4}k\mathcal{F}^{\prime\prime}(c_{\mu\nu}c_{\sigma}^{\mu}k^{\nu}k^{ \sigma})=\int d^{4}l\frac{1}{det(c_{\mu\nu})}\mathcal{F}^{\prime\prime}(l^{2} )=\frac{i\pi^{2}}{det(c_{\mu\nu})}, \tag{40}\] and \((c_{\mu\nu}\gamma^{\mu}D_{x}^{\nu})^{2}=c_{\mu\nu}c_{\sigma}^{\mu}D^{\nu}D^{ \sigma}-\frac{ie}{4}c_{\mu\nu}c_{\rho\sigma}F^{\mu\rho}\left[\gamma^{\nu}, \gamma^{\sigma}\right]\). Using the identity \[\mathrm{Tr}\left(\gamma^{5}\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{ \sigma}\right)=4i\epsilon_{\mu\nu\rho\sigma}, \tag{41}\] we obtain \[\mathcal{A}(x)=-\frac{e^{2}}{16\pi^{2}det(c)}\epsilon^{\mu\rho\alpha\gamma} \mathrm{Tr}\left(tc_{\mu\nu}c_{\rho\sigma}c_{\alpha\beta}c_{\gamma\lambda}F^{ \nu\sigma}F^{\beta\lambda}\right). \tag{42}\] Here the matrix \(t\) can act on flavors; for example, one can consider the u and d quark sector. But here we just consider one of these cases: \[\mathcal{A}(x)=-\frac{e^{2}}{16\pi^{2}det(c)}\epsilon^{\mu\rho\alpha\gamma}c_ {\mu\nu}c_{\rho\sigma}c_{\alpha\beta}c_{\gamma\lambda}F^{\nu\sigma}F^{\beta \lambda}. \tag{43}\]
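Before moving on, the key Wick-rotated integral in (40) can be checked numerically: in Euclidean radial variables, \(\int d^{4}l\,\mathcal{F}^{\prime\prime}(l^{2})\) reduces to \(\pi^{2}\int_{0}^{\infty}du\,u\,\mathcal{F}^{\prime\prime}(u)=\pi^{2}\left[\mathcal{F}(0)-\mathcal{F}(\infty)\right]=\pi^{2}\), independent of the regulator. A minimal sketch, choosing \(\mathcal{F}(s)=e^{-s}\) purely for illustration:

```python
import numpy as np
from scipy.integrate import quad

# For F(s) = exp(-s), the second derivative is again exp(-s). The radial
# integral pi^2 * int_0^inf du u F''(u) should equal pi^2 for any regulator
# satisfying the boundary conditions in Eq. (33).
F2 = lambda s: np.exp(-s)

radial, _ = quad(lambda u: u * F2(u), 0, np.inf)
print(np.pi**2 * radial, np.pi**2)  # both ~9.8696
```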
## IV Relation between compact bulk scalar propagator and path integral duality

Let's go back to (13) and recover the Green's function form. For the convenience of the following discussion, we change the metric signature to \(\eta^{new}_{MN}=\)diag\((-,+,+,+,+)\); this change of metric has the effect \(-(n/R)^{2}\rightarrow(n/R)^{2}\) and \(-\mu^{2}\rightarrow\mu^{2}\): \[G_{R}^{(d+1)}=\frac{i}{2\pi R}\sum_{n=-\infty}^{\infty}\int\frac{d^{d}k}{(2 \pi)^{d}}\frac{e^{-ik\cdot x}}{k^{2}+(n/R)^{2}+\mu^{2}}. \tag{44}\] We treat this equation by constructing Schwinger's proper-time version of the propagator, \[G_{R}^{(d+1)}=\frac{i}{2\pi R}\sum_{n=-\infty}^{\infty}\int \frac{d^{d}k}{(2\pi)^{d}}e^{-ik\cdot x}\int_{0}^{\infty}ds \tag{45}\] \[\times e^{-s\left(k^{2}+(n/R)^{2}+\mu^{2}\right)}.\] The integral over \(k^{\mu}\) is a Gaussian quadratic integral, so \[G_{R}^{(d+1)}=\frac{i}{2\pi R}\sum_{n=-\infty}^{\infty}\int_{0}^{\infty}\frac {ds}{(4\pi s)^{d/2}}\exp\left\{-\frac{x^{2}}{4s}-s\left[(n/R)^{2}+\mu^{2}\right] \right\}. \tag{46}\] For a clearer view, it is advisable to discretize the integral over \(s\): \(\int_{0}^{\infty}ds\rightarrow\sum_{s\in\mathbb{N}}\), \[G_{R}^{(d+1)}=\frac{i}{2\pi R}\sum_{n\in\mathbb{Z},s\in\mathbb{N}}\frac{1}{(4 \pi s)^{d/2}}\exp\left\{-\frac{x^{2}}{4s}-s\left[(n/R)^{2}+\mu^{2}\right]\right\}. \tag{47}\] For the term with \(s=1\) and \(n=1\), if we take \(\frac{1}{R}=L_{0}\), where \(L_{0}\) is the zero-point length, (47) can be interpreted as adding a zero-point length to the spacetime interval (i.e., the modification \((x-y)^{2}\rightarrow(x-y)^{2}+L_{0}^{2}\)), so we actually obtain the same result as in path integral duality.

## V Conclusion

This work mainly completed two things: (i) calculating the anomalous function of \(\mathcal{L}=i\overline{\psi}c_{\mu\nu}\gamma^{\mu}D^{\nu}\psi\) through the path integral, and (ii) discovering that the compact bulk scalar propagator and path integral duality are consistent. The second discovery is quite interesting, as this relation directly brings a new perspective to future research: the breaking of Lorentz invariance caused by dimensional compactification can be seen as path integral duality.

###### Acknowledgements.

Thanks to **Dan Wohns at the Perimeter Institute**: in the Perimeter Scholars International (PSI) START Program's 2023 mini-project, he provided me with the project "Quantum anomalous effects under path integral", through which I started to learn how to use the path integral to calculate the anomaly. Thanks to **Tigao Adorno at Xi'an Jiaotong-Liverpool University** for his support and for checking that \(\alpha(x)\) needs to be a pseudoscalar to preserve the parity symmetry of the action.
2308.16669
Modelling of highly extended Gamma-ray emission around the Geminga Pulsar as detected with H.E.S.S
Geminga is an enigmatic radio-quiet gamma-ray pulsar located at a mere 250 pc distance from Earth. Extended very-high-energy gamma-ray emission around the pulsar has been detected by multiple water Cherenkov detector based instruments. However, the detection of extended TeV gamma-ray emission around the Geminga pulsar has proven challenging for IACTs due to the angular scale exceeding the typical field-of-view. By detailed studies of background estimation techniques and characterising systematic effects, a detection of highly extended TeV gamma-ray emission could be confirmed by the H.E.S.S. IACT array. Building on the previously announced detection, in this contribution we further characterise the emission and apply an electron diffusion model to the combined gamma-ray data from the H.E.S.S. and HAWC experiments, as well as X-ray data from XMM-Newton.
A. M. W. Mitchell, S. Caroff
2023-08-31T12:22:10Z
http://arxiv.org/abs/2308.16669v1
# Modelling of highly extended Gamma-ray emission around the Geminga Pulsar as detected with H.E.S.S.

###### Abstract:

Geminga is an enigmatic radio-quiet gamma-ray pulsar located at a mere 250 pc distance from Earth. Extended very-high-energy gamma-ray emission around the pulsar has been detected by multiple water Cherenkov detector based instruments. However, the detection of extended TeV gamma-ray emission around the Geminga pulsar has proven challenging for IACTs due to the angular scale exceeding the typical field-of-view. By detailed studies of background estimation techniques and characterising systematic effects, a detection of highly extended TeV gamma-ray emission could be confirmed by the H.E.S.S. IACT array. Building on the previously announced detection, in this contribution we further characterise the emission and apply an electron diffusion model to the combined gamma-ray data from the H.E.S.S. and HAWC experiments, as well as X-ray data from XMM-Newton.

## 1 Introduction

Geminga (PSR J0633+1746) is a middle-aged pulsar (\(\tau_{c}=342\) kyr) in close proximity to Earth (\(d=250\) pc), with a spin-down luminosity of \(\dot{E}=3.2\times 10^{34}\)erg/s, that is radio quiet, yet exhibits pulsed gamma-ray emission. The detection of extended \(\gamma\)-ray emission coincident with the pulsar was first achieved by Milagro [1] and subsequently verified by HAWC [2], yet the angular scale of \(\gtrsim 2^{\circ}\) posed a challenge for Imaging Atmospheric Cherenkov Telescopes (IACTs). The angular scale of the very-high-energy (VHE, \(E\gtrsim 100\) GeV) \(\gamma\)-ray emission of \(\sim 5.5^{\circ}\) is considerably larger than that of the associated X-ray pulsar wind nebula (PWN), of \(\sim 10^{\prime}\)[3]. Given that the majority of PWNe that are detected in VHE \(\gamma\)-rays are associated with young, energetic pulsars, and that at these later stages the structure of the former PWN has been disrupted such that particles can leak out into the surrounding interstellar medium (ISM), it was proposed that the \(\gamma\)-ray emission surrounding Geminga (and the nearby companion PSR B0656+14) forms a distinct class in the evolutionary history of pulsar environments, termed 'pulsar halos' (or 'TeV halos', where the latter is a popular yet ambiguous term) [4, 5]. A key distinguishing feature between PWNe and pulsar halos is the average energy density in electrons responsible for the \(\gamma\)-ray emission via inverse Compton scattering, which for PWNe is higher and for halos lower than that typical of the surrounding ISM [4, 6]. With improving performance and exposure of ground-based particle detector facilities such as HAWC and LHAASO, the \(\gamma\)-ray sky has continued to reveal an increasing number of pulsar halo systems [6]. The morphology of the emission detected with HAWC around the Geminga pulsar indicated that the diffusion coefficient in the vicinity of the pulsar is a factor \(\sim 100\) below the Galactic average expected for the ISM. Several scenarios have been suggested to reconcile the two, such as suppressed diffusion due to turbulence in the vicinity of the pulsar [7]. Accounting for analysis differences between experiments, H.E.S.S. was able to detect the presence of extended \(\gamma\)-ray emission around the Geminga pulsar [8, 9].
To accommodate the large angular size, an observation campaign was conducted in 2019 with telescope pointing offsets of \(1.6^{\circ}\) (much larger than the usual \(\sim 0.7^{\circ}\)), from which a detailed analysis and modelling could be performed. These proceedings provide a summary of the key analysis results and focus on the modelling, where we endeavour to perform a joint fit combining data from HAWC and XMM-Newton to place constraints on the diffusion properties.

## 2 H.E.S.S. Data Analysis

In [8], the H.E.S.S. Collaboration reported the significant detection of extended gamma-ray emission around the Geminga pulsar, out to at least \(3.2^{\circ}\) radius. An excess counts sky map constructed using the On-Off background estimation method is shown in figure 1. The 2019 dataset provided 27.2 h exposure with observations obtained at offsets of \(\pm 1.6^{\circ}\) from the location of the Geminga pulsar. Background normalisation was hence performed on data beyond \(3.2^{\circ}\) (twice the angular pointing offset). This limitation to the sky region meant that the full extent of the emission could not be measured, yet a relative measurement indicating a significant excess above background level was nevertheless found. Within the innermost \(1^{\circ}\), a significance of \(\sim 9-10\,\sigma\) was obtained with different background estimation methods. A spectral analysis was performed on this region, indicated by a white dashed line in figure 1. A power law spectral model was fit to the data, \(\frac{dN}{dE}=\phi_{0}\left(\frac{E}{E_{0}}\right)^{-\Gamma}\), with best-fit spectral index \(\Gamma=2.76\pm 0.22\) and flux normalisation at 1 TeV of \((2.8\pm 0.7)\times 10^{-12}\)cm\({}^{-2}\)s\({}^{-1}\)TeV\({}^{-1}\). The spectral results and radial profile are shown together with best-fit models below. The centroid of the \(\gamma\)-ray emission across the energy range \(0.5\,\mathrm{TeV}-40\,\mathrm{TeV}\) was found to be located at an offset of \(0.6^{\circ}\) from the pulsar, at R.A. \(99.1^{\circ}\pm 0.1^{\circ}\pm 0.5^{\circ}\) and Dec. \(17.7^{\circ}\pm 0.1^{\circ}\pm 0.5^{\circ}\), which is nevertheless compatible with the pulsar position within the systematic errors. Evaluating the 68% containment radii in different energy bands, no evidence for statistically significant energy-dependent morphology was found.

## 3 Diffusion Model

To describe the \(\gamma\)-ray emission, we consider a scenario of electrons diffusing away from the pulsar within a halo region, where the diffusion coefficient has a dependence on energy as \(D(E)=D_{0}(E/E_{0})^{\,\delta}\) with \(\delta\in[0.3,1]\). The pulsar is considered as a point-like continuous source of electrons, for which we take energy-dependent diffusion and energy losses into account. We solve the diffusive transport equation: \[\partial_{t}N(E,\vec{r},t)-D(E)\Delta N(E,\vec{r},t)+\partial_{E}[b(E)N(E, \vec{r},t)]=Q(E,t)\,\delta(\vec{r}-\vec{r_{s}}), \tag{1}\] where the source term \(Q(E,t)\) depends on the energy released by the pulsar and describes the injection of electrons into the pulsar environment: \[Q(E,t)=Q_{0}(1+t/\tau_{0})^{-\frac{(n+1)}{(n-1)}}(E/E_{0})^{-\,\alpha}\exp{(-E /E_{c})}, \tag{2}\] with an initial spin-down timescale \(\tau_{0}\) and braking index \(n\).

Figure 1: Excess counts sky map of the region around the Geminga pulsar using 2019 data from the H.E.S.S. experiment, analysed with an On-Off background method [8]. The location of the Geminga pulsar is indicated with a green triangle.
White dashed and dotted circles indicate the \(1^{\circ}\) and \(3.2^{\circ}\) radius regions used for the spectral analysis and the radial profile, respectively.

The solution adopted for the diffusion equation is: \[N(E,r,T_{*})=\int_{0}^{T_{*}}\mathrm{d}t_{0}\frac{b(E_{s}(E,t_{0},T_{*}))}{b(E)} \frac{1}{(\pi\lambda^{2}(t_{0},T_{*},E))^{3/2}}\times\exp\left(-\frac{r^{2}+r_ {s}^{2}(t_{0})}{\lambda^{2}(t_{0},T_{*},E)}\right)Q(E_{s}(E,t_{0},T_{*}),t_{0} )\,, \tag{3}\] where the subscript \({}_{*}\) indicates properties of the pulsar at the current time and \(\lambda\) is the diffusion length. Table 1 summarises several parameters of the model, including their fixed and/or scanned values as appropriate. Figure 2 shows the energy loss time and the diffusion radius. Electrons with energies \(\lesssim 1\) TeV have not yet cooled, as the loss timescale is larger than the age of the pulsar. Correspondingly, the peak diffusion radius also occurs at around 1 TeV, above which the expected size due to diffusion decreases with increasing energy. At the energy threshold of H.E.S.S., the diffusion radius is larger than the field of view of the H.E.S.S. telescopes.

## 4 Modelling Results

To obtain the best-fit model to the HAWC, H.E.S.S. and XMM-Newton data, we performed a parameter scan over variables of the diffusion model as listed in table 1. Five variables (\(n,\eta,\alpha,B,\delta\)) were scanned over three values each, yielding a total of 243 different parameter combinations. The normalisation of the diffusion coefficient was always left as a free parameter of the fit. A global minimisation procedure was found not to converge, as multiple parameter combinations could yield comparably consistent matches to the data. A combination of model parameters was considered a good fit to the data if a p-value of \(>0.003\) was obtained, a criterion achieved by 53 out of the 243 parameter combinations. The process was repeated with both the cut-off energy of the electron spectrum \(E_{c}\) fixed to 1 PeV and left free to vary. Figure 3 shows the distribution of fitted \(E_{c}\) for models with p-value \(>0.003\); as expected, this depends strongly on the assumed index \(\alpha\) of the electron injection spectrum. The best-fit normalisation for the diffusion coefficient is found to be systematically less than the Galactic value derived from the cosmic ray B/C ratio (Figure 3).

Figure 2: Properties of the electron diffusion model applied. Left: electron energy loss timescale, and Right: electron diffusion radius, both as functions of electron energy and magnetic field strength. The H.E.S.S. energy threshold of \(\sim\)1 TeV applied to this analysis corresponds to an electron energy of \(\gtrsim 10\) TeV.
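As an aside, the scale of the parameter scan just described, together with the injection spectrum of Eq. (2), can be organised in a few lines of Python. The grid values follow Table 1, while the default constants in the injection function (\(Q_{0}\), \(\tau_{0}\), \(E_{c}\)) are illustrative placeholders rather than the values used in the actual fit.

```python
import itertools
import numpy as np

# Five scanned variables with three values each -> 3^5 = 243 combinations.
grid = {
    "n":     [1.5, 3.0, 4.5],    # braking index
    "eta":   [0.01, 0.1, 0.5],   # electron efficiency
    "alpha": [1.8, 2.0, 2.2],    # injection index
    "B_uG":  [1.0, 3.0, 5.0],    # ambient magnetic field
    "delta": [0.3, 0.6, 1.0],    # diffusion index
}

def injection(E, t, n, alpha, Q0=1.0, tau0=1e4, E0=100.0, Ec=1e3):
    """Eq. (2): time-decaying power law with exponential cut-off (E in TeV).
    Q0, tau0 and Ec defaults are placeholders for illustration."""
    return Q0 * (1 + t / tau0) ** (-(n + 1) / (n - 1)) \
              * (E / E0) ** (-alpha) * np.exp(-E / Ec)

combos = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
print(len(combos))  # 243
print(injection(E=10.0, t=3.42e5, n=combos[0]["n"], alpha=combos[0]["alpha"]))
```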
This galactic diffusion scenario is defined as \(n=3\), \(\eta=0.5\), \(\alpha=1.8\), \(E_{c}=74\) TeV, \(\delta=0.3\), \(B=3\,\mu\)G [12] and \(D_{0}\) fixed to B/C diffusion values obtained under different assumptions of the diffusive halo height [13]. \begin{table} \begin{tabular}{c c} Parameter & Value(s) \\ \hline Braking index, \(n\) & [1.5, 3, 4.5] \\ Initial period, \(P_{0}\) & 15 ms \\ Transverse velocity, \(V_{T}\) & 211 km/s \\ Electron efficiency, \(\eta\) & [0.01, 0.1, 0.5] \\ Electron injection index, \(\alpha\) & [1.8, 2.0, 2.2] \\ Electron energy cut-off, \(E_{c}\) & [free,1 PeV] \\ Ambient magnetic field, \(B\) & [1, 3, 5] \(\mu\)G \\ Diffusion coefficient normalisation, \(D_{0}\) & free \\ Diffusion index, \(\delta\) & [0.3, 0.6, 1] \\ \end{tabular} \end{table} Table 1: Input parameters for the diffusion model, where \(D_{0}\) is the normalisation at an electron energy \(E_{0}=100\) TeV. Properties of the pulsar are set to known values where available (e.g. age \(T_{c}\), distance \(d\), and the luminosity and spin period at the current time, \(L_{*}\), \(P_{*}\)) Figure 3: Distribution of best-fit values for all parameter combinations resulting in a p-value \(<0.003\). Left: correlation between injection index and cut-off energy when \(E_{c}\) is left free. Right: Best-fit diffusion coefficient in the case of an energy cut-off fixed to 1 PeV and left free to vary. ## 5 Conclusion With this work, we show that the \(\gamma\)-ray emission detected by H.E.S.S. in the vicinity of the Geminga pulsar [8] is consistent with that measured by [2] in preferring a normalisation of the diffusion coefficient considerably below the galactic average. The detectability of extended \(\gamma\)-ray emission around the Geminga pulsar for both H.E.S.S. and HAWC would have been impossible in a case of a faster diffusion such as that expected in the galactic diffusion scenario. The discrepancy is particularly clear when the model with typical galactic diffusion values is directly compared with data. Based on our investigation of X-ray upper limits within a \(10^{\prime}\) region surrounding the pulsar, we can draw the conclusion that in a scenario involving a single diffusion zone, and assuming a constant magnetic field spanning the X-ray to \(\gamma\)-ray range, the magnetic field must be less than \(1\,\mu G\) in the absence of a sub-PeV energy cut-off. To account for a magnetic field of \(1\,\mu G\), a lower energy cut-off below \(75\,\)TeV is necessary. In conclusion, a scenario comprising galactic-like diffusion and magnetic field properties in the vicinity of Geminga would imply that the halo of electrons would be undetectable in VHE \(\gamma\)-rays by both HAWC and H.E.S.S., and potentially detectable in X-ray. Observational evidence Figure 4: Diffusion model jointly fit to the HAWC, H.E.S.S. and XMM-Newton data. For the two ground-based instruments radial profiles on degree scales are provided. The XMM-Newton upper limit is extracted from a \(10^{\prime}\) radius region, around the pulsar, from which a corresponding H.E.S.S. flux point is also extracted. now indicates that the converse is actually the case, hence the modelling results are consistent with a diffusion coefficient considerably below galactic average values in the vicinity of the Geminga pulsar.
2309.16540
Unsupervised Pretraining for Fact Verification by Language Model Distillation
Fact verification aims to verify a claim using evidence from a trustworthy knowledge base. To address this challenge, algorithms must produce features for every claim that are both semantically meaningful, and compact enough to find a semantic alignment with the source information. In contrast to previous work, which tackled the alignment problem by learning over annotated corpora of claims and their corresponding labels, we propose SFAVEL (Self-supervised Fact Verification via Language Model Distillation), a novel unsupervised pretraining framework that leverages pre-trained language models to distil self-supervised features into high-quality claim-fact alignments without the need for annotations. This is enabled by a novel contrastive loss function that encourages features to attain high-quality claim and evidence alignments whilst preserving the semantic relationships across the corpora. Notably, we present results that achieve a new state-of-the-art on FB15k-237 (+5.3% Hits@1) and FEVER (+8% accuracy) with linear evaluation.
Adrián Bazaga, Pietro Liò, Gos Micklem
2023-09-28T15:53:44Z
http://arxiv.org/abs/2309.16540v3
# Unsupervised Fact Verification by Language Model Distillation

###### Abstract

Unsupervised fact verification aims to verify a claim using evidence from a trustworthy knowledge base without any kind of data annotation. To address this challenge, algorithms must produce features for every claim that are both semantically meaningful, and compact enough to find a semantic alignment with the source information. In contrast to previous work, which tackled the alignment problem by learning over annotated corpora of claims and their corresponding labels, we propose SFAVEL (Self-supervised Fact Verification via Language Model Distillation), a novel unsupervised framework that leverages pre-trained language models to distil self-supervised features into high-quality claim-fact alignments without the need for annotations. This is enabled by a novel contrastive loss function that encourages features to attain high-quality claim and evidence alignments whilst preserving the semantic relationships across the corpora. Notably, we present results that achieve a new state-of-the-art on the standard FEVER fact verification benchmark (+8% accuracy) with linear evaluation.

## 1 Introduction

In recent years, the issue of automated fact verification has gained considerable attention as the volume of potentially misleading and false claims rises (Guo et al., 2022), resulting in the development of fully automated methods for fact checking (see Thorne et al. (2018); Zubiaga et al. (2018); Guo et al. (2022); Vladika & Matthes (2023); Das et al. (2023) for recent surveys). Pioneering research in the field of Natural Language Processing (NLP) has led to the emergence of (large) language models (LMs) (e.g. Raffel et al. (2020); Brown et al. (2020); Radford et al. (2019, 2018)). These models have been successful in many applications due to the vast implicit knowledge contained within them, and their strong capabilities for semantic understanding of language. However, issues around fact hallucination have gained considerable attention (Huang et al., 2023; Liu et al., 2023) and are a major concern in the widespread usage of LLM-based applications across different settings. As the world becomes more aware of the issues around information trustworthiness, the importance of developing robust fact verification techniques grows ever more critical. Historically, the design of fact verification methods has been enabled by the creation of annotated datasets, such as FEVER (Thorne et al., 2018) or MultiFC (Augenstein et al., 2019), of appropriate scale, quality, and complexity in order to develop and evaluate models for fact checking. Most recent methods for this task have been dominated by two approaches: natural language inference (NLI) models (e.g., Si et al. (2021); Zhu et al. (2021); Thorne et al. (2018); Luken et al. (2018); Yin & Roth (2018); Ye et al. (2020)), and knowledge graph-augmented methods (e.g. Zhou et al. (2019); Zhong et al. (2020); Chen et al. (2021a,b); Liu et al. (2021)). These proposals mainly leverage NLI methods to model the semantic relationship between claim and evidence, or further make use of the knowledge graph (KG) structure to capture the features underlying multiple pieces of evidence. However, these studies have largely relied on annotated data for model training, and while gathering data is often not difficult, its labeling or annotation is always time-consuming and costly.
Thus, an emerging trend in the literature (Chen et al., 2020a;b; Caron et al., 2020; He et al., 2020) is to move away from annotation-dependent methods and try to learn patterns in the data using unsupervised training methods. With the advent of unsupervised training methods, new avenues have opened for research into leveraging the huge amounts of unlabeled data to achieve better performance more efficiently. Despite significant advancements in the field of unsupervised learning, only a handful of strategies have been proposed for textual fact verification (e.g. Jobanputra (2019); Kim and Choi (2020); Jolly et al. (2022); Zeng and Gao (2023)). Thus there are still opportunities for the development of unsupervised techniques tailored specifically for such tasks. Following recent trends in unsupervised methods, we eliminate data annotation requirements and instead, without human supervision, automatically try to identify relevant evidence for fact-checking. Thus, in this paper we present SFAVEL (Self-supervised Fact Verification via Language Model Distillation), which introduces a novel self-supervised feature representation learning strategy with well-designed sub-tasks for automatic fact verification. SFAVEL leverages pre-trained features from language models and focuses on distilling them into compact and discrete structures that attain a high alignment between the textual claims to be verified and their corresponding evidence in the knowledge graph. In particular, our contributions are summarized as follows:

* We introduce Self-supervised Fact Verification via Language Model Distillation (SFAVEL), a novel unsupervised method tailored for fact verification on textual claims and knowledge graph-based evidence by language model distillation.
* We demonstrate that SFAVEL achieves state of the art performance on the FEVER fact verification challenge when compared to both previous supervised and unsupervised approaches.
* We justify SFAVEL's design decisions with ablation studies on the main architectural components.

## 2 Related Work

**Fact verification with pre-trained language models.** Most recent works typically divide the fact verification task into two stages. The first stage retrieves a relatively small subset of evidence from a knowledge source (e.g. a knowledge graph) that is relevant to verify a given claim. The second stage performs reasoning over the retrieved evidence to discern the veracity of the claim. Such retrieval-and-reasoning approaches aim to reduce the search space, and have proven their superiority over directly reasoning on the whole knowledge graph (Chen et al., 2019; Saxena et al., 2020). In order to match evidence with claims, a typical approach is to devise claim-fact similarities using semantic matching with neural networks. Due to the great semantic understanding recently demonstrated by pre-trained language models (PLMs), some recent works employ PLMs for addressing the claim-fact semantic matching task. In this vein, some works exploit the implicit knowledge stored within LMs for performing zero-shot fact checking, without any external knowledge or explicit evidence retrieval (Lee et al., 2020; Yu et al., 2023). However, such methods are prone to suffer from hallucination errors, resulting in incorrect predictions.
Other work, such as ReAct (Yao et al., 2022), explores the use of LMs to generate both reasoning traces and task-specific actions over a knowledge base by in-context learning via prompting, overcoming prevalent hallucination issues by interacting with a Wikipedia API. However, such an approach is limited by input length, making it impractical in complex tasks. In SFAVEL, we distill the features of recent pre-trained language models to yield highly-correlated claim and evidence embeddings. We make use of a set of 7 language models as backbones because of their quality, but note that SFAVEL can work with any language model features.

**Unsupervised pre-training methods for fact verification.** Learning meaningful features for claim-fact matching without human labels is a nascent research direction in fact verification approaches, with recent works relying on self-supervised techniques. For instance, CosG (Chen et al., 2021d) proposes a graph contrastive learning approach to learn distinctive representations for semantically similar claims with differing labels, with the goal of mitigating the over-smoothing issues commonly found in graph-based approaches. The model incorporates both unsupervised and supervised contrastive learning tasks to train a graph convolutional encoder, enhancing the representation of claim-fact pairs in the embedding space. Mu et al. (2023) present SSDL, a multi-task learning strategy that initially builds a student classifier using both self-supervised and semi-supervised methods, and then fine-tunes the classifier using distilled guidance from a larger teacher network that remains frozen during training. However, this method requires pairs of text claims and corresponding visual information as training data. Chen et al. (2021c) introduce KEGA, a knowledge-enhanced graph attention network for fact verification, which uses external knowledge bases to improve claim and evidence representations. It uses a contrastive learning loss to capture graph structure features. With BERT as its backbone, the model includes knowledge from WordNet and pre-trained knowledge base embeddings to enrich token representations, while a graph attention network (Velickovic et al., 2018) and the contrastive loss further enhance the model's ability to reason. LaPraDaOR (Xu et al., 2022) introduces a pre-trained dense retriever approach with contrastive learning for unsupervised training of query and document encoders. The method is applied to a variety of text retrieval challenges, with FEVER being one of them. However, it shows a significant performance gap when compared against the supervised state-of-the-art approaches for FEVER. One of the main reasons is the lack of task-specific contrastive functions. In contrast to these works, SFAVEL is designed with a task-specific self-supervised feature representation strategy, leveraging language model distillation to achieve high-quality claim-fact unsupervised matching on large scale datasets for fact verification.

**Knowledge distillation.** Knowledge distillation seeks to transfer the knowledge from a (usually large) model, called the teacher, to another (usually small) model, called the student. This technique is often used for increasing the performance of the small model. One of the first approaches for knowledge distillation was proposed by Hinton et al. (2015), via minimizing the KL-divergence between the teacher's and student's logits, using the predicted class probabilities from the teacher as soft labels to guide the student model.
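To make the preceding description concrete, the following is a minimal sketch of Hinton-style logit distillation, with the temperature and logits chosen purely for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_kl(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened class probabilities,
    as in Hinton et al. (2015). T is an illustrative temperature."""
    p = softmax(teacher_logits / T)  # soft labels from the frozen teacher
    q = softmax(student_logits / T)  # student predictions
    return float(np.sum(p * np.log(p / q)))

teacher = np.array([4.0, 1.0, 0.5])
student = np.array([2.5, 1.5, 1.0])
print(distillation_kl(teacher, student))
```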
Instead of imitating the teacher's logits, Romero et al. (2015) distilled knowledge by minimizing the \(\mathbb{L}_{2}\) distance between the intermediate outputs of the student and teacher. Park et al. (2019) aligned the pair-wise similarity graph of the student with that of the teacher, and Zagoruyko & Komodakis (2017) used the attention map generated by the teacher to force the student to attend to the same areas as the teacher. More recently, knowledge distillation has been extended to self-supervised settings. For instance, Tian et al. (2022) use the contrastive loss to enforce cross-modality consistency. Xu et al. (2020) and Fang et al. (2021) aim at aligning the features between views of the same instances by computing pair-wise similarities between the student's outputs and features kept in a feature memory bank produced by the teacher. In this work, we propose to transfer the semantic knowledge of a pre-trained language model within a new type of self-supervised task, fact verification, by leveraging the language model's capabilities of language understanding to guide the student to produce high-quality features for claim-fact matching. ## 3 Overview of the approach In this section we present our proposed unsupervised approach, SFAVEL, in detail, as illustrated in Figure 1(a). First, we begin with the data processing pipeline. Then, we detail our proposed pre-training methodology, followed by details of the different components of our proposed contrastive loss function. Finally, we describe the adaptation phase, where we fine-tune the model pre-trained with our framework for a downstream fact verification task. ### Data processing pipeline Throughout this section, we assume to have sampled a batch of unlabeled claims \(x=\{x_{i}\}_{i=1}^{N}\), with \(x_{i}\) being the \(i^{th}\) claim and \(N\) the batch size. In addition, we assume access to a knowledge base represented as a knowledge graph \(G\) with facts represented as triples of subject, relation, object, named head, relation and tail. More formally, let \(H=\{h_{1},\ldots,h_{|H|}\}\) and \(T=\{t_{1},\ldots,t_{|T|}\}\) be sets of head and tail entities, respectively, in the fact set \(\mathbb{V}=\{y_{1},\ldots,y_{|\mathbb{V}|}\}\), where \(H,T\subseteq\varepsilon\), with \(\varepsilon\) depicting the set of real-world entities associated with a name (e.g. Barack Obama, New York City). Then, \(G\) can be defined as \(G=\{(h_{i},r_{i},t_{i})\mid h_{i}\in H,\,t_{i}\in T,\,r_{i}\in R\}\), where \(i\in\{1,\ldots,|\mathbb{V}|\}\). Here, \(R=\{r_{i}\}_{i=1}^{|R|}\) describes the relationship types between entities (e.g. \(was\ born\ in\)). ### Pretraining method As shown in Figure 1(a), our pre-training approach uses a pre-trained language model to obtain a feature tensor for each of the input claims. In SFAVEL, we use a knowledge model to embed facts from the knowledge graph, and a scoring module that scores each of such facts conditioned upon a specific claim. To tackle the discrepancy in information-level between the features from the pre-trained language model and the knowledge model, we introduce an unsupervised distillation loss that encourages the representation of the claim and its related knowledge facts to be mapped close together in the feature space of the knowledge model. 
A scoring loss function encourages the scoring module to provide higher scores for positive facts than for randomly-generated negative facts. To avoid the network finding trivial solutions where both positive and negative facts are given similar scores, a contrastive loss is used to encourage the separation, within the feature space, of the features representing the positive and negative facts. First, the knowledge model is initialized randomly and the backbone is initialized from any off-the-shelf pre-trained language model (e.g. a T5; Raffel et al. (2020)). In this work, the backbone \(f_{L}\) is kept frozen during the entire training and is only used to obtain a feature tensor for each claim, denoted as \(X_{LM}\), which is used for distillation on the much smaller knowledge model. In order to obtain a single tensor representation per claim, we take the global average pooling (GAP) of the backbone features for each claim. For the knowledge model, we utilize a Relational Graph Attention Network (Busbridge et al., 2019).

Figure 1: (a) A high-level overview of the SFAVEL framework. Given a textual claim, we use a frozen language model (orange box) to obtain its embedding features, \(X_{LM}\). The knowledge base is fed to the knowledge model to produce a knowledge base embedding \(X_{F}\). Then, the scoring module produces scores for facts in the knowledge base, conditioned upon the claim embedding. The positive sub-graph formed by the top \(K\) facts is kept, denoted as \(X_{F}^{+}\). Next, a negative pool of instances \(\mathcal{N}\) is generated. Finally, both the positive and negative sub-graphs are encoded with the knowledge model, obtaining the positive and negative sub-graph embeddings, \(X_{F}^{+}\) and \(X_{F}^{-}\), and their respective scores, \(S^{+}\) and \(S^{-}\). Grey boxes represent the three different components of our self-supervised loss function used to train the knowledge model. (b) Prediction stage on a downstream task using the pre-trained model.

Next, the knowledge model, \(f_{G}:G\rightarrow\mathbb{R}^{|e|\times d_{V}}\), maps the input knowledge graph, \(G\), into \(X_{KB}\in\mathbb{R}^{|e|\times d_{V}}\), where \(|e|\) is the number of entities in \(G\) and \(d_{V}\) is the feature space dimensionality. In order to obtain a single feature tensor for each fact in \(\mathbb{V}\), we use a multilayer perceptron (MLP) that combines the head and tail embeddings of each fact into a single tensor, denoted as \(X_{F}\in\mathbb{R}^{|\mathbb{V}|\times d_{T}}\), where \(d_{T}\) is the fact embedding dimensionality. Then, given the fact embeddings, \(X_{F}\), and the claim embeddings, \(X_{LM}\), we propose a score function, \(f_{score}\). The goal of \(f_{score}\) is to measure how likely it is that a given fact is in the same context as the corresponding claim. Specifically, the calculation of \(f_{score}\) for a single claim embedding, \(x\in X_{LM}\), and a set of fact embeddings, \(F\subseteq X_{F}\), is defined as: \[f_{\text{score}}(x_{claim},x_{fact})=d(x_{claim},x_{fact}) \tag{1}\] where \(d(\cdot)\) is a similarity score, which we take to be the \(\mathbb{L}_{2}\) norm. Then, given a claim \(x_{i}\) and every fact in the knowledge base, \(X_{F}\), we can compute the relevance scores \(S_{i}=\{f_{\text{score}}(x_{i}^{LM},x_{j}^{F})\mid\forall\text{j}=1,\,2,\,\ldots,\,\lvert\mathbb{V}\rvert\}\). Then, the set of most relevant facts corresponding to \(x_{i}\) is defined as: \[F_{i}^{+}=\text{top-rank}(S_{i},\text{K}) \tag{2}\] where top-rank(\(\cdot\), K) returns the indices of the top K items in a set. 
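To make the selection step concrete, the following is a minimal sketch of Eqs. (1)–(2). Since the paper takes \(d(\cdot)\) to be the \(\mathbb{L}_{2}\) norm, we read relevance here as a negated distance so that larger scores mean more related facts; all shapes and names are illustrative assumptions rather than the paper's actual implementation.

```python
import torch

def score_facts(claim_emb: torch.Tensor, fact_embs: torch.Tensor) -> torch.Tensor:
    # Eq. (1): relevance of every fact to one claim, read as a negated
    # L2 distance so that larger scores mean more related facts.
    # claim_emb: (d,), fact_embs: (num_facts, d)
    return -torch.cdist(claim_emb.unsqueeze(0), fact_embs).squeeze(0)

def top_k_facts(scores: torch.Tensor, k: int = 5) -> torch.Tensor:
    # Eq. (2): indices of the top-K most relevant facts (F_i^+).
    return torch.topk(scores, k).indices

# Hypothetical usage with random embeddings of matching dimensionality.
claim = torch.randn(512)            # one row of X_LM after global average pooling
facts = torch.randn(10_000, 512)    # X_F produced by the knowledge model
positive_idx = top_k_facts(score_facts(claim, facts), k=5)
```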
We can now obtain the embeddings of the positive facts, \(X_{F}^{+}\subset X_{F}\), and the corresponding scores, \(S^{+}\subset S\), by using the indices of the top K facts according to the scores \(S\). ### Generation of negative instances In Section 3.2 we described the process of obtaining both the positive fact embeddings and scores, \(X_{F}^{+}\) and \(S^{+}\), respectively. In this section we explain how to harness the graph structure of the knowledge base to produce corresponding negative signals for contrastive learning. In order to produce a negative set for claim \(x_{i}\), herein denoted as \(\mathcal{N}_{i}\), we take inspiration from recent advances in graph contrastive learning (e.g. Xia et al. (2022); Yang et al. (2022); Rony et al. (2022)), and propose to generate two sets of negative instances: in-batch negatives and in-knowledge-base negatives, denoted as \(\mathcal{N}_{i}^{in}\) and \(\mathcal{N}_{i}^{kb}\), respectively. Our approach aims to generate negative samples that are factually false while preserving the contextual meaning of the entities, so that meaningful negative samples are fetched. **In-batch negatives.** To generate in-batch negatives we perform a random perturbation of the entities in the set of positive facts \(F_{i}^{+}\) for a given claim \(x_{i}\). Formally, let us define the set of triples in the positive set of claim \(i\) as \(T_{i}=\{(h_{j},r_{j},t_{j})\mid j=1,2,\ldots,|F_{i}^{+}|\}\), where \(h\), \(r\), \(t\) represent head, relation and tail, respectively. Our goal is to generate \(\mathcal{M}\) negative samples in each given batch \(\mathcal{B}\). For each triple \(t_{h,r,t}\) in \(T_{i}\), we decide in a probabilistic manner whether the perturbation is done on the head or the tail of the triple. For this, let us define a random variable \(perturb\_head\sim\text{Bern}(p_{head})\) sampled from a Bernoulli distribution with parameter \(p_{head}\), dictating whether the head of a triple should be perturbed, with probability \(p_{head}\), or the tail otherwise. Then, for each triple \(t_{h,r,t}\), we generate a negative triple \(t_{h^{\prime},r,t}\) or \(t_{h,r,t^{\prime}}\), by altering the head (\(perturb\_head=1\)) or the tail (\(perturb\_head=0\)), respectively, such that the new head, \(h^{\prime}\), or tail, \(t^{\prime}\), is sampled from \(\varepsilon\) uniformly. To provide semantically meaningful negatives, we enforce the entity type of the randomly sampled head/tail to be of the same type as the one in \(t_{h,r,t}\). **In-knowledge-base negatives.** Given the nature of the in-batch negative generation process, the negative triples are bound to be semantically similar to the corresponding positive triples, and hence close by in the feature space. Therefore, this bias leads to under-exploration of other parts of the knowledge base feature space. In order to alleviate this issue, we propose to add randomly-sampled facts from the knowledge base to the negative set. Specifically, given the knowledge base \(G\), we sample \(\mathcal{M}\) triples that are at least \(H\) hops away from \(F_{i}^{+}\). This encourages the negative generation procedure to dynamically explore other parts of the knowledge base. 
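A minimal sketch of this negative-generation procedure might look as follows. The type-lookup tables and the pre-filtered pool of far-away triples are hypothetical helpers introduced for illustration, not part of any released code.

```python
import random

def in_batch_negative(triple, entity_type, entities_by_type, p_head=0.5):
    # One in-batch negative for a positive triple (h, r, t): a Bernoulli
    # coin decides whether the head or the tail is corrupted, and the
    # replacement is drawn uniformly among entities of the same type.
    h, r, t = triple
    if random.random() < p_head:                          # perturb_head = 1
        h = random.choice(entities_by_type[entity_type[h]])
    else:                                                 # perturb_head = 0
        t = random.choice(entities_by_type[entity_type[t]])
    return (h, r, t)

def negative_pool(positives, faraway_triples, entity_type, entities_by_type, m=4096):
    # N_i = in-batch negatives U in-knowledge-base negatives (Eq. 3).
    # `faraway_triples` is assumed pre-filtered to triples at least
    # H hops away from the positive sub-graph.
    in_batch = [in_batch_negative(tr, entity_type, entities_by_type)
                for tr in positives]
    in_kb = random.sample(faraway_triples, min(m, len(faraway_triples)))
    return in_batch + in_kb
```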
To obtain the final negative set for claim \(x_{i}\), we join both the set of in-batch negatives \(\mathcal{N}_{i}^{in}\) and the set of in-knowledge-base negatives \(\mathcal{N}_{i}^{kb}\) as: \[\mathcal{N}_{i}=\mathcal{N}_{i}^{in}\cup\mathcal{N}_{i}^{kb} \tag{3}\] Finally, we obtain the embeddings of the negative set, \(X_{F}^{-}\), from the knowledge model, and the negative scores from the scoring module as \(S^{-}=\{f_{\text{score}}(x_{i}^{LM},x_{j}^{F^{-}})\mid\forall\text{j}=1,\,2,\,\ldots,\lvert\mathcal{N}_{i}\rvert\}\). ### Claim-Fact Matching via Language Model Distillation Once we have the positive and negative fact embeddings and scores, they can be used for the distillation process. In particular, we seek to learn a low-dimensional embedding that "distills" the feature correspondences of a pre-trained language model between the textual claims and the knowledge base features produced by the knowledge model. To achieve this, we propose a loss function composed of 3 terms: a claim-fact distillation loss, \(\mathcal{L}_{distill}\), an intra-sample contrastive loss, \(\mathcal{L}_{intra}\), and a scoring loss, \(\mathcal{L}_{scoring}\). **Claim-Fact Distillation.** To transfer the feature correspondences from the pre-trained language model to the knowledge model, we propose a feature-based (Zagoruyko & Komodakis, 2017) claim-fact distillation loss. Specifically, we propose the following distillation loss: \[\mathcal{L}_{distill}\!=\!\sum_{j\in F^{+}}\|\frac{F_{KM}^{j}}{\|F_{KM}^{j}\|_{2}}-\frac{F_{LM}^{j}}{\|F_{LM}^{j}\|_{2}}\|_{p}. \tag{4}\] where \(F_{KM}^{j}\) and \(F_{LM}^{j}\) are respectively the knowledge model (student) and language model (teacher) feature representations for each fact \(j\) in the positive fact set, \(F^{+}\). \(p\) refers to the norm type, and we use \(p=2\) for \(\mathbb{L}_{2}\)-normalized features. **Intra-Sample Contrastive Loss.** The intra-sample contrastive loss derives from the standard contrastive loss. The aim of the contrastive loss is to learn representations by discriminating the positive instance among negative samples. For instance, in MoCo (He et al., 2020), two views, \(x\) and \(x^{\prime}\), of one input image are obtained using augmentation, and an encoder \(f_{q}\) and momentum encoder \(f_{k}\) are used to generate embeddings of the positive pairs, such that \(q=f_{q}(x)\) and \(k=f_{k}(x^{\prime})\). In this case, the contrastive loss can be defined as: \[\mathcal{L}_{contrastive}\!=\!-\log\frac{\exp(\mathbf{q}\cdot\mathbf{k}^{+}/\tau)}{\sum_{i\in N}\exp(\mathbf{q}\cdot\mathbf{k}_{i}/\tau)}. \tag{5}\] We extend the contrastive loss function by replacing the query, \(q\), in the original formulation with the centroid of the positive facts, which can be seen as the positive facts sub-graph embedding, denoted as \(\hat{X}^{F^{+}}\). We calculate the positive sub-graph embedding as the average of the positive fact embeddings. Then, we contrast the query with respect to each of the individual positive (numerator) and negative (denominator) facts, as follows: \[\mathcal{L}_{intra}\!=\!-\log\frac{\sum_{i\in F^{+}}\exp(\hat{X}^{F^{+}}\cdot X_{i}^{F^{+}}/\tau)}{\sum_{j\in F^{-}}\exp(\hat{X}^{F^{+}}\cdot X_{j}^{F^{-}}/\tau)}. \tag{6}\] where \(\tau\) is the temperature parameter, used during training to smooth the logits distribution. The rationale of \(\mathcal{L}_{intra}\) is to pull the positive fact embeddings close to the sub-graph centroid while pushing away the negative fact embeddings. 
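A hedged PyTorch-style sketch of Eqs. (4) and (6) follows. Tensor shapes are assumptions, and the per-fact teacher features \(F_{LM}^{j}\) are taken as given, following the equation as written.

```python
import torch
import torch.nn.functional as F

def claim_fact_distillation(f_km: torch.Tensor, f_lm: torch.Tensor, p: int = 2):
    # Eq. (4): L2-normalize student (knowledge model) and teacher
    # (language model) features over the positive facts, then sum the
    # p-norms of their differences. f_km, f_lm: (K, d).
    diff = F.normalize(f_km, dim=-1) - F.normalize(f_lm, dim=-1)
    return diff.norm(p=p, dim=-1).sum()

def intra_sample_loss(pos: torch.Tensor, neg: torch.Tensor, tau: float = 0.1):
    # Eq. (6): contrast the positive sub-graph centroid against the
    # individual positive (numerator) and negative (denominator) facts.
    # pos: (K, d), neg: (M, d).
    centroid = pos.mean(dim=0)                 # \hat{X}^{F^+}
    pos_logits = pos @ centroid / tau          # (K,)
    neg_logits = neg @ centroid / tau          # (M,)
    # -log( sum exp(pos) / sum exp(neg) ), computed stably.
    return -(torch.logsumexp(pos_logits, dim=0) - torch.logsumexp(neg_logits, dim=0))
```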
**Scoring Loss.** The scoring loss is a variant of the conventional pair-wise ranking loss (Chen et al., 2009). Ranking losses are used to evaluate the performance of a learned ranking function. In this work, we propose the \(\mathcal{L}_{scoring}\) loss function to rank the positive facts, \(F^{+}\), above the negative facts, \(F^{-}\), for a given claim \(x_{i}\). In particular, we minimize the following loss: \[\mathcal{L}_{scoring}=\sum_{i\in F^{+}}\sum_{j\in F^{-}}\ \max\left(0,\gamma+f_{\text{score}}(x^{LM},x_{i}^{F^{+}})-f_{\text{score}}(x^{LM},x_{j}^{F^{-}})\right) \tag{7}\] where \(\gamma\) is a margin factor. Since \(f_{\text{score}}\) is an \(\mathbb{L}_{2}\) distance, minimizing \(\mathcal{L}_{scoring}\) encourages the positive facts to be ranked as more relevant to the claim than the negative facts. More explicitly, it optimizes the model parameters so that positive facts, \((h,r,t)\in F^{+}\), are scored as closer to the claim than negative facts, \((h^{\prime},r^{\prime},t^{\prime})\in F^{-}\), by at least the margin \(\gamma\). Finally, SFAVEL's full loss is: \[\mathcal{L}_{total}=\lambda_{distill}\mathcal{L}_{distill}+\lambda_{intra}\mathcal{L}_{intra}+\lambda_{scoring}\mathcal{L}_{scoring} \tag{8}\] where \(\lambda_{distill}\), \(\lambda_{intra}\), \(\lambda_{scoring}\in\mathbb{R}\). In practice, we found that a ratio of \(\lambda_{distill}\approx 2\lambda_{intra}\approx 2\lambda_{scoring}\) led to good experimental results. ## 4 Experiments In this section, we present a comparative study of the results of our proposed method on standard benchmarks for fact verification, as well as ablation studies on the most relevant components. We first describe the datasets, evaluation and training settings. Next, we discuss extensive experiments on our method for the task of fact verification. Finally, we run a set of ablation studies to evaluate the impact of the most important components of our proposed framework. ### Implementation details **Datasets and evaluation.** We use the FEVER (Thorne et al., 2018) dataset for all our experiments and comparisons against previous methods. For pre-training we use the official FEVER training set. For providing the performance comparisons against previous work, we use the official FEVER test set. In our ablation studies, we employ the official FEVER validation split. To evaluate the learning performance in a low-data regime, we randomly sample 1%, 5% or 10% of the training data. As the knowledge base, we use Wikidata5m (Wang et al., 2021). We provide some examples of claims from FEVER in Section A.1 of the Appendix. **Pretraining.** Seven models with a variety of sizes are used as pre-trained language models: T5-Small (Raffel et al., 2020), DeBERTaV3 (He et al., 2023), XLNet (Yang et al., 2020), GPT-2 (Radford et al., 2019), RoBERTa (Liu et al., 2019), BERT (Devlin et al., 2019) and Transformer-XL (Dai et al., 2019). The pre-trained language models are kept frozen during pre-training with our method. The officially released weights from HuggingFace (Wolf et al., 2020) are used to initialize the pre-trained language models for fair comparisons. Pre-training is run for a total of 1000 epochs. During training we use an RGAT as the knowledge model, with 3 convolutional layers and a hidden size of 512. The projector from node embeddings to triple embeddings is an MLP with the same dimensionality as the pre-trained language model sentence embedding size. 
The model is trained with the SGD optimizer with momentum 0.9 and weight decay 0.0001. The batch size is set to 512 over 4 A100 GPUs, and the coefficients for the different losses are \(\lambda_{intra}=\lambda_{scoring}=1\), \(\lambda_{distill}=2\). We set the temperature \(\tau\) = 0.1. We use \(K\) = 5 for the number of facts to keep after scoring. The number of negative instances used in the negative pool for contrastive learning is set to \(M\) = 4096. **Linear Probe.** In order to evaluate the quality of the distilled claim-fact matching features, we follow common evaluation protocols (Gansbeke et al., 2021; Chen et al., 2020) for measuring transfer learning effectiveness. Specifically, we train a linear classifier to perform label assignment to claims (see Figure 1(b) for an example illustration). The classifier is trained for 200 epochs, using the SGD optimizer with 20 as the initial learning rate. The only purpose of this linear probe is to evaluate the quality of the features; it is not part of the SFAVEL training procedure. ### Results We summarize our main results on the FEVER fact verification benchmark in Table 1. Our method significantly outperforms the prior state of the art, both supervised and unsupervised. In particular, SFAVEL improves by +8.2% on label accuracy in the test set when using a simple linear probe and a frozen backbone pre-trained using our method. Notably, even though our method has been trained without any data annotations, it is capable of outperforming the best supervised method (ProoFVer) by +8.98% label accuracy. These experiments demonstrate the benefits of our task-specific unsupervised framework for learning rich feature representations for claim-fact matching. Furthermore, following previous works in contrastive learning (Chen et al., 2020b), we evaluate the proposed method by distilling 3 different language model backbones (T5-Small, RoBERTa, Transformer-XL) and fine-tuning in a low-data setting by using 1%, 5% and 10% of the labeled data. As shown in Figure 2, our method is capable of achieving on-par performance with recent methods despite only fine-tuning with 1% of the data, reaching 71.82% and 74.22% test set accuracy with the RoBERTa and Transformer-XL backbones, respectively. When using 5% of the labelled data, SFAVEL surpasses the previous state-of-the-art on the FEVER benchmark. This experiment highlights the high-quality features our framework is capable of learning for claim-fact matching, allowing high accuracy even when only a few labelled data points are available. ### Ablation studies In the following section, we provide several ablation studies for our proposed approach. All experiments and results are performed on the FEVER validation set with the Transformer-XL as language model backbone unless explicitly stated otherwise. **Pre-trained Language Model.** To understand the impact of the pre-trained language model selected as the distillation backbone, we perform an ablation study and report the results in Table 2. We analyze the effect of using several different language models in SFAVEL, namely T5-Small, DeBERTaV3, XLNet, GPT-2, RoBERTa, BERT and Transformer-XL. We choose this particular set of language models as they are diverse in terms of their number of parameters. The smallest language model in our experiments is T5-Small (60 million parameters), with the biggest LM being Transformer-XL (257 million parameters). This gives some insight into how the language representation capabilities of each of the models affect the distillation effectiveness when using SFAVEL. 
\begin{table} \begin{tabular}{c|c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multirow{2}{*}{**Unsupervised**} & \multicolumn{2}{c|}{**Dev**} & \multicolumn{2}{c}{**Test**} \\ \cline{3-6} & & **LA** & **Fever Score** & **LA** & **Fever Score** \\ \hline GEAR (Zhou et al., 2019) & ✗ & 74.84 & 70.69 & 71.60 & 67.10 \\ KGAT (Liu et al., 2021) & ✗ & 78.29 & 76.11 & 74.07 & 70.38 \\ Di Liello et al. (2022) & ✓ & 81.21 & - & 74.39 & - \\ GERE (Chen et al., 2022) & ✗ & 79.44 & 77.38 & 75.24 & 71.17 \\ CorefBERT (Ye et al., 2020) & ✗ & - & - & 75.96 & 72.30 \\ DREAM (Zhong et al., 2020) & ✗ & 79.16 & - & 76.85 & 70.60 \\ ProoFVer (Krishna et al., 2021) & ✗ & 80.74 & 79.07 & 79.47 & 76.82 \\ Jobanputra (2019) & ✓ & 80.20 & - & 80.25 & - \\ \hline SFAVEL (Ours) & ✓ & **89.51** & **87.32** & **88.45** & **85.23** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance on the FEVER benchmark (label accuracy and FEVER score in %) of our proposed pre-training approach after fine-tuning a linear classification probe on the FEVER benchmark. In this experiment we use the Transformer-XL as backbone. Underlined performances indicate the top score for a particular metric; bold indicates the overall best method. We show that SFAVEL outperforms previous methods, both supervised and unsupervised. We find that the Transformer-XL is the best feature extractor of the list and leads by a significant margin in terms of accuracy. However, we note that even the smallest backbone (T5-Small; 60M parameters) achieves, although modestly, performance greater than the previous state-of-the-art (+0.54% accuracy). **Influence of \(K\) in fact selection.** We inspect the impact of \(K\) in fact selection after scoring on model performance. As shown in Figure 3, the results are consistent for a range of \(K\) (\(K\) = 1, 5, 10, 20, 30). In particular, we observe a decrease in classification accuracy with \(K\) = 10, 20, 30 compared with \(K\) = 5. We attribute this decrease to the factual noise introduced when \(K\) becomes large, where irrelevant information is used for verifying the specific claim. In contrast, with \(K\) = 1, the performance drop is caused by a lack of information, as only a single fact is used to check a claim. Avoiding this is critical in settings where multiple pieces of evidence are required for reasoning, as is the case for FEVER. **Loss function components.** We evaluate the different loss functions described in Section 3.4 and provide the results in Table 3. In particular, we investigate suppressing particular components of the loss function, such as the claim-fact distillation loss, the intra-sample contrastive loss, and the scoring loss. To do so, we set their respective \(\lambda\) factors to 0, effectively nullifying their influence during training. We find that these loss components lead to significant performance decreases when removed, therefore justifying our architectural decisions. ## 5 Conclusion This paper proposes a new self-supervised distillation method, named SFAVEL, which aims to produce high-quality features for claim-fact matching in the context of fact verification tasks. We have found that modern self-supervised language model backbones can be distilled into smaller knowledge-aware models to yield state-of-the-art unsupervised fact verification. Our approach achieves this by introducing a novel contrastive loss that leverages inductive biases in the fact verification task and exploits them for accurate and entirely unsupervised claim-fact matching. 
We show that SFAVEL yields a significant improvement over the prior state-of-the-art, over both unsupervised and supervised methods, on the FEVER fact verification challenge (+8% accuracy). Finally, we justify the design decisions of SFAVEL by performing ablation studies over the most important architectural components. The proposed self-supervised framework is a general strategy for improving unsupervised fact verification, and we hope it will guide new directions in the unsupervised learning field.
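As a concrete companion to the linear-probe protocol of Section 4.1, here is a minimal sketch under stated assumptions: frozen SFAVEL features are already extracted, FEVER labels are three-way (SUPPORTED / REFUTED / NOT ENOUGH INFO), and no learning-rate schedule is shown.

```python
import torch
import torch.nn as nn

def train_linear_probe(feats: torch.Tensor, labels: torch.Tensor,
                       n_classes: int = 3, epochs: int = 200, lr: float = 20.0):
    # feats: (N, d) frozen claim features; labels: (N,) int64 class ids.
    # A single linear layer is trained with SGD (initial lr 20, as stated
    # in Section 4.1); the backbone itself is never updated.
    probe = nn.Linear(feats.size(1), n_classes)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(probe(feats), labels)
        loss.backward()
        opt.step()
    return probe
```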
2309.10131
Deep Prompt Tuning for Graph Transformers
Graph transformers have gained popularity in various graph-based tasks by addressing challenges faced by traditional Graph Neural Networks. However, the quadratic complexity of self-attention operations and the extensive layering in graph transformer architectures present challenges when applying them to graph based prediction tasks. Fine-tuning, a common approach, is resource-intensive and requires storing multiple copies of large models. We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning for leveraging large graph transformer models in downstream graph based prediction tasks. Our method introduces trainable feature nodes to the graph and pre-pends task-specific tokens to the graph transformer, enhancing the model's expressive power. By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies, making it suitable for small datasets and scalable to large graphs. Through extensive experiments on various-sized datasets, we demonstrate that deep graph prompt tuning achieves comparable or even superior performance to fine-tuning, despite utilizing significantly fewer task-specific parameters. Our contributions include the introduction of prompt tuning for graph transformers, its application to both graph transformers and message passing graph neural networks, improved efficiency and resource utilization, and compelling experimental results. This work brings attention to a promising approach to leverage pre-trained models in graph based prediction tasks and offers new opportunities for exploring and advancing graph representation learning.
Reza Shirkavand, Heng Huang
2023-09-18T20:12:17Z
http://arxiv.org/abs/2309.10131v1
# Deep Prompt Tuning for Graph Transformers ###### Abstract Graph transformers have gained popularity in various graph-based tasks by addressing challenges faced by traditional Graph Neural Networks. However, the quadratic complexity of self-attention operations and the extensive layering in graph transformer architectures present challenges when applying them to graph based prediction tasks. Fine-tuning, a common approach, is resource-intensive and requires storing multiple copies of large models. We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning for leveraging large graph transformer models in downstream graph based prediction tasks. Our method introduces trainable feature nodes to the graph and pre-pends task-specific tokens to the graph transformer, enhancing the model's expressive power. By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies, making it suitable for small datasets and scalable to large graphs. Through extensive experiments on various-sized datasets, we demonstrate that deep graph prompt tuning achieves comparable or even superior performance to fine-tuning, despite utilizing significantly fewer task-specific parameters. Our contributions include the introduction of prompt tuning for graph transformers, its application to both graph transformers and message passing graph neural networks, improved efficiency and resource utilization, and compelling experimental results. This work brings attention to a promising approach to leverage pre-trained models in graph based prediction tasks and offers new opportunities for exploring and advancing graph representation learning. 1 University of Maryland ## Introduction The remarkable success of transformers in Natural Language Processing [23] and Computer Vision [17] has led to their increasing popularity in graph applications. In recent years, graph transformers have been widely adopted in various graph-based tasks [16, 15, 14, 13]. Graph transformers address key challenges faced by traditional Graph Neural Networks (GNNs), such as limited expressiveness [12, 13], over-smoothing [1] and over-squashing [1], by leveraging highly expressive global self-attention modules. Instead of introducing graph structure bias into intermediate layers, graph transformers encode structural and positional information of the graph into the input node features [3, 14, 15]. The quadratic complexity of the self-attention operation, combined with the extensive layering in graph transformer architectures, presents a significant challenge when applying them to graph prediction tasks. This challenge arises from the potential for overfitting to small datasets, and even reduced-parameter versions may lack the necessary representational richness for complex graph datasets, such as those involved in molecular property prediction. To address this issue, a viable solution is to adopt an approach similar to the application of large language models to downstream NLP tasks [1, 16]. This involves pre-training large graph transformer models on extensive datasets and subsequently fine-tuning them on smaller datasets. While fine-tuning can yield satisfactory results [16], it presents a resource-intensive approach due to the need to update the entire parameter set of the large graph transformer model. 
Furthermore, fine-tuning necessitates storing multiple copies of the same large model for different downstream tasks, which can be especially challenging and even prohibitive on smaller devices. In contrast to fine-tuning, prompt tuning, originally proposed for natural language processing (NLP) tasks, offers an alternative solution. Prompt tuning involves freezing all parameters of a pre-trained model and only updating either discrete [16, 17] or continuous [18, 19] lightweight tokens that are added to the inputs. Despite the success of prompt tuning in the field of NLP, the application of this technique to graph transformers has yet to be experimentally explored. Motivated by the concept of prompt tuning, we introduce a novel approach called _Deep Graph Prompt Tuning (DeepGPT)_, which serves as an alternative to fine-tuning for leveraging large graph transformer models in downstream graph prediction tasks. Fig. 1 illustrates the setup of a typical downstream graph prediction task, where an input graph is transformed into a final graph representation vector as the output. Assuming we have access to a pre-trained large graph transformer network trained on a different dataset, DeepGPT begins by adding a continuous task-specific graph _prompt token_ to the feature vectors of all nodes within the input graph. Subsequently, the modified input graph is fed into the graph transformer. Although this approach focuses on modifying the graph input level only, it theoretically allows for approximating various combinations of node-level, edge-level, and graph-level transformations within the architectures of existing pre-trained GNN models [14]. Additionally, we pre-pend continuous, layer- and task-specific _prefix tokens_ to the embeddings of each layer in the graph transformer network. This enables the self-attention module of each transformer layer to attend to the trainable task-specific tokens as if they were originally part of the sequence of node embeddings. It also increases the expressive power of our DeepGPT method. While our approach is specifically designed for graph transformers with global attention modules, it can be viewed as introducing a set of new nodes with trainable features to the graph and connecting them to all pre-existing nodes in the context of Message Passing Graph Neural Networks (MPGNNs). In contrast to fine-tuning, our approach involves freezing all parameters of the pre-trained graph transformer and exclusively tuning the task-specific tokens that are added. This allows for the practicality of storing a single copy of the pre-trained transformer architecture alongside the task-specific continuous prompts for different downstream tasks. Through extensive experiments conducted on various-sized OGB and MoleculeNet datasets, we have demonstrated that our approach delivers comparable, and in some cases even superior, performance compared to fine-tuning, despite utilizing fewer than 0.5% of task-specific parameters. This significant reduction in the number of free parameters proves advantageous for small datasets as it mitigates the risk of overfitting. Furthermore, our method enables the pre-trained graph transformer to scale effectively to large graphs, as the required computational resources are substantially reduced. Our main contributions can be summarized as follows: 1. We propose a novel prompt tuning method specifically designed for graph transformers, which, to the best of our knowledge, is the first of its kind in the field. 2. 
Our approach, designed for graph transformers with global attention modules, is equivalent to introducing additional trainable feature nodes connected to existing nodes within the context of MPGNNs, making it applicable to both Graph Transformers and MPGNNs. 3. Our approach significantly improves the efficiency and resource utilization of tuning a pre-trained graph transformer for downstream tasks. It eliminates the need to store separate copies of a large model for different graph prediction tasks, making it a more streamlined solution. Additionally, our method is well-suited for both small datasets and scales effectively to handle datasets with large graphs, addressing the limitations of graph transformers in dealing with such scenarios. 4. Our extensive experiments demonstrate that our method achieves performance that is on par with, and at times surpasses, fine-tuning, despite utilizing significantly fewer task-specific parameters. Moreover, our method demonstrates superiority over lightweight fine-tuning, where only the classification head is updated. ## Related Work ### Fine-tuning Strategies for Graph Neural Networks Several approaches have been explored in fine-tuning graph neural networks. For instance, ChemBERTa [12] adopts an NLP self-supervised pre-training strategy inspired by RoBERTa [13], where a portion of tokens in the SMILES string representation of PubChem graphs are masked, and the model predicts them from the other tokens. [15] propose a motif-based generative pre-training framework, training GNNs to make topological and label predictions. Another work by [16] introduces a molecular graph pre-training model with self-supervised tasks at the node, edge, and graph levels, including contextual property prediction and graph-level motif prediction. [13, 14] design a generative self-supervised strategy that leverages molecular descriptors and fingerprints to guide the model in capturing structural and semantic information from large-scale unlabeled molecular graph datasets. Graph Contrastive Learning (GCL) objectives have also been utilized in several studies [23, 24, 25] to enhance the pre-training process. Additionally, [11] pre-train their proposed Graphormer model on large-scale OGB datasets to capture rich representations, followed by fine-tuning for specific target tasks. ### Prompt Tuning for Graph Neural Networks Although there has been an extensive amount of research into both discrete [14, 15, 16] and continuous [13, 12] prompt tuning for NLP, the application of prompts to the graph domain is relatively unexplored. [23] propose a graph prompting function to transform a downstream node prediction task into an edge prediction task similar to the pretext masked-edge prediction used to pre-train a large GNN. [14] propose Graph Prompt (GP) as a universal graph prompt tuning method for existing pre-trained GNNs. They also introduce Graph Prompt Features (GPF) as a concrete instance of GP. GPF is a trainable token added to the node features of the input graph and updated during the tuning of the pre-trained model on the downstream task. ## Preliminary In this section, we will cover the fundamentals of graph neural networks (GNNs) and graph transformers, as well as provide an overview of the "pre-train, fine-tune" paradigm. ### Graph Neural Networks Let us consider an undirected graph \(G=(V,E)\), where \(V=\{1,\cdots,n\}\) represents the set of nodes and \(E\) represents the set of edges. 
We assume that each node \(i\) is associated with a \(d\)-dimensional feature vector \(x_{i}\in R^{d}\) for \(i=1,\cdots,n\). Graph Neural Networks (GNNs) employ a message-passing strategy [10] to learn node representations by iteratively aggregating the representations of neighboring nodes. Formally, the representation of node \(i\) at the \(k\)-th layer is denoted as \(h_{i}^{k}\), with \(h_{i}^{0}=x_{i}\). The aggregation and combination operation is defined as follows: \[h_{i}^{k}=\text{AGG-COMB}(h_{j}^{k-1}:j\in N(i)\cup\{i\};\theta^{k}) \tag{1}\] Here, \(N(i)\) represents the set of nodes adjacent to node \(i\), and \(\theta^{k}\) is the parameter set of the \(k\)-th layer. The AGG-COMB operation collects the embeddings of neighboring nodes and combines them using sum, mean, or max functions to generate the embeddings for node \(i\). In graph representation learning tasks, a final READOUT function is employed to combine the node representations of a \(K\)-layer GNN into the graph embedding \(h_{G}\): \[h_{G}=\text{READOUT}(h_{i}^{K}:i=1,\cdots,n) \tag{2}\] Graph Neural Networks have demonstrated superior performance in various graph prediction tasks, including node-level [11], edge-level [10], and graph-level [12] tasks. ### Graph Transformers Unlike conventional GNNs, Transformers [20] do not explicitly utilize the graph structure to learn representations. Instead, they treat the graph as a set of nodes and infer similarities between nodes by employing self-attention mechanisms on node features. A typical transformer block consists of a multi-head attention module followed by a feed-forward network. The input node feature matrix \(X\) is linearly projected into Query (Q), Key (K), and Value (V) matrices, represented as \(Q=XW_{Q}\), \(K=XW_{K}\), and \(V=XW_{V}\), respectively. The self-attention activations are computed as follows: \[Attn(X)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{Q}}}\right)V \tag{3}\] Here, \(d_{Q}\) represents the dimension of \(Q\). The output of the transformer layer is obtained by applying a feed-forward network (FFN) to the sum of the input node features and the self-attention activations: \[X=\text{FFN}(X+Attn(X)) \tag{4}\] Multiple transformer layers are usually stacked together to form a transformer network. Since self-attention is a permutation-invariant operator, the transformer produces the same result regardless of changes in the graph structure, as long as the node features remain unchanged. To incorporate the structural information of the graph into the transformer architecture, effective absolute encodings are designed. These encodings are structural and positional representations of the graph that are added or concatenated to the input features to enhance the expressiveness and generalization capability of Graph Transformers. Examples of absolute encodings include Centrality Encoding [21], RWPE [22], and Laplacian Positional Encoding [17]. ### Pre-training and Fine-tuning of GNNs Supervised learning of node and graph representations often requires a large amount of annotated data, which can be challenging to obtain. Consequently, the "pre-train, fine-tune" paradigm has gained significant attention. In this paradigm, neural networks are pre-trained on pretext tasks, such as Contrastive Learning [14, 23, 24], and then fine-tuned on downstream tasks. The process involves training a graph neural network \(f_{\theta}\) on a pre-training dataset \(D_{pt}\) by minimizing the pre-training loss \(L_{pt}\), resulting in the parameter set \(\theta_{pt}\). 
The downstream model parameters are initialized using \(\theta_{init}=\theta_{pt}\), and the prediction head of the GNN is replaced. Fine-tuning is performed by optimizing the downstream loss \(L\) on the downstream dataset \(D\) using the whole parameter set of \(f\) or a subset of it: \[\min_{\theta,\psi}\sum_{i}L(p_{\psi}(f_{\theta}(x_{i})),y_{i}) \tag{5}\] Here, \(p_{\psi}\) denotes the new prediction head of the network, and \(y_{i}\) corresponds to the ground truth of \(x_{i}\). ## Methodology The "unsupervised pre-train, fine-tune" framework faces a significant challenge due to the training objective gap between pretext and downstream tasks. This issue becomes more pronounced when dealing with large neural networks like transformers, as it requires storing separate copies of the pre-trained network for each downstream task, incurring high costs. Moreover, fine-tuning these massive models on large downstream datasets is time-consuming. To address these limitations, we draw inspiration from the success of continuous prompt-based learning in NLP [13, 14] and propose deep prompt tuning for Graph Transformers as an alternative approach for node/graph classification tasks. The framework, depicted in Figure 1, leverages continuous prompts to guide the graph transformer's learning process, reducing the need for separate network copies and significantly improving efficiency. Based on intuition from prompt tuning, we believe that providing a proper context in the form of graph embeddings can guide a graph transformer without altering its parameters. By incorporating relevant graph embeddings as context, the graph transformer can achieve higher accuracy in graph prediction tasks. This approach extends the concept of prompting beyond adding single nodes or edges and aims to find a context that influences the transformer's encoding of graphs and the generation of predictions. Rather than discrete alterations to the graph [21], we propose optimizing continuous graph embeddings as context, which can effectively propagate throughout the graph transformer's layers, striking a balance between expressiveness and computational feasibility. Our approach entails transforming both the input graph and the pre-trained graph transformer model. We combine graph prompts with deep prefix prompts. This involves incorporating graph prompt tokens at the input of the graph transformer and pre-pending prefix tokens to each transformer layer, in order to direct the graph transformer in solving downstream node/graph classification tasks by fine-tuning these task-specific tokens. We demonstrate the superiority of this approach over using solely graph prompts or prefix tokens, despite utilizing the same number of parameters. Through our experiments, we showcase the acceptable, and at times even superior, performance of our approach with significantly fewer parameters compared to traditional fine-tuning, where all network parameters are adjusted, or lightweight fine-tuning, which involves freezing the pre-trained model's backbone and training only the prediction head. ### Graph Prompt Tokens In our prompt tuning method, the first step involves incorporating graph prompt tokens into the input graph nodes. For a \(d\)-dimensional node feature vector \(x_{i}\in R^{d}\), we introduce a trainable \(d\)-dimensional prompt token \(p\in R^{d}\) specific to the task, which is added to each node of the input graph. 
This results in a modified feature matrix \(X^{(p)}\) derived from the original feature matrix \(X\) as follows: \[x_{i}^{(p)}=x_{i}+p \tag{6}\] Prior research by [10] has demonstrated that such prompt tokens possess the theoretical capability to approximate a wide range of complex node-level, edge-level, and graph-level transformations on most existing Graph Neural Networks (GNNs) under two conditions: utilizing a Graph Convolutional Network (GCN) as the underlying architecture and employing sum/mean as the READOUT operator. Formally, for an input graph \(G(X,A)\), with \(X\) representing the node features and \(A\) denoting the adjacency matrix, given an arbitrary transformation \(g\) and a pre-trained GNN \(f\), there exists a corresponding prompt token \(p^{*}\) satisfying: \[f(X^{p^{*}},A)=f(g(X,A)) \tag{7}\] Although the theoretical foundation of the graph prompt approach primarily focuses on Convolutional Graph Neural Networks (GNNs), we empirically demonstrate that the benefits of graph prompts extend to graph transformers as well. The shared principles between these two models, such as alignment with the READOUT operator and the introduction of task-specific information, contribute to the improved performance of graph transformers when employing the graph prompt technique.

Figure 1: Overview of our proposed deep graph prompt tuning framework. **Top**: Fine-tuning requires storing separate copies of the pre-trained graph transformer for each downstream task and updating all parameters of the model. This approach is memory-intensive, time-consuming, and cost-inefficient. **Bottom**: our proposed graph prompt tuning method. We introduce graph prompt tokens to the input graph representations and each transformer layer activation. Additionally, we pre-pend prefix tokens to all transformer layers. By freezing most of the pre-trained model's parameters and only updating the concatenated and added prompt tokens, we can fine-tune a large transformer on different downstream datasets while only storing the prompt tokens, minimizing memory requirements.

### Graph Transformer Prefix Tokens The second component of our approach involves the inclusion of prefix tokens at the beginning of the embeddings after each layer of the frozen transformer architecture, resulting in updated embeddings. Let \(PS_{idx}\) denote the sequence of prefix tokens, where \(p=|PS_{idx}|\) represents the length of the sequence. Assuming the dimension of the transformer embeddings is \(d\), and denoting the original transformer layer embeddings as \(E\), we utilize a soft prompt matrix \(P\in R^{p\times d}\) to generate the new embeddings \(E^{*}\) according to the following rule: \[E^{*}[i,:]=\begin{cases}P[i,:]&i\in PS_{idx}\\ E[i,:]&otherwise\end{cases} \tag{8}\] The resulting embedding matrix is then passed as input to the subsequent transformer layer. Importantly, while the prefix tokens vary with each layer, the tokens in earlier layers still have an impact on the embeddings of later layers. This is due to the influence of the prefix tokens on the input embeddings through the self-attention operation, enabling these tokens to propagate throughout the entire transformer architecture. ### Deep Graph Transformer Prompt Tuning The shallow approach of simply appending or adding prompt tokens to the input graph [11] faces two significant challenges. Firstly, the limited number of trainable parameters leads to unstable training and unsatisfactory results. 
Secondly, adding prompts only to the input graph has minimal impact on the deeper layers of the transformer. To address these limitations, we propose the Deep Graph Prompt Tuning (DeepGPT) technique, which involves adding a prompt token to the input of the graph transformer and pre-pending prefix prompts to each transformer layer embedding. Assuming a pre-trained graph transformer \(f_{\theta}\), a DeepGPT transformation \(T_{\phi}\), and a downstream dataset \(D=\{(G_{i},y_{i})\}_{i=1}^{n}\), we fine-tune the trainable parameter set \(\phi\) while keeping the parameters \(\theta\) of the pre-trained graph transformer frozen. This fine-tuning process aims to minimize the downstream loss \(L\), such as the binary cross-entropy loss, given by the following optimization objective: \[\min_{\phi}\sum_{i=1}^{n}L(f_{\theta}(T_{\phi}(G_{i})),y_{i}) \tag{9}\] ### Relation to Conventional Graph Neural Networks Although our work primarily focuses on graph transformer architectures, the construct of prompt tokens we propose is applicable universally. The graph prompt tokens can be incorporated into various graph neural network models without assuming a specific structure for the model \(f\). Similarly, in the context of Message Passing Graph Neural Networks (MPGNNs), the prefix tokens of the transformer architecture can be replaced by adding a set of new trainable nodes to the graph and connecting them with edges to all existing nodes, thereby achieving similar effects. ## Experiments ### Experimental Settings #### Datasets The graph transformer models in this study were pre-trained on the OGB-LSC [13] quantum chemistry regression dataset, known as PCQM4Mv2. After pre-training, we assess the efficacy of our approach using two established sets of machine learning datasets focused on molecular graphs: the Open Graph Benchmark (OGB) [13], which encompasses datasets of diverse sizes, covering a wide range of realistic tasks, and the MoleculeNet Benchmark [13], which comprises datasets for predicting molecular properties. These datasets include regression, single-label binary classification, and multi-label classification tasks, spanning various domains (For further details, please refer to the Appendix). #### Graph Transformer Architectures In our study, we investigate three graph transformer architectures that address the key challenge of incorporating structure-awareness into the attention mechanism when graphs are used as input. Firstly, [23] present Graphormer, which modifies the attention mechanism itself to integrate structural information. Secondly, [11] propose General Powerful and Scalable Graph Transformer (GraphGPS), which employs hybrid architectures, integrating GNNs. Thirdly, [11] introduce Line Graph Transformer (LiGhT), which utilizes positional encodings for graphs. While all three architectures share the self-attention module at their core, they exhibit fundamental differences in overall design, structure, and the usage of positional and structural encodings. We conducted experiments with two different settings of the GraphGPS network: one using regular self-attention transformers (GraphGPS small), and the other utilizing Performer [1] modules (GraphGPS large), which are Transformer architectures capable of accurately estimating regular (softmax) full-rank-attention Transformers. The Performer modules achieve this accuracy while employing linear space and time complexity instead of the quadratic complexity found in traditional methods. 
This allowed us to demonstrate the effectiveness of our proposed method on different self-attention modules. The choice of network is explicitly indicated in the result tables. #### Message-Passing GNN Baselines To further evaluate the effectiveness of our DeepGPT framework, we present the performance results of popular Message-Passing Graph Neural Networks (MPGNNs), namely GatedGCN [10], GINE [13], and PNA [14]. #### Training Details We employ the AdamW [12] optimizer across all architectures and datasets for both pre-training and evaluation. We utilize a learning rate schedule with a warm-up stage followed by cosine decay. We tune a hyper-parameter set including the learning rate, weight decay, and the prefix token size \(|PS|\). We perform 5-fold cross-validation for all graph transformer experiments to obtain a fair evaluation. All experiments are conducted on a Lambda machine with 8 NVIDIA RTX A6000 GPUs. (For all details, see the Appendix.) ### Results Tables 1 and 2 demonstrate the performance of our proposed method on the classification and regression benchmarks, respectively. The performance of DeepGPT on the OGB benchmarks is presented in Table 3. We compare DeepGPT to fine-tuning as well as MPGNN baselines on molecular graph tasks. **DeepGPT Across Tasks and Model Scales.** Based on the results, we observe that DeepGPT exhibits comparable performance and, in certain cases, even outperforms fine-tuning across various tasks. Table 10 (Appendix) displays the sizes of the graph transformer architectures utilized in our experiments. These results demonstrate that our method is adaptable and effective for a diverse range of graph transformer architectures, irrespective of their sizes. **DeepGPT Across Dataset Scales.** It is important to highlight that the downstream datasets used in this study consist of varying dataset sizes and graph sizes, as depicted in Figure 7 (Appendix). The results demonstrate the versatility of DeepGPT, as it proves to be effective for small datasets, while also scaling well to larger datasets. Another noteworthy aspect of graph transformer prompt tuning is its ability to address the limitations of many graph transformer models, which are often unsuitable for processing large graphs. By reducing the overhead associated with fine-tuning such models on large graphs, we achieve satisfactory results compared to fine-tuning. **Benchmarking DeepGPT.** The results obtained from our experiments provide compelling evidence of DeepGPT's remarkable performance, surpassing all MPGNN baselines across nearly every task evaluated. Furthermore, the findings offer a glimpse into the immense potential of prompt tuning large graph transformer models, suggesting that they have the capacity to gain wider adoption in graph prediction tasks. **Convergence Speed.** Overall, DeepGPT exhibits faster convergence during model training. Additionally, each DeepGPT epoch is usually quicker than fine-tuning (Figure 2). The increased average epoch duration of DeepGPT on LiGhT is attributed to the implementation of the original paper, which requires a loop for adding prompt tokens, impacting the overall speed. 
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{\#Param.} & \multicolumn{8}{c}{Classification Dataset (\(\uparrow\))} \\ \cline{3-10} & & BACE & BBBP\({}^{*}\) & ClinTox\({}^{*}\) & Estrogen\({}^{*}\) & MetStab\({}^{*}\) & SIDER & ToxCast\({}^{*}\) & Tox21 \\ \hline Graphormer (Ying et al., 2021) FT & 48M & 0.704 \(\pm\) 0.018 & 0.778 \(\pm\) 0.043 & 0.839 \(\pm\) 0.114 & 0.673 \(\pm\) 0.034 & 0.607 \(\pm\) 0.015 & 0.550 \(\pm\) 0.007 & 0.665 \(\pm\) 0.032 & 0.582 \(\pm\) 0.051 \\ Graphormer DeepGPT & 100K & 0.883 \(\pm\) 0.022 & 0.914 \(\pm\) 0.016 & 0.884 \(\pm\) 0.030 & 0.941 \(\pm\) 0.006 & 0.871 \(\pm\) 0.004 & 0.646 \(\pm\) 0.003 & 0.725 \(\pm\) 0.011 & 0.813 \(\pm\) 0.010 \\ GraphGPS (Rampášek et al., 2022) FT & 14M (\(\star\) - 1030) & 0.898 \(\pm\) 0.020 & 0.888 \(\pm\) 0.015 & 0.910 \(\pm\) 0.046 & 0.933 \(\pm\) 0.014 & 0.897 \(\pm\) 0.025 & 0.610 \(\pm\) 0.005 & 0.722 \(\pm\) 0.011 & 0.832 \(\pm\) 0.009 \\ GraphGPS DeepGPT & 50K (\(\star\) - 500K) & 0.892 \(\pm\) 0.022 & 0.901 \(\pm\) 0.015 & 0.907 \(\pm\) 0.044 & 0.944 \(\pm\) 0.010 & 0.899 \(\pm\) 0.019 & 0.609 \(\pm\) 0.020 & 0.735 \(\pm\) 0.005 & 0.832 \(\pm\) 0.012 \\ \hline LiGhT (Li et al., 2022) FT & 90M & 0.880 \(\pm\) 0.012 & 0.902 \(\pm\) 0.021 & 0.857 \(\pm\) 0.035 & 0.942 \(\pm\) 0.012 & 0.902 \(\pm\) 0.012 & 0.670 \(\pm\) 0.005 & 0.745 \(\pm\) 0.005 & 0.844 \(\pm\) 0.004 \\ LiGhT DeepGPT & 370K & 0.873 \(\pm\) 0.020 & 0.917 \(\pm\) 0.012 & 0.862 \(\pm\) 0.056 & 0.950 \(\pm\) 0.010 & 0.912 \(\pm\) 0.011 & 0.671 \(\pm\) 0.011 & 0.757 \(\pm\) 0.011 & 0.843 \(\pm\) 0.004 \\ \hline GatedGCN (Bresson and Laurent, 2017) & 2.8M & 0.833 \(\pm\) 0.013 & 0.887 \(\pm\) 0.025 & 0.893 \(\pm\) 0.041 & 0.919 \(\pm\) 0.008 & 0.548 \(\pm\) 0.018 & 0.599 \(\pm\) 0.015 & 0.683 \(\pm\) 0.005 & 0.807 \(\pm\) 0.011 \\ GINE (Hu et al., 2020b) & 1.2M & 0.599 \(\pm\) 0.045 & 0.613 \(\pm\) 0.024 & 0.559 \(\pm\) 0.044 & 0.492 \(\pm\) 0.024 & 0.540 \(\pm\) 0.016 & 0.584 \(\pm\) 0.025 & 0.629 \(\pm\) 0.025 & 0.714 \(\pm\) 0.021 \\ PNA (Corso et al., 2020) & 1.8M & 0.845 \(\pm\) 0.021 & 0.903 \(\pm\) 0.018 & 0.867 \(\pm\) 0.011 & 0.927 \(\pm\) 0.009 & 0.738 \(\pm\) 0.024 & 0.583 \(\pm\) 0.012 & 0.673 \(\pm\) 0.008 & 0.793 \(\pm\) 0.015 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the performance of DeepGPT, Fine Tuning (FT) and MPGNN baselines on classification benchmarks. \begin{table} \begin{tabular}{l c c} \hline \hline Model & \multicolumn{2}{c}{OGB Classification Dataset (\(\uparrow\))} \\ \cline{2-3} & MOLHIV (AUROC) & MOLPCBA (AP) \\ \hline Graphormer FT & 0.805 \(\pm\) 0.005 & 0.313 \(\pm\) 0.003 \\ Graphormer DeepGPT & 0.804 \(\pm\) 0.021 & 0.289 \(\pm\) 0.009 \\ \hline GraphGPS FT & 0.806 \(\pm\) 0.007 & 0.301 \(\pm\) 0.013 \\ GraphGPS DeepGPT & 0.801 \(\pm\) 0.015 & 0.297 \(\pm\) 0.020 \\ \hline LiGhT FT & 0.787 \(\pm\) 0.008 & 0.295 \(\pm\) 0.006 \\ LiGhT DeepGPT & 0.799 \(\pm\) 0.010 & 0.270 \(\pm\) 0.007 \\ \hline \hline GatedGCN & 0.809 \(\pm\) 0.016 & 0.264 \(\pm\) 0.021 \\ GINE & 0.679 \(\pm\) 0.055 & - \\ PNA & 0.782 \(\pm\) 0.013 & 0.257 \(\pm\) 0.006 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of the performance of DeepGPT, Fine Tuning (FT) and MPGNN baselines on the OGB classification benchmark. 
Figure 2: Comparison of convergence speed of DeepGPT and FT. DeepGPT converges faster and decreases training and inference epoch duration.

### Ablation Studies

**Prompt Token Contributions.** Table 4 demonstrates the impact of adding graph prompt tokens to deep prefix tuning. Generally, this addition leads to performance improvements even with the same number of parameters. We did not experiment with graph prompt tokens alone, because the resulting number of trainable parameters is too small to have a noticeable impact during model training. Additionally, DeepGPT outperforms lightweight tuning, where only the classification head of the pre-trained model is trained.

**Prompt Depth.** To assess the precise influence of the depth of the injected prompt tokens when a certain number \(k\) of layers is allocated for prompts, we inject them into different sequences of layers within the graph transformer model (a minimal sketch of this layer-wise injection follows this section). Figure 3 illustrates the results. Notably, injecting prompts into the middle layers generally produces more favorable outcomes, outperforming the other scenarios. We further investigate the effects of injecting prompts into the first or last \(k\) layers of the model. The results are shown in Figures 4 and 5. Overall, injecting tokens into the middle layers yields better performance, while injecting prompt tokens into the final layers negatively affects the results. Generally, the number of layers the prompt tokens are inserted into does not have a significant influence on the results. (For more architectures and datasets, see Appendix.)

**Prompt Length.** We also conducted experiments to analyze the effect of prompt length. Figure 8 illustrates the results; no clear monotone trend emerges as the prompt length increases, although adding more prompt tokens tends to improve performance in general. (For more architectures and datasets, see Appendix.)
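To make the layer-wise injection studied in the depth ablation concrete, the following is a minimal, hypothetical PyTorch sketch. It assumes a frozen backbone that exposes its transformer layers as a module list; all names (`DeepPromptedEncoder`, `inject_from`, `inject_to`) and initialization details are our illustrative assumptions, not the actual DeepGPT implementation.

```python
import torch
import torch.nn as nn

class DeepPromptedEncoder(nn.Module):
    """Prepend learnable prompt tokens to a chosen range of frozen layers."""

    def __init__(self, layers: nn.ModuleList, d_model: int,
                 prompt_len: int, inject_from: int, inject_to: int):
        super().__init__()
        self.layers = layers                          # frozen pre-trained layers
        for p in self.layers.parameters():
            p.requires_grad = False
        self.inject = range(inject_from, inject_to)   # layers receiving prompts
        self.prompts = nn.ParameterList(
            nn.Parameter(0.02 * torch.randn(prompt_len, d_model))
            for _ in self.inject
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_nodes, d_model) node embeddings for a graph batch
        n = h.size(1)
        for i, layer in enumerate(self.layers):
            if i in self.inject:
                p = self.prompts[i - self.inject.start]
                p = p.unsqueeze(0).expand(h.size(0), -1, -1)
                h = torch.cat([p, h[:, -n:]], dim=1)  # replace previous prompts
            h = layer(h)
        return h[:, -n:]                              # drop prompts before readout
```

Varying `inject_from`/`inject_to` reproduces the first-, middle-, and last-layer settings compared in Figures 3–5.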
2309.14357
Linear maps preserving parallel matrix pairs with respect to the Ky-Fan $k$-norm
Two bounded linear operators $A$ and $B$ are parallel with respect to a norm $\|\cdot\|$ if $\|A+\mu B\| = \|A\| + \|B\|$ for some scalar $\mu$ with $|\mu| = 1$. Characterization is obtained for bijective linear maps sending parallel bounded linear operators to parallel bounded linear operators with respect to the Ky-Fan $k$-norms.
Bojan Kuzma, Chi-Kwong Li, Edward Poon, Sushil Singla
2023-09-23T01:52:16Z
http://arxiv.org/abs/2309.14357v1
# Linear maps preserving parallel matrix pairs with respect to the Ky-Fan \(k\)-norm

###### Abstract.

Two bounded linear operators \(A\) and \(B\) are parallel with respect to a norm \(\|\cdot\|\) if \(\|A+\mu B\|=\|A\|+\|B\|\) for some scalar \(\mu\) with \(|\mu|=1\). Characterization is obtained for bijective linear maps sending parallel bounded linear operators to parallel bounded linear operators with respect to the Ky-Fan \(k\)-norms.

Keywords. Matrix, Ky-Fan \(k\)-norms, parallel pairs, bijective linear maps. AMS Classification. 15A60; 15A86.

## 1. Introduction

Let \(V\) be a normed space equipped with the norm \(\|\cdot\|\) over \(\mathbb{F}=\mathbb{C}\) or \(\mathbb{R}\). Suppose \(x,y\in V\). Then \(x\) is parallel to \(y\), denoted by \(x\|y\), if \(\|x+\mu y\|=\|x\|+\|y\|\) for some \(\mu\in\mathbb{F}\) with \(|\mu|=1\). We are interested in studying bijective linear maps \(T\colon V\to V\) preserving parallel pairs, i.e., \(T(x)\|T(y)\) whenever \(x\|y\). Note that a map of the form \(A\mapsto f(A)B\), where \(f\) is a linear functional and \(B\) is a fixed element, always preserves parallel pairs, so one needs to be careful in weakening the invertibility assumption. Evidently, if \(T\) is a multiple of a linear isometry for \(\|\cdot\|\), then \(T\) will preserve parallel pairs. However, the converse may not hold for a general norm. For instance, suppose \(\|\cdot\|\) is a strictly convex norm, i.e., \(\|x+y\|<\|x\|+\|y\|\) whenever \(x,y\) are linearly independent. Then \((\alpha x,\beta x)\) with \(x\in V\) and \(\alpha,\beta\in\mathbb{F}\) are the only parallel pairs and consequently, in strictly convex spaces, every linear map preserves parallel pairs. Denote by \(\mathbb{M}_{n}(\mathbb{F})\) the linear space of \(n\times n\) matrices over the field \(\mathbb{F}\). It turns out that if \(\|\cdot\|\) is the operator norm on \(\mathbb{M}_{n}(\mathbb{F})\), then a bijective linear map \(T\colon\mathbb{M}_{n}(\mathbb{F})\to\mathbb{M}_{n}(\mathbb{F})\) preserving parallel pairs must be a multiple of an isometry except for the \(2\times 2\) real case. In the \(2\times 2\) real case, there are additional bijective preservers of parallel pairs. These exceptional maps also preserve matrix pairs \((A,B)\) such that \(\|A+B\|=\|A\|+\|B\|\) in the real case, but not in the complex case; see [6]. Hence, if \(n>2\), then a bijective linear map preserving parallel pairs with respect to the operator norm has the form

\[A\mapsto\gamma UAV\quad\text{ or }\quad A\mapsto\gamma UA^{t}V, \tag{1.1}\]

where \(U,V\in\mathbb{M}_{n}(\mathbb{F})\) are unitary (orthogonal when \(\mathbb{F}=\mathbb{R}\)), \(A^{t}\) is the transpose of \(A\) with respect to a fixed orthonormal basis, and \(\gamma\) is a nonzero scalar. Let \(k\in\{1,\ldots,n\}\). Given \(A\in\mathbb{M}_{n}(\mathbb{F})\), denote by \(s_{1}(A)\geq\cdots\geq s_{n}(A)\) the singular values of \(A\), i.e., the nonnegative square roots of the eigenvalues of \(A^{*}A\). The Ky-Fan \(k\)-norm of \(A\in\mathbb{M}_{n}(\mathbb{F})\) is defined as \(\|A\|_{(k)}=s_{1}(A)+\cdots+s_{k}(A)\). Evidently, \(\|A\|_{(1)}=s_{1}(A)\) reduces to the operator (i.e., spectral) norm. When \(k=n\), \(\|A\|_{(n)}\) is known as the trace norm and corresponds to the dual of the spectral norm. In this paper, we characterize bijective linear maps preserving parallel pairs with respect to the Ky-Fan \(k\)-norm on \(\mathbb{M}_{n}(\mathbb{F})\). Since the result for \(k=1\) is known, we will focus on \(k>1\). We will prove our result for complex matrices in Section 2, and for real matrices in Section 3.
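Before turning to the proofs, we note that the definitions above are easy to probe numerically. The following illustrative sketch (ours, not part of the paper) computes the Ky-Fan \(k\)-norm from singular values and tests the parallelism condition by scanning unimodular scalars \(\mu\) over a finite grid, so it only approximates the supremum over \(|\mu|=1\).

```python
import numpy as np

# Illustrative sketch (not from the paper): the Ky-Fan k-norm and a
# discretized test of the parallelism condition ||A + mu B|| = ||A|| + ||B||.
def ky_fan(A: np.ndarray, k: int) -> float:
    return np.linalg.svd(A, compute_uv=False)[:k].sum()

def parallel(A: np.ndarray, B: np.ndarray, k: int, tol: float = 1e-8) -> bool:
    mus = np.exp(2j * np.pi * np.arange(720) / 720)   # grid of complex units
    best = max(ky_fan(A + mu * B, k) for mu in mus)
    return best >= ky_fan(A, k) + ky_fan(B, k) - tol

A = np.diag([3.0, 2.0, 1.0])
B = np.diag([1.0, 1.0, 0.0])
print(parallel(A, B, k=2))                          # True: mu = 1 works
print(parallel(A, np.diag([0.0, 0.0, 1.0]), k=2))   # False: norms attained on
                                                    # disjoint singular subspaces
```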
Our results can be easily applied to characterize linear bijections which preserve matrix pairs satisfying equality in the triangle inequality for a Ky-Fan \(k\)-norm.

## 2. Results on complex matrices

In this section, we prove the following.

**Theorem 2.1**.: _Let \(\|\cdot\|_{(k)}\) be the Ky-Fan \(k\)-norm on \(\mathbb{M}_{n}(\mathbb{C})\) for \(1<k\leq n\). Suppose \(T\colon\mathbb{M}_{n}(\mathbb{C})\to\mathbb{M}_{n}(\mathbb{C})\) is a bijective linear map which preserves parallel pairs, i.e.,_

\[A\|B\quad\text{ implies }\quad T(A)\|T(B)\]

_with respect to \(\|\cdot\|_{(k)}\). Then \(T\) is a scalar multiple of a linear isometry of \((\mathbb{M}_{n}(\mathbb{C}),\|\cdot\|_{(k)})\). More precisely, there exist \(\gamma>0\) and unitary \(U,V\in\mathbb{M}_{n}(\mathbb{C})\) such that \(T\) has the form_

\[X\mapsto\gamma UXV\qquad\text{ or }\qquad X\mapsto\gamma UX^{t}V.\]

By the above theorem, one can determine the structure of bijective linear maps preserving matrix pairs \(A,B\in\mathbb{M}_{n}(\mathbb{C})\) satisfying the equality in the triangle inequality, i.e., \(\|A+B\|=\|A\|+\|B\|\). Clearly, if a linear bijection \(T\) preserves such pairs, then it also preserves parallelism. To see this, if \(A\|B\), then for a complex unit \(\mu\) we have \(\|A+\mu B\|=\|A\|+\|\mu B\|\), hence \((A,\mu B)\) is a pair for which the triangle inequality becomes an equality, so that \(\|T(A)+\mu T(B)\|=\|T(A)\|+\|\mu T(B)\|\), i.e., \(T(A)\|T(B)\). Let us state this formally.

**Corollary 2.2**.: _Assume a linear bijection \(T\colon\mathbb{M}_{n}(\mathbb{C})\to\mathbb{M}_{n}(\mathbb{C})\) is such that \(\|A+B\|_{(k)}=\|A\|_{(k)}+\|B\|_{(k)}\) implies \(\|T(A)+T(B)\|_{(k)}=\|T(A)\|_{(k)}+\|T(B)\|_{(k)}\), where \(\|\cdot\|_{(k)}\) is a Ky-Fan \(k\)-norm. Then, \(T\) is a scalar multiple of a linear isometry of \((\mathbb{M}_{n}(\mathbb{C}),\|\cdot\|_{(k)})\) and hence takes one of the forms from Theorem 2.1._

We will first present the proof of the theorem for the trace norm (i.e., for \(k=n\)) and then for general Ky-Fan \(k\)-norms with \(1<k<n\). We start by listing some equivalent conditions for \(A,B\in\mathbb{M}_{n}(\mathbb{C})\) to be parallel relative to the Ky-Fan \(k\)-norm. We will write 'psd' for a positive semidefinite matrix and we will denote the set of all \(n\times n\) psd matrices by \(\mathrm{psd}_{n}\). A complex unit is a complex number with modulus one. For notational simplicity, we write \(\mathbb{M}_{n}\) instead of \(\mathbb{M}_{n}(\mathbb{C})\) in this section.

**Proposition 2.3**.: _Let \(1<k\leq n\). Two matrices \(A,B\in\mathbb{M}_{n}\) are parallel if and only if any one of the following conditions holds._

(a) _There are unitary \(U,V\in\mathbb{M}_{n}\) and a complex unit \(\mu\) such that \(U^{*}(A+\mu B)V=\operatorname{diag}\left(c_{1},\ldots,c_{n}\right)\) with \(c_{1}\geq\cdots\geq c_{n}\) satisfying \(\sum_{j=1}^{k}c_{j}=\|A\|_{(k)}+\|B\|_{(k)}\)._

(b) _There are unitary \(U,V\in\mathbb{M}_{n}\) and a complex unit \(\mu\) such that \(U^{*}AV=A_{1}\oplus A_{2}\) and \(U^{*}BV=B_{1}\oplus B_{2}\), where \(A_{1}=\operatorname{diag}\left(s_{1}(A),\ldots,s_{k}(A)\right)\) and \(\mu B_{1}\) is psd with eigenvalues \(s_{1}(B),\ldots,s_{k}(B)\)._

(c) _There are \(n\times k\) matrices \(U_{1},V_{1}\) and a complex unit \(\mu\) such that \(U_{1}^{*}U_{1}=V_{1}^{*}V_{1}=I_{k}\), \(U_{1}^{*}AV_{1}=\operatorname{diag}\left(s_{1}(A),\ldots,s_{k}(A)\right)\), and \(\mu U_{1}^{*}BV_{1}\) is psd with eigenvalues \(s_{1}(B),\ldots,s_{k}(B)\)._

(d)
_There are two orthonormal sets_ \(\{u_{1},\ldots,u_{k}\},\{v_{1},\ldots,v_{k}\}\subseteq\mathbb{C}^{n}\) _such that_ \(Av_{j}=s_{j}(A)u_{j}\) _for_ \(j=1,\ldots,k\) _and_ \([u_{1}|\cdots|u_{k}]^{*}B[v_{1}|\cdots|v_{k}]\) _is a unit multiple of a psd matrix with eigenvalues_ \(s_{1}(B),\ldots,s_{k}(B)\)_._ Proof.: Condition (a) is clearly equivalent to \(A\|B\). Suppose (a) holds. Let \(U^{*}AV\) and \(U^{*}BV\) have leading \(k\times k\) submatrices \(A_{1}\) and \(B_{1}\). Then, by the assumptions in (a) and by [10, Lemma 2], \[\|A\|_{(k)}+\|B\|_{(k)}=|\mathrm{Tr}(A_{1}+\mu B_{1})|\leq|\mathrm{Tr}A_{1}|+| \mathrm{Tr}B_{1}|\leq\|A\|_{(k)}+\|B\|_{(k)}.\] Thus, \(|\mathrm{Tr}A_{1}|=\sum_{j=1}^{k}s_{j}(A)\) and \(|\mathrm{Tr}B_{1}|=\sum_{j=1}^{k}s_{j}(B)\). By [5, Corollary 3.2] there are complex units \(\mu_{1},\mu_{2}\) such that \(U^{*}AV=A_{1}\oplus A_{2}\) and \(U^{*}BV=B_{1}\oplus B_{2}\), where \(\mu_{1}A_{1}\) is psd with eigenvalues \(s_{1}(A),\ldots,s_{k}(A)\) and \(\mu_{2}B_{1}\) is psd with eigenvalues \(s_{1}(B),\ldots,s_{k}(B)\), respectively. Let \(W\in\mathbb{M}_{k}\) be a unitary with \(\mu_{1}W^{*}A_{1}W=\operatorname{diag}\left(s_{1}(A),\ldots,s_{k}(A)\right)\). We get condition (b) if we replace \((U,V)\) by \((U(W\oplus I_{n-k}),\mu_{1}V(W\oplus I_{n-k}))\) and then let \(\mu=\bar{\mu}_{1}\mu_{2}\). If (b) holds, then we have \[\|A+\mu B\|_{(k)}=\|U^{*}(A+\mu B)V\|_{(k)}\geq|\mathrm{Tr}A_{1}+\mu\mathrm{ Tr}B_{1}|=\|A\|_{(k)}+\|B\|_{(k)}.\] Thus, \(A\) and \(B\) are parallel. The equivalence of (b), (c), (d) is clear. We remark that we will rely heavily on the equivalence between (a) and (b) of the above proposition in our subsequent proofs. The other equivalences might be of independent interest. A key step in our proofs is to show that a bijective linear map preserving parallel pairs has the form \(T\colon X\mapsto MXN\) and \(T\colon X\mapsto MX^{t}N\) for invertible matrices \(M\) and \(N\). One can then finish the proof using the following lemma. **Lemma 2.4**.: _Let \(M,N\in\mathbb{M}_{n}\) be invertible matrices. The maps \(T\colon X\mapsto MXN\) and \(T\colon X\mapsto MX^{t}N\) preserve parallel pairs with respect to a Ky-Fan \(k\)-norm \((1<k\leq n)\) if and only if both \(M\) and \(N\) are scalar multiples of a unitary matrix._ Proof.: Let \(M=X_{1}D_{1}Y_{1}\) and \(N=X_{2}D_{2}Y_{2}\), where \(X_{1},X_{2},Y_{1},Y_{2}\in\mathbb{M}_{n}\) are unitary, and \(D_{1}=\operatorname{diag}\left(\xi_{1},\ldots,\xi_{n}\right)\) and \(D_{2}=\operatorname{diag}\left(\eta_{1},\ldots,\eta_{n}\right)\) with \(\xi_{1}\geq\cdots\geq\xi_{n}>0\) and \(\eta_{1}\geq\cdots\geq\eta_{n}>0\). We can replace \(T\) by the linear map \(\Psi\) defined by \[A\mapsto[X_{1}^{*}T(Y_{1}^{*}AX_{2}^{*})Y_{2}^{*}]/(\xi_{1}\eta_{1}),\] or by \[A\mapsto[X_{1}^{*}T((Y_{1}^{*}AX_{2}^{*})^{t})Y_{2}^{*}]/(\xi_{1}\eta_{1})\] if the transpose map is involved. Then \(\Psi\) will also preserve parallel pairs and has the simple form \(A\mapsto\hat{D}_{1}A\hat{D}_{2}\), where \(\hat{D}_{1}=D_{1}/\xi_{1}\), \(\hat{D}_{2}=D_{2}/\eta_{1}\) have diagonal entries \(1,\hat{\xi}_{2},\ldots,\hat{\xi}_{n}\) and \(1,\hat{\eta}_{2},\ldots,\hat{\eta}_{n}\). Observe that \(G_{1}=e_{1}e_{1}^{*}\) is parallel to \(G_{2}=(e_{1}+e_{2})(e_{1}+e_{2})^{*}\) because they are both rank-one psd matrices. Hence, \(\Psi(G_{1})=e_{1}e_{1}^{*}\) and \(\Psi(G_{2})=(e_{1}+\hat{\xi}_{2}e_{2})(e_{1}+\hat{\eta}_{2}e_{2})^{*}\) are also parallel and both belong to a subspace \(\mathbb{M}_{2}\oplus 0_{n-2}\). 
Since parallelism is defined by the norm, \(\Psi(G_{1}),\Psi(G_{2})\) are parallel if and only if their compressions to \(\mathbb{M}_{2}\) are parallel. By Proposition 2.3(b), there exist unitary \(U_{1},V_{1}\in\mathbb{M}_{2}\) such that \(U_{1}\Psi(G_{1})V_{1}=U_{1}e_{1}e_{1}^{*}V_{1}=e_{1}e_{1}^{*}\) and \(U_{1}\Psi(G_{2})V_{1}\in\mathbb{M}_{2}\) is a scalar multiple of a psd. From the first identity, \(U_{1},V_{1}\) are both diagonal unitary matrices, and then the second requirement, together with \(\hat{\eta}_{2},\hat{\xi}_{2}>0\), implies \(\hat{\eta}_{2}=\hat{\xi}_{2}\). Similarly, \(\hat{\eta}_{i}=\hat{\xi}_{i}\) for each \(i\), so

\[\hat{D}_{1}=\hat{D}_{2}=:D=\operatorname{diag}{(1,\hat{\eta}_{2},\ldots,\hat{\eta}_{n})}.\]

Consider next \(G_{1}=e_{1}(e_{1}+e_{2})^{*}\) and \(G_{2}=e_{2}(-e_{1}+e_{2})^{*}\), which are also parallel because, for \(W=\left(\begin{array}{cc}\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\end{array}\right)\oplus I_{n-2}\), \(G_{1}W\) and \(G_{2}W\) are both rank-one psd matrices. They are mapped into the parallel pair

\[\Psi(G_{1})=DG_{1}D=e_{1}(e_{1}+\hat{\eta}_{2}e_{2})^{*}\quad\text{and}\quad\Psi(G_{2})=DG_{2}D=\hat{\eta}_{2}e_{2}(-e_{1}+\hat{\eta}_{2}e_{2})^{*}.\]

Again we may consider their parallelism restricted to \(\mathbb{M}_{2}\). Up to an inessential scaling, the only possible unitary matrices \(U,V\in\mathbb{M}_{2}\) for which \(U\Psi(G_{1})V=\operatorname{diag}{(*,0)}\in\mathbb{M}_{2}\) are \(U=\left(\begin{smallmatrix}1&0\\ 0&\epsilon\end{smallmatrix}\right)\) and \(V=\frac{1}{\sqrt{\hat{\eta}_{2}^{2}+1}}\left(\begin{smallmatrix}1&\delta\hat{\eta}_{2}\\ \hat{\eta}_{2}&-\delta\end{smallmatrix}\right)\) with \(|\epsilon|=|\delta|=1\). However, with these unitary matrices, the matrix \(U\Psi(G_{2})V\) is a scalar multiple of a psd if and only if \(|\hat{\eta}_{2}|=1\). Likewise we can argue to show that \(|\hat{\eta}_{i}|=1\). Hence, \(D=I_{n}\), so \(M\) and \(N\) are indeed scalar multiples of unitary matrices.

Recall that a cone \(\mathcal{C}\) in \(\mathbb{M}_{n}(\mathbb{C})\) is a convex set which is closed under multiplication with non-negative real numbers. Observe that we assume that cones are pointed, i.e., they contain \(0\). A cone (of affine dimension \(d\)) is maximal if it is not properly contained in a larger cone (of affine dimension \(d\)). For a given \(A\in\mathbb{M}_{n}\), we shall make heavy use of the set

\[P(A)=\{B\in\mathbb{M}_{n}:\text{$B$ is parallel to $A$}\}.\]

One important observation is that if \(T\colon\mathbb{M}_{n}\to\mathbb{M}_{n}\) is a bijective linear map such that \(T(A)\) is parallel to \(T(B)\) whenever \(A\) is parallel to \(B\), then we have

\[T(P(A))\subseteq P(T(A)),\quad\text{and hence}\quad T(\operatorname{Span}P(A))\subseteq\operatorname{Span}P(T(A)).\]

### The trace norm

We start with a characterization of a singular matrix \(A\) in terms of the existence of a particular convex cone in \(P(A)\) with respect to the trace norm.

**Lemma 2.5**.: _Let \(\|\cdot\|\) be the trace norm on \(\mathbb{M}_{n}\). Then \(A\in\mathbb{M}_{n}\) is singular if and only if there is a convex cone \(\mathcal{C}\) such that \(\{A,B,-B\}\subseteq\mathcal{C}\subseteq P(A)\) for some \(B\) not equal to a multiple of \(A\)._

Proof.: Let \(A=U\operatorname{diag}\left(a_{1},\ldots,a_{n}\right)V\) be the singular value decomposition, with \(a_{1}\geq\cdots\geq a_{n}\geq 0\). If \(A\) is singular, then \(a_{n}=0\). Let \(B=UE_{nn}V\).
Then \(\mathcal{C}=\{t_{1}A+t_{2}B+t_{3}(-B):t_{1},t_{2},t_{3}\geq 0\}\) is the desired cone. If \(A\) is invertible then \(a_{n}\neq 0\). Assume to the contrary that the cone \(\mathcal{C}\) and matrix \(B\) do exist. Since \(U^{*}P(A)V^{*}=P(U^{*}AV^{*})\) we may apply the bijection \(X\mapsto U^{*}XV^{*}\) and assume from the start that \(A=\operatorname{diag}\left(a_{1},\ldots,a_{n}\right)\). Then \(tA+B\in\mathcal{C}\subseteq P(A)\) for all \(t\geq 0\), so by Proposition 2.3(b), there are unitary \(X_{t},Y_{t}\) and a complex unit \(e^{i\phi_{t}}\) with

\[X_{t}AY_{t}=A\quad\text{ and }\quad X_{t}(tA+B)Y_{t}\in e^{i\phi_{t}}\mathrm{psd}_{n},\qquad t\geq 0. \tag{2.2}\]

By grouping equal eigenvalues in \(A\) and considering \(Y_{t}^{*}A^{*}AY_{t}=(X_{t}AY_{t})^{*}(X_{t}AY_{t})=A^{*}A\), we see that \(Y_{t}\), and similarly \(X_{t}\), inherit the block-diagonal structure \(Y_{t}=Y_{1,t}\oplus\cdots\oplus Y_{j,t}\) and \(X_{t}=X_{1,t}\oplus\cdots\oplus X_{j,t}\) with block sizes equal to the multiplicities of the corresponding eigenvalues in \(A\). Moreover, since the eigenvalues of \(A\) are positive, \(X_{k,t}=Y_{k,t}^{*}\) and hence \(X_{t}=Y_{t}^{*}\). Using this in (2.2) we deduce that \(A^{1/2}(tI_{n}+A^{-1/2}BA^{-1/2})A^{1/2}=tA+B\in e^{i\phi_{t}}\mathrm{psd}_{n}\), and hence also

\[tI_{n}+A^{-1/2}BA^{-1/2}\in e^{i\phi_{t}}\mathrm{psd}_{n};\qquad t\geq 0.\]

At \(t=0\) we get \(A^{-1/2}BA^{-1/2}=e^{i\phi}P\) for some psd matrix \(P\) and hence

\[tI_{n}+e^{i\phi}P\in e^{i\phi_{t}}\mathrm{psd}_{n}.\]

Thus for each \(t\geq 0\), the nonzero eigenvalues of \(tI_{n}+e^{i\phi}P\) all have the same argument. Since \(B\) is not a scalar multiple of \(A\), \(P\geq 0\) is not a scalar matrix. Let \(0\leq\lambda_{1}<\lambda_{2}\) be eigenvalues of \(P\) and choose \(t\in(\lambda_{1},\lambda_{2})\). Then \(t+e^{i\phi}\lambda_{1}\) and \(t+e^{i\phi}\lambda_{2}\) have the same argument, so \(e^{i\phi}=1\). Consequently \(B\) is psd. But applying the same reasoning to \(-B\), we see that \(-B\) is psd, which is a contradiction. Hence, if \(A\) is invertible, no such cone \(\mathcal{C}\) exists.

Proof of Theorem 2.1 for the trace norm.: We first show that \(T\) maps singular matrices into singular ones. If \(A\in\mathbb{M}_{n}\) is singular, there is a cone \(\mathcal{C}\) of the form described in Lemma 2.5. So, \(\{T(A),T(B),-T(B)\}\subseteq T(\mathcal{C})\subseteq T(P(A))\subseteq P(T(A))\). Since \(T(\mathcal{C})\) is a cone, \(T(A)\) is singular again by Lemma 2.5. Thus \(T\) is a bijective map sending singular matrices to singular matrices. By Dieudonné's result [1], \(T\) has the standard form \(A\mapsto MAN\) or \(A\mapsto MA^{t}N\) for some invertible \(M,N\in\mathbb{M}_{n}\). By Lemma 2.4, \(M\) and \(N\) are both scalar multiples of a unitary matrix.

### The Ky-Fan norm for \(1<k<n\)

The proof for the case when \(1<k<n\) is a bit involved. Let \(T\colon\mathbb{M}_{n}\to\mathbb{M}_{n}\) be a bijective linear map preserving parallel pairs relative to \(\|\cdot\|_{(k)}\). We give an outline of the proof in the steps below, which will be proved subsequently.

**Step 1.** We will show that \(\operatorname{Span}P(A)\) (i.e., the complex-linear span of \(P(A)\)) has dimension at least \(k^{2}+(n-k)^{2}\), with equality if and only if \(s_{k}(A)>s_{k+1}(A)\). These conditions hold if and only if there are unitary \(U,V\) such that \(U(\operatorname{Span}P(A))V=\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\).
Consequently, for such an \(A\), \(X=T^{-1}(A)\) must also satisfy \(s_{k}(X)>s_{k+1}(X)\), and there are unitary \(P,Q\) such that \(P(\operatorname{Span}P(X))Q=\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\).

**Step 2.** We use the results in Step 1 to analyze the behaviour of the set

\[\mathcal{C}=\{X_{1}\oplus X_{2}\in\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}:X_{1}\text{ is a positive semidefinite matrix with }s_{k}(X_{1})\geq s_{1}(X_{2})\}\]

under \(T\). It turns out that \(\mathcal{C}\) is a cone and that, under the map \(T\), the relative interior (respectively, the relative boundary) of \(\mathcal{C}\) is mapped exactly onto the relative interior (respectively, the relative boundary).

**Step 3.** Another important set useful in our analysis will be

\[\operatorname{Pert}(X)=\{Z\in\partial\mathcal{C}:X\pm tZ\in\mathcal{C}\text{ for sufficiently small }t>0\}.\]

We will analyze when the real dimension of \(\operatorname{Span}_{\mathbb{R}}(\operatorname{Pert}(X))\) (i.e., the real-linear span of \(\operatorname{Pert}(X)\)) equals one; this will lead us to prove that the rank one matrices in \(\partial\mathcal{C}\) are mapped exactly to rank one matrices.

**Step 4.** We will prove that \(T\) sends rank one matrices to rank one matrices and use standard results to conclude Theorem 2.1.

We start by counting the dimension of the span of \(P(A)\) subject to different choices of the matrix \(A\). In part (b) we will encounter matrices of the form \(W=I_{p}\oplus W_{1}\oplus I_{n-q}\in\mathbb{M}_{p}\oplus\mathbb{M}_{q-p}\oplus\mathbb{M}_{n-q}\) where \(0\leq p<q\leq n\). In the extremal cases \(p=0\) and \(q=n\) we agree that \(I_{0}\) and \(\mathbb{M}_{0}\) indicate that the corresponding summand is omitted.

**Lemma 2.6**.: _Let \(A\in\mathbb{M}_{n}(\mathbb{C})\) and let \(U,V\in\mathbb{M}_{n}(\mathbb{C})\) be unitary matrices such that \(U^{*}AV=\operatorname{diag}\left(s_{1}(A),\ldots,s_{n}(A)\right)\). Then, the following holds._

(a) _Suppose \(s_{k}(A)>s_{k+1}(A)\). Then \(B\in P(A)\) if and only if \(U^{*}BV=B_{1}\oplus B_{2}\) such that \(\mu B_{1}\in\mathbb{M}_{k}(\mathbb{C})\) is psd for some complex unit \(\mu\) and \(|\mathrm{Tr}B_{1}|=\|B\|_{(k)}\). Moreover,_

\[\operatorname{Span}P(A)=U(\mathbb{M}_{k}(\mathbb{C})\oplus\mathbb{M}_{n-k}(\mathbb{C}))V^{*}.\]

_Consequently, \(\operatorname{Span}P(A)\) has dimension \(k^{2}+(n-k)^{2}=n^{2}-2kn+2k^{2}\)._

(b) _Suppose \(s_{k}(A)=s_{k+1}(A)\)._
Let_ \(p\geq 0\) _and_ \(q\leq n\) _be the smallest integer and the largest integer such that_ \(s_{p+1}(A)=s_{k}(A)=s_{q}(A)\)_._ * _If_ \(q\neq n\) _or_ \(A\) _is invertible, then_ \(B\in P(A)\) _if and only if there is a unitary_ \(W\) _of the form_ \(W=I_{p}\oplus W_{1}\oplus I_{n-q}\in\mathbb{M}_{p}(\mathbb{C})\oplus\mathbb{M} _{q-p}(\mathbb{C})\oplus\mathbb{M}_{n-q}(\mathbb{C})\) _such that_ \(B=UW(B_{1}\oplus B_{2})W^{*}V^{*}\)_,_ \(\mu B_{1}\in\mathbb{M}_{k}(\mathbb{C})\) _is psd for some complex unit_ \(\mu\)_, and_ \(\|B_{1}\|_{(k)}=\|B\|_{(k)}\)_._ * _If_ \(q=n\) _and_ \(A\) _is singular, then_ \(B\in P(A)\) _if and only if there exist unitary_ \(W_{2},W_{3}\in\mathbb{M}_{n-p}(\mathbb{C})\) _such that_ \(B=U(I_{p}\oplus W_{2})(B_{1}\oplus B_{2})(I_{p}\oplus W_{3})V^{*}\)_,_ \(\mu B_{1}\in\mathbb{M}_{k}(\mathbb{C})\) _is psd for some complex unit_ \(\mu\)_, and_ \(\|B_{1}\|_{(k)}=\|B\|_{(k)}\)_._ _Consequently,_ \(\operatorname{Span}P(A)\) _has dimension at least_ \(k^{2}+2k(q-k)+(n-k)^{2}\)_._ Proof.: Since \(P(XAY)=XP(A)Y\) for all unitaries \(X,Y\in\mathbb{M}_{n}\), we may assume that \(A\) is the diagonal matrix \(\operatorname{diag}\left(s_{1}(A),\ldots,s_{n}(A)\right)\). Let \(\{e_{1},\ldots,e_{n}\}\) be the standard basis for \(\mathbb{C}^{n}\). Then \(Ae_{j}=s_{j}(A)e_{j}\) for \(j=1,\ldots,n\). (a) Suppose \(s_{k}(A)>s_{k+1}(A)\). Let \(B\in P(A)\). By Proposition 2.3, there are unitary \(U,V\in\mathbb{M}_{n}\) and a complex unit \(\mu\) such that \(U^{*}AV=A_{1}\oplus A_{2}\) with \(A_{1}=\operatorname{diag}\left(s_{1}(A),\ldots,s_{k}(A)\right)\) and \(U^{*}BV=B_{1}\oplus B_{2}\) such that \(\mu B_{1}\in\mathbb{M}_{k}\) is psd with eigenvalues \(s_{1}(B),\ldots,s_{k}(B)\). Since \(s_{k}(A)>s_{k+1}(A)\), the first \(k\) columns of \(V\) are eigenvectors corresponding to the \(k\) largest eigenvalues of \(A^{*}A\). So, the first \(k\) columns of \(V\) lie in the span of \(\{e_{1},\ldots,e_{k}\}\). Thus, \(V=V_{1}\oplus V_{2}\) with \(V_{1}\in\mathbb{M}_{k}\). Similarly, we can show that \(U=U_{1}\oplus U_{2}\) with \(U_{1}\in\mathbb{M}_{k}\). Then \(U_{1}^{*}A_{1}V_{1}=A_{1}\), so \(V_{1}^{*}A_{1}^{*}A_{1}V_{1}=A_{1}^{*}A_{1}\). Thus \(V_{1}\) commutes with \(A_{1}^{2}\), so \(V_{1}\) commutes with \(A_{1}\) and \(A_{1}=U_{1}^{*}A_{1}V_{1}=U_{1}^{*}V_{1}A_{1}.\) Since \(A_{1}\) is invertible, \(U_{1}=V_{1}\). Hence \(B=(U_{1}B_{1}V_{1}^{*})\oplus(U_{2}B_{2}V_{2}^{*})\). So, every matrix \(B\) in \(P(A)\) has the form \(\hat{B}_{1}\oplus\hat{B}_{2}\), where \(\hat{B}_{1}\) is a scalar multiple of a psd. Hence \(\operatorname{Span}P(A)\subseteq\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\). Conversely, note that every psd matrix \(B=B_{1}\oplus B_{2}\) satisfying \(s_{k}(B_{1})\geq s_{1}(B_{2})\) lies in \(P(A)\). The span of such matrices is \(\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\), so (a) follows. (b) Suppose \(s_{k}(A)=s_{k+1}(A)\) and \(p,q\) satisfy the hypothesis. By Proposition 2.3, we have \(B\in P(A)\) if and only if there are unitary \(X,Y\) such that \(X^{*}AY=A=\operatorname{diag}\,(s_{1}(A),\ldots,s_{n}(A))\) and \(X^{*}BY=B_{1}\oplus B_{2}\in\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\), with \(\mu B_{1}\) psd for some complex unit \(\mu\) and \(\|B_{1}\|_{(k)}=\|B\|_{(k)}\). 
By grouping the same singular values in \(A\) and considering \(Y^{*}A^{*}AY=(X^{*}AY)^{*}(X^{*}AY)=A^{*}A\) we see that \(Y\), and similarly \(X\), inherit the block-diagonal structure \(X=X_{1}\oplus\cdots\oplus X_{t}\) and \(Y=Y_{1}\oplus\cdots\oplus Y_{t}\) with block sizes equal to the multiplicities of the corresponding singular values in \(A\). Moreover, if the singular value for the \(j\)th group is nonzero, then \(X_{j}=Y_{j}\). Therefore \(B\in P(A)\) if and only if

\[B=(X_{1}\oplus X_{2}\oplus\cdots\oplus X_{t})(\mu B_{1}\oplus B_{2})(X_{1}\oplus X_{2}\oplus\cdots\oplus Y_{t})^{*}\]

for some complex unit \(\mu\) and psd \(B_{1}\in\mathbb{M}_{k}\) with \(\|B_{1}\|_{(k)}=\|B\|_{(k)}\), where \(X_{j},Y_{j}\) are all unitary, and \(Y_{t}=X_{t}\) if \(s_{n}(A)>0\), i.e., if \(A\) is invertible. Let the index \(m\geq 1\) be such that \(X_{1}\oplus\cdots\oplus X_{m}\) is of size \(q\). Let \(W=I_{p}\oplus X_{m}\oplus I_{n-q}\), where \(I_{0}\) indicates that the corresponding summand is omitted. Let \(\hat{X}_{j}=X_{j}\) for \(j\neq m\) and \(\hat{X}_{m}=I_{q-p}\). Then if \(q\neq n\),

\[B=W(\oplus_{j=1}^{t-1}\hat{X}_{j}\oplus X_{t})(\mu B_{1}\oplus B_{2})(\oplus_{j=1}^{t-1}\hat{X}_{j}\oplus Y_{t})^{*}W^{*}=W(\mu\hat{B}_{1}\oplus\hat{B}_{2})W^{*},\]

where \(\hat{B}_{1}\in\mathbb{M}_{k}\) is psd with \(\|\hat{B}_{1}\|_{(k)}=\|B\|_{(k)}\). When \(q=n\) the above still holds if \(A\) is invertible, but if \(A\) is singular with \(q=n\) then

\[B=(I_{p}\oplus X_{m})(\mu\hat{B}_{1}\oplus\hat{B}_{2})(I_{p}\oplus Y_{m})\]

for some unitary \(X_{m},Y_{m}\in\mathbb{M}_{n-p}\).

For the assertion about \(\operatorname{Span}P(A)\), let \(x\in\mathbb{C}^{q}\) be a unit vector and let \(B=xx^{*}\oplus 0_{n-q}\). There exists a unitary \(W_{1}\in\mathbb{M}_{q-p}\) such that \((I_{p}\oplus W_{1})x\in\mathbb{C}^{p+1}\oplus 0_{q-p-1}\subseteq\mathbb{C}^{k}\oplus 0_{q-k}\). Letting \(W=I_{p}\oplus W_{1}\oplus I_{n-q}\) we see that \(WBW^{*}=yy^{*}\oplus 0_{n-k}\) for some unit vector \(y\in\mathbb{C}^{k}\), so \(B\in P(A)\). Thus \(\operatorname{Span}P(A)\) contains \(\mathbb{M}_{q}\oplus 0_{n-q}\). Since \(tI_{k}\oplus B_{2}\in P(A)\) for any \(B_{2}\in\mathbb{M}_{n-k}\) if \(t>0\) is sufficiently large, \(\operatorname{Span}P(A)\) also contains \(0_{k}\oplus\mathbb{M}_{n-k}\). Thus \(\dim\operatorname{Span}P(A)\) is at least \(k^{2}+2k(q-k)+(n-k)^{2}\). \(\Box\)

As a corollary, we get an equivalent condition for \(\operatorname{Span}P(A)=\operatorname{Span}P(B)\) for \(A,B\in\mathbb{M}_{n}\) with \(s_{k}(A)>s_{k+1}(A)\).

**Corollary 2.7**.: _Let \(A,B\in\mathbb{M}_{n}(\mathbb{C})\) with \(s_{k}(A)>s_{k+1}(A)\), and \(A=A_{1}\oplus A_{2}\) with \(A_{1}\in\mathbb{M}_{k}(\mathbb{C})\) such that \(\sum_{j=1}^{k}s_{j}(A_{1})=\|A\|_{(k)}\). Then,_

\[\operatorname{Span}P(A)=\operatorname{Span}P(B)\]

_if and only if \(s_{k}(B)>s_{k+1}(B)\) and \(B=B_{1}\oplus B_{2}\) is such that_ (i) _\(B_{1}\in\mathbb{M}_{k}(\mathbb{C})\) satisfies \(\sum_{j=1}^{k}s_{j}(B_{1})=\|B\|_{(k)}\), or_ (ii) _\(n=2k\), and \(B_{2}\in\mathbb{M}_{k}(\mathbb{C})\) satisfies \(\sum_{j=1}^{k}s_{j}(B_{2})=\|B\|_{(k)}\)._

Proof.: Let \(U=U_{1}\oplus U_{2},V=V_{1}\oplus V_{2}\in\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\) be unitary matrices such that \(U^{*}AV=\operatorname{diag}\left(s_{1}(A),\ldots,s_{n}(A)\right)\). By Lemma 2.6, \(\operatorname{Span}P(A)\) equals \(\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\). If \(\operatorname{Span}P(A)=\operatorname{Span}P(B)\) then both spans have the minimal possible dimension, so \(s_{k}(B)>s_{k+1}(B)\).
Let \(X,Y\) be unitaries such that \(X^{*}BY=\operatorname{diag}\left(s_{1}(B),\ldots,s_{n}(B)\right)\). By Lemma 2.6, \(X(\mathbb{M}_{k}\oplus\mathbb{M}_{n-k})Y^{*}=\operatorname{Span}P(B)=\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\). It follows that both \(X,Y\) are block-diagonal unitaries, or \(n=2k\) and \(JX,JY\) are block-diagonal unitaries, where \(J=\left[\begin{smallmatrix}0_{k}&I_{k}\\ I_{k}&0_{k}\end{smallmatrix}\right]\). Thus \(B=B_{1}\oplus B_{2}\) and either condition (i) or (ii) holds. The converse follows readily from Lemma 2.6.

**Corollary 2.8**.: _Suppose a bijective linear \(T\colon\mathbb{M}_{n}\to\mathbb{M}_{n}\) preserves parallel pairs relative to the Ky-Fan \(k\)-norm for \(1<k<n\). If \(T(X)\) is such that \(s_{k}(T(X))>s_{k+1}(T(X))\), then \(s_{k}(X)>s_{k+1}(X)\) and \(T(\operatorname{Span}P(X))=\operatorname{Span}P(T(X))\)._

Proof.: Since a preserver \(T\) satisfies \(T(P(X))\subseteq P(T(X))\), and \(T\) is bijective, one has

\[\dim\operatorname{Span}P(X)=\dim T(\operatorname{Span}P(X))\leq\dim\operatorname{Span}P(T(X)).\]

Lemma 2.6 shows that \(\operatorname{Span}P(T(X))\) has the minimal possible dimension, and hence so does \(\operatorname{Span}P(X)\), whence \(s_{k}(X)>s_{k+1}(X)\), and consequently \(T(\operatorname{Span}P(X))=\operatorname{Span}P(T(X))\).

Now we proceed towards proving that a bijective linear map which preserves parallel pairs relative to a Ky-Fan \(k\)-norm for \(1<k<n\) is bijective on a subspace \(\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\) and maps the cone \(\mathcal{C}\) onto itself, where \(\mathcal{C}\) is defined as follows.

**Definition 2.9**.: _Define_

\[\mathcal{C}=\{X_{1}\oplus X_{2}\in\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}:X_{1}\text{ is a positive semidefinite matrix with }s_{k}(X_{1})\geq s_{1}(X_{2})\}.\]

Recall that for a subset \(S\subseteq\mathbb{M}_{n}\), we use \(\operatorname{Span}(S)\) for its complex span and \(\operatorname{Span}_{\mathbb{R}}(S)\) for its real span.

**Proposition 2.10**.: _The following holds._

(a) _The set \(\mathcal{C}\) is a pointed convex cone._

(b) _Its affine hull is \(\operatorname{Span}_{\mathbb{R}}\mathcal{C}=\mathbb{H}_{k}\oplus\mathbb{M}_{n-k}\), where \(\mathbb{H}_{k}\) denotes the set of all \(k\times k\) hermitian matrices._

(c) _The relative interior, \(\mathcal{C}^{\circ}\), of \(\mathcal{C}\) consists of matrices of the form \(X_{1}\oplus X_{2}\) such that \(X_{1}\) is positive definite and \(s_{k}(X_{1})>s_{1}(X_{2})\)._

(d) _The relative boundary, \(\partial\mathcal{C}\), of \(\mathcal{C}\) consists of matrices of the form \(X_{1}\oplus X_{2}\) such that \(X_{1}\) is positive semi-definite and \(s_{k}(X_{1})=s_{1}(X_{2})\)._

(e) _We have \(\mathcal{C}\subseteq P(A)\) whenever \(A=A_{1}\oplus A_{2}\in\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\) is such that \(A_{1}\) is positive definite and \(s_{k}(A_{1})>s_{1}(A_{2})\)._

Proof.: (a) The set \(\mathcal{C}\) is clearly closed under multiplication by positive numbers. It is also convex because \(X=X_{1}\oplus X_{2}\in\mathcal{C}\) is equivalent to \(X_{1}\geq 0\) and \(\|X\|_{(k)}=\operatorname{Tr}X_{1}\). So, if \(Y=Y_{1}\oplus Y_{2}\) is also in \(\mathcal{C}\), then \(\|X+Y\|_{(k)}\leq\|X\|_{(k)}+\|Y\|_{(k)}=\operatorname{Tr}(X_{1})+\operatorname{Tr}(Y_{1})=\operatorname{Tr}(X_{1}+Y_{1})=\sum_{j=1}^{k}s_{j}(X_{1}+Y_{1})\leq\|X+Y\|_{(k)}\); hence equality holds throughout, so \(\|X+Y\|_{(k)}=\operatorname{Tr}(X_{1}+Y_{1})\) with \(X_{1}+Y_{1}\geq 0\), i.e., \(X+Y\in\mathcal{C}\). (b)--(d) are straightforward exercises, while (e) follows from Proposition 2.3(b).
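As a quick numerical illustration of Definition 2.9 and Proposition 2.10(e) — again ours, not part of the paper — the following sketch samples elements of the cone \(\mathcal{C}\) and confirms that each is parallel, with \(\mu=1\), to a fixed interior point \(A\); the sizes and number of samples are arbitrary choices.

```python
import numpy as np

# Illustrative check of Proposition 2.10(e): every sampled element of the
# cone C is parallel to a fixed A = A_1 (+) A_2 with A_1 positive definite
# and s_k(A_1) > s_1(A_2). Here n = 5, k = 3 are arbitrary.
n, k = 5, 3
rng = np.random.default_rng(0)

def ky_fan(M, k):
    return np.linalg.svd(M, compute_uv=False)[:k].sum()

A = np.zeros((n, n), dtype=complex)
A[:k, :k] = np.diag([3.0, 2.5, 2.0])   # positive definite, s_k(A_1) = 2
A[k:, k:] = np.diag([1.0, 0.5])        # s_1(A_2) = 1 < 2, so A lies in C°

for _ in range(5):
    G = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    X1 = G @ G.conj().T                # random psd k x k block
    X2 = rng.standard_normal((n - k, n - k)).astype(complex)
    s = np.linalg.svd(X1, compute_uv=False)
    X2 *= s[-1] / (np.linalg.norm(X2, 2) + 1e-15)  # force s_1(X2) = s_k(X1)
    X = np.zeros((n, n), dtype=complex)
    X[:k, :k], X[k:, k:] = X1, X2      # X is an element of C
    gap = ky_fan(A + X, k) - ky_fan(A, k) - ky_fan(X, k)
    assert abs(gap) < 1e-8             # equality: X is parallel to A
print("all sampled X in C are parallel to A for the Ky-Fan 3-norm")
```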
**Proposition 2.11**.: _Let \(T\colon\mathbb{M}_{n}\to\mathbb{M}_{n}\) be a bijective linear map which preserves parallel pairs relative to a Ky-Fan \(k\)-norm for \(1<k<n\). Assume there exist \(A,B\in\mathcal{C}^{\circ}\) such that \(T(A)=B\). Then_

\[T(\mathcal{C}^{\circ})=\mathcal{C}^{\circ}\quad\text{ and }\quad T(\partial\mathcal{C})=\partial\mathcal{C}.\]

_Moreover, \(T\) maps \(\operatorname{Span}P(B)=\operatorname{Span}P(A)\) bijectively onto itself._

Proof.: Since the matrices \(A,B\) belong to the interior of the cone \(\mathcal{C}\), they both satisfy the assumption (a) of Lemma 2.6, so that \(\mathbf{V}:=\operatorname{Span}P(B)=\operatorname{Span}P(A)=\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\) and, by Corollary 2.8, \(T(\mathbf{V})=\mathbf{V}\). It remains to prove the first claim.

**Step 1**: We show that \(T\) maps \(\mathcal{C}^{\circ}\) into \(\mathcal{C}\).

We claim that if \(X=X_{1}\oplus X_{2}\) belongs to the interior of \(\mathcal{C}\), then \(T(X)=Y=Y_{1}\oplus Y_{2}\) is in \(\mathcal{C}\). To see this, observe that \(A+tX\) is parallel to \(A\) for all \(t>0\). So, \(T(A)+tT(X)=B+tY\) is parallel to \(B=B_{1}\oplus B_{2}\). Hence, \(B_{1}+tY_{1}\) is a multiple of a psd matrix and \(s_{k}(B_{1}+tY_{1})\geq s_{1}(B_{2}+tY_{2})\) for all \(t>0\). Letting \(t\to\infty\), we get \(s_{k}(Y_{1})\geq s_{1}(Y_{2})\) and \(Y_{1}=e^{i\phi}P\) for some psd matrix \(P\). Therefore

\[B_{1}+te^{i\phi}P=B_{1}^{1/2}(I_{k}+te^{i\phi}B_{1}^{-1/2}PB_{1}^{-1/2})B_{1}^{1/2}\in e^{i\varphi_{t}}\mathrm{psd}_{k};\qquad\varphi_{t}\in[0,2\pi].\]

Clearly then, by defining \(\hat{P}=B_{1}^{-1/2}PB_{1}^{-1/2}\), also

\[I_{k}+te^{i\phi}\hat{P}\in e^{i\varphi_{t}}\mathrm{psd}_{k};\qquad\varphi_{t}\in[0,2\pi],\;t\geq 0.\]

Thus for each \(t\geq 0\), the nonzero eigenvalues of \(I_{k}+te^{i\phi}\hat{P}\) all have the same argument. If \(\hat{P}\geq 0\) is not a scalar matrix, then let \(0\leq\lambda_{1}<\lambda_{2}\) be distinct eigenvalues of \(\hat{P}\) and choose \(t\in(\frac{1}{\lambda_{2}},\frac{1}{\lambda_{1}})\). Then \(1+te^{i\phi}\lambda_{1}\) and \(1+te^{i\phi}\lambda_{2}\) have the same argument, so \(e^{i\phi}=1\) and \(Y_{1}=P\). Consequently, either \(Y_{1}\) is psd or else \(\hat{P}=\lambda I_{k}\geq 0\) and \(Y_{1}=\alpha B_{1}\) for some number \(\alpha\in\mathbb{C}\setminus[0,\infty)\).

Assume there exist \(X,X^{\prime}\in\mathcal{C}^{\circ}\) with \(T(X)=\alpha B_{1}\oplus Y_{2}\) and \(T(X^{\prime})=Y_{1}^{\prime}\oplus Y_{2}^{\prime}\) such that \(\alpha B_{1}\) is not psd, while \(Y_{1}^{\prime}\) is psd but not a scalar multiple of \(B_{1}\). Since \(\mathcal{C}^{\circ}\) is a convex cone, the line segment \([X,X^{\prime}]:=\{tX+(1-t)X^{\prime}:0\leq t\leq 1\}\) lies in \(\mathcal{C}^{\circ}\), so the first block of its \(T\)-image,

\[t(\alpha B_{1})+(1-t)Y_{1}^{\prime} \tag{2.3}\]

is either a psd or a multiple of \(B_{1}\) for each fixed \(t\in[0,1]\). The latter option is possible only for \(t=1\). But the former option is contradictory because \(\alpha\ngeq 0\), so the diagonal entries of (2.3) cannot all be real and nonnegative for every \(t\in(0,1)\). This shows that either \(T\) maps \(\mathcal{C}^{\circ}\) into \(\mathcal{C}\) or else it maps \(\mathcal{C}^{\circ}\) into \(\mathbb{C}B_{1}\oplus\mathbb{M}_{n-k}\). The latter case is impossible because \(\operatorname{Span}\mathcal{C}^{\circ}=\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\) has greater dimension than \(\mathbb{C}B_{1}\oplus\mathbb{M}_{n-k}\).

**Step 2**: We show that \(T(\mathcal{C}^{\circ})=\mathcal{C}^{\circ}\).
To this end, we first show that \(T^{-1}\) maps \(\mathcal{C}^{\circ}\) into \(\mathcal{C}\cup-\mathcal{C}\). Choose \(Y\in\mathcal{C}^{\circ}\). By Corollary 2.7, \(\operatorname{Span}P(Y)=\operatorname{Span}P(I_{k}\oplus 0_{n-k})=\mathbf{V}\) and, by Corollary 2.8, \(X=T^{-1}(Y)\) satisfies \(s_{k}(X)>s_{k+1}(X)\) and

\[\operatorname{Span}P(X)=T^{-1}\operatorname{Span}P(Y)=T^{-1}\operatorname{Span}P(B)=\operatorname{Span}P(A). \tag{2.4}\]

So, \(X\in\operatorname{Span}P(A)=\mathbf{V}\), and hence \(X=X_{1}\oplus X_{2}\in\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\). We claim that the \(k\) largest singular values of \(X=X_{1}\oplus X_{2}\) belong to its first block. Suppose \(n\neq 2k\). Then the claim follows by (2.4) and Corollary 2.7. Suppose \(n=2k\). Since \(\mathcal{C}^{\circ}\) is a cone, the line segment \(tB+(1-t)Y\) lies in \(\mathcal{C}^{\circ}\) for each \(t\in[0,1]\). Hence, as in (2.4), its preimage \(L(t)=T^{-1}(tB+(1-t)Y)=tA+(1-t)X\in\mathbb{M}_{k}\oplus\mathbb{M}_{k}\) consists of matrices with \(s_{k}(L(t))>s_{k+1}(L(t))\) for each \(t\). Moreover, by Corollary 2.7, for each \(t\) either the first \(k\) singular values of \(L(t)\) belong to the first block or they belong to the second block. By continuity and the connectedness of the interval \([0,1]\), they either belong to the first block for each \(t\in[0,1]\) or they belong to the second block for each \(t\in[0,1]\). At \(t=1\), the segment equals \(L(1)=A\), whose first \(k\) singular values are in the first block. Hence, also at \(t=0\) they are in the first block, that is, \(s_{k}(X_{1})>s_{1}(X_{2})\).

Let \(X_{1}=U_{1}D_{1}V_{1}\) be its singular value decomposition, let \(U=U_{1}\oplus I_{n-k}\) and \(V=V_{1}\oplus I_{n-k}\) (so that \(U(D_{1}\oplus X_{2})V=X\)) and temporarily replace \(T\) by the map \(\hat{T}\colon Z\mapsto T(UZV)\). Observe that \(\hat{T}\) still preserves parallelism, leaves the set \(\mathbf{V}\) invariant, and maps \(D_{1}\oplus X_{2}\) into \(T(X)=Y\in\mathcal{C}^{\circ}\). By Step 1, \(\hat{T}\) maps the cone \(\mathcal{C}\) into itself. Then \(T\) maps the cones \(\mathcal{C}\) and \(U\mathcal{C}V\) into \(\mathcal{C}\). Hence, \(T\) also maps their real-linear span,

\[\operatorname{Span}_{\mathbb{R}}(\mathcal{C}\cup U\mathcal{C}V)=\operatorname{Span}_{\mathbb{R}}(\mathcal{C})+U\operatorname{Span}_{\mathbb{R}}(\mathcal{C})V,\]

into \(\operatorname{Span}_{\mathbb{R}}(\mathcal{C})=\mathbb{H}_{k}\oplus\mathbb{M}_{n-k}\). Observe that

\[U\operatorname{Span}_{\mathbb{R}}(\mathcal{C})V=(U_{1}\oplus I_{n-k})(\mathbb{H}_{k}\oplus\mathbb{M}_{n-k})(V_{1}\oplus I_{n-k})=(U_{1}V_{1}V_{1}^{*}\mathbb{H}_{k}V_{1})\oplus\mathbb{M}_{n-k}=W_{1}\mathbb{H}_{k}\oplus\mathbb{M}_{n-k};\qquad W_{1}=U_{1}V_{1}.\]

Unless \(W_{1}=\pm I_{k}\), the set \(W_{1}\mathbb{H}_{k}\) differs from \(\mathbb{H}_{k}\), in which case the real dimension of \(\operatorname{Span}_{\mathbb{R}}(\mathcal{C}\cup U\mathcal{C}V)\) is larger than the real dimension of \(\operatorname{Span}_{\mathbb{R}}(\mathcal{C})\). Since \(\operatorname{Span}_{\mathbb{R}}(\mathcal{C})\) contains the \(T\)-image of \(\operatorname{Span}_{\mathbb{R}}(\mathcal{C}\cup U\mathcal{C}V)\), this contradicts the injectivity of \(T\). Thus \(U_{1}V_{1}=W_{1}=\pm I_{k}\). Consequently, \(X_{1}=U_{1}D_{1}V_{1}=\pm U_{1}D_{1}U_{1}^{*}\) is either positive definite or negative definite. This shows that \(T^{-1}(\mathcal{C}\cup-\mathcal{C})\subseteq\mathcal{C}\cup-\mathcal{C}\). Since \(T\) also maps \(\mathcal{C}\cup-\mathcal{C}\) into itself, \(T(\mathcal{C}\cup-\mathcal{C})=\mathcal{C}\cup-\mathcal{C}\).
Since \(T\) is a homeomorphism, \(T\) maps the disconnected interior \(\mathcal{C}^{\circ}\cup-\mathcal{C}^{\circ}\) onto itself. Since \(T(A)=B\in\mathcal{C}^{\circ}\), we must have \(T(\mathcal{C}^{\circ})=\mathcal{C}^{\circ}\).

The next proposition proves that a bijective linear map on \(\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\) satisfying \(T(\mathcal{C}^{\circ})=\mathcal{C}^{\circ}\) and \(T(\partial\mathcal{C})=\partial\mathcal{C}\) maps rank one matrices of \(\partial\mathcal{C}\) into rank one matrices.

**Definition 2.12**.: _Given \(X\in\partial\mathcal{C}\), we define_

\[\operatorname{Pert}(X)=\{Z\in\partial\mathcal{C}:X\pm tZ\in\mathcal{C}\text{ for sufficiently small }t>0\}.\]

The following lemma determines when the real dimension of \(\operatorname{Span}_{\mathbb{R}}(\operatorname{Pert}(X))\) equals one.

**Lemma 2.13**.: _Let \(X\in\partial\mathcal{C}\). Then \(\dim_{\mathbb{R}}\operatorname{Span}_{\mathbb{R}}(\operatorname{Pert}(X))=1\) if and only if_

_(1) \(X\) is psd and has rank one, or_

_(2) \(X=a(I_{k}\oplus P)\) for some \(a>0\) and unitary matrix \(P\)._

Proof.: Note \(\operatorname{Pert}(0)=\{0\}\), so we may assume \(X\neq 0\). Next note that, for any unitary matrices \(U_{1}\in\mathbb{M}_{k}\) and \(V_{2},W_{2}\in\mathbb{M}_{n-k}\), and any positive scalar \(c\), the sets \(\mathcal{C}\) and \(\partial\mathcal{C}\) are invariant under the bijective map \(\phi(X)=c(U_{1}\oplus V_{2})X(U_{1}^{*}\oplus W_{2})\). Consequently, \(Z\in\operatorname{Pert}(X)\) if and only if \(\phi(Z)\in\operatorname{Pert}(\phi(X))\), that is, \(\operatorname{Pert}(\phi(X))=\phi(\operatorname{Pert}(X))\). Thus we may assume that \(X=\operatorname{diag}\left(s_{1},\ldots,s_{n}\right)\) with \(1=s_{1}\geq\cdots\geq s_{n}\geq 0\).

We prove necessity by showing the contrapositive. Suppose \(X\) has rank at least \(2\) and \(s_{1}>s_{n}\). Observe that \(X\in\operatorname{Pert}(X)\), so it suffices to show \(\operatorname{Pert}(X)\) contains some \(Z\in\partial\mathcal{C}\) that is not a real scalar multiple of \(X\). If \(s_{k}>s_{n}\) then \(X+dE_{nn}\in\operatorname{Pert}(X)\) for \(d=(s_{k}-s_{n})/2\in\mathbb{R}\). If \(s_{k}=s_{n}\) then \(s_{1}>s_{k}\) and \(X+dE_{11}\in\operatorname{Pert}(X)\) for \(d=(s_{1}-s_{k})/2\in\mathbb{R}\). Necessity follows.

To prove sufficiency, first suppose \(X\) has rank one, so \(X=E_{11}\). Let \(Z=Z_{1}\oplus Z_{2}\in\operatorname{Pert}(X)\). Then \(E_{11}\pm tZ_{1}\) is psd for some \(t>0\), so \(Z_{1}\) cannot have a nonzero diagonal entry except for the first one. Since \(Z\in\partial\mathcal{C}\), \(Z_{1}\) is a nonnegative scalar multiple of \(E_{11}\) and \(Z_{2}=0\). Thus \(\operatorname{Pert}(X)\) consists of nonnegative scalar multiples of \(X\).

Next we may suppose \(X=I_{n}\). Let \(0\neq Z=Z_{1}\oplus Z_{2}\in\operatorname{Pert}(X)\). Since \(Z\in\partial\mathcal{C}\), \(s_{k}(Z_{1})=s_{1}(Z_{2})\). Also, for sufficiently small \(t>0\), \(I_{n}\pm tZ\in\mathcal{C}\), that is, \(I_{k}\pm tZ_{1}\geq 0\) and \(s_{k}(I_{k}\pm tZ_{1})\geq s_{1}(I_{n-k}\pm tZ_{2})\). Thus for sufficiently small \(t>0\), the singular values of the psd matrix \(I_{k}\pm tZ_{1}\) are its eigenvalues and

\[1-ts_{1}(Z_{1})=s_{k}(I_{k}-tZ_{1})\geq s_{1}(I_{n-k}-tZ_{2})=\|I_{n-k}-tZ_{2}\|\geq 1-t\|Z_{2}\|=1-ts_{1}(Z_{2})=1-ts_{k}(Z_{1}).\]

It follows that \(s_{1}(Z_{1})=s_{k}(Z_{1})\) and \(Z_{1}=bI_{k}\) for some \(b>0\); by scaling \(Z\) we may assume \(b=1\).
Then for sufficiently small \(t>0\) and all unit vectors \(v\in\mathbb{C}^{n-k}\),

\[1-t=s_{k}(I_{k}-tZ_{1})\geq\|I_{n-k}-tZ_{2}\|\geq|v^{*}(I_{n-k}-tZ_{2})v|\geq\operatorname{Re}\,v^{*}(I_{n-k}-tZ_{2})v=1-t\operatorname{Re}\,v^{*}Z_{2}v,\]

so \(1\leq\operatorname{Re}\,v^{*}Z_{2}v\). Thus the numerical range \(W(Z_{2})\) is contained in the closed half-plane \(\operatorname{Re}\,z\geq 1\) and in the closed unit disk (because \(\|Z_{2}\|=1\)), so \(W(Z_{2})=\{1\}\), whence \(Z_{2}=I_{n-k}\) and \(Z=I_{n}\). Thus \(\operatorname{Pert}(X)\) consists of nonnegative scalar multiples of \(X\). Sufficiency follows.

With Lemma 2.13 in mind we define two path-connected sets

\[\mathcal{S}_{1}=\{X\in\partial\mathcal{C}:\operatorname{rank}X=1\}=\{aR\oplus 0_{n-k}:a>0,\ R\in\mathbb{H}_{k}\text{ is a rank-one projection}\},\]
\[\mathcal{S}_{U}=\{a(I_{k}\oplus P):a>0,\ P\text{ is unitary}\}.\]

Notice that, by continuity of singular values, their union is not path-connected.

**Proposition 2.14**.: _Suppose \(1<k<n\) and \(T\) is a bijective linear map on \(\mathbb{M}_{k}\oplus\mathbb{M}_{n-k}\) satisfying \(T(\mathcal{C})=\mathcal{C}\) and \(T(\partial\mathcal{C})=\partial\mathcal{C}\). Then \(T\) maps \(\mathcal{S}_{1}\) onto itself._

Proof.: Being a linear bijection, \(T\) maps \(\operatorname{Span}_{\mathbb{R}}(\operatorname{Pert}(X))\) onto \(\operatorname{Span}_{\mathbb{R}}(\operatorname{Pert}(T(X)))\) for each \(X\in\partial\mathcal{C}\). Hence, by Lemma 2.13, \(T\) maps the subset \(\mathcal{S}_{1}\cup\mathcal{S}_{U}\subseteq\partial\mathcal{C}\) bijectively onto itself, so \(T\) either maps the path-connected components \(\mathcal{S}_{1}\), \(\mathcal{S}_{U}\) back onto themselves or swaps them. The latter case would imply that the real-linear span of \(\mathcal{S}_{1}\), that is,

\[\operatorname{Span}_{\mathbb{R}}\mathcal{S}_{1}=\mathbb{H}_{k}\oplus 0_{n-k},\]

would be mapped onto

\[\operatorname{Span}_{\mathbb{R}}\mathcal{S}_{U}=\mathbb{R}I_{k}\oplus\mathbb{M}_{n-k}\]

(the equality holds because each hermitian matrix \(H\) with \(\|H\|\leq 1\) is the midpoint of the two unitaries \(H\pm i\sqrt{I-H^{2}}\), so each complex matrix is a real-linear combination of unitaries). Thus \(\dim_{\mathbb{R}}\operatorname{Span}_{\mathbb{R}}\mathcal{S}_{1}=\dim_{\mathbb{R}}\operatorname{Span}_{\mathbb{R}}\mathcal{S}_{U}\), which gives \(k^{2}=1+2(n-k)^{2}\). Also, comparing the real dimensions of the \(\mathbb{C}\)-linear spans of \(\mathcal{S}_{1}\), \(\mathcal{S}_{U}\), which \(T\) maps in a like fashion, we would get in addition \(2k^{2}=2+2(n-k)^{2}\). Clearly these two equations are contradictory (together they force \(n=k\)). Thus \(T\) maps \(\mathcal{S}_{1}\) onto itself.

With the above preparation, we can present the following.

Proof of Theorem 2.1 for \(1<k<n\).: We first prove that if \(T\colon\mathbb{M}_{n}\to\mathbb{M}_{n}\) is a bijective linear map preserving parallel pairs with respect to the Ky-Fan \(k\)-norm for \(1<k<n\), then \(T^{-1}\) preserves rank one matrices. Let \(R=cUE_{11}V\in\mathbb{M}_{n}\), with \(c>0\) and \(U,V\) unitary, be a singular value decomposition of a rank-one matrix \(R\). Let \(B=U(I_{k}\oplus 0_{n-k})V\) and \(A=T^{-1}(B)\). Then, by Corollary 2.8, \(s_{k}(A)>s_{k+1}(A)\), so there exist unitary matrices \(\hat{U},\hat{V}\) such that \(\hat{U}A\hat{V}=\operatorname{diag}\left(s_{1}(A),\ldots,s_{n}(A)\right)=D_{1}\oplus D_{2}\in\mathcal{C}^{\circ}\). Introduce the parallelism preserver \(\hat{T}(Z)=U^{*}T(\hat{U}^{*}Z\hat{V}^{*})V^{*}\), which maps \(D_{1}\oplus D_{2}\in\mathcal{C}^{\circ}\) into \(U^{*}T(A)V^{*}=I_{k}\oplus 0_{n-k}\in\mathcal{C}^{\circ}\).
Then, \(\hat{T}\) satisfies the hypotheses of Propositions 2.11 and then also of Proposition 2.14. It follows that \(\hat{T}\), hence also \(\hat{T}^{-1}\), maps the set \(\mathcal{S}_{1}\ni cE_{11}\) onto itself. Therefore, \(\operatorname{rank}T^{-1}(R)=1\), and since \(R\) was an arbitrary rank-one matrix, we see that \(T^{-1}\) preserves the set of rank-one matrices. Since \(T^{-1}\) preserves rank-one matrices, by the classical result on rank-one preservers (see, e.g. [7] and [12]) there exist invertible matrices \(M,N\in\mathbb{M}_{n}\) so that \(T^{-1}(Y)=M^{-1}YN^{-1}\) or \(T^{-1}(Y)=M^{-1}Y^{t}N^{-1}\). Finally, the result follows using Lemma 2.4. ## 3. Results on real matrices In this section we characterize bijective linear maps \(T\colon\mathbb{M}_{n}(\mathbb{R})\to\mathbb{M}_{n}(\mathbb{R})\) preserving parallel pairs with respect to the Ky-Fan \(k\)-norm for \(1<k\leq n\). As in the complex case we again obtain that all of them are scalar multiples of isometries. Linear isometries \(T\colon\mathbb{M}_{n}(\mathbb{F})\to\mathbb{M}_{n}(\mathbb{F})\) of the Ky-Fan \(k\)-norm \(\|\cdot\|_{(k)}\) (\(1\leq k\leq n\)) on \(\mathbb{M}_{n}(\mathbb{F})\) were classified by the work of several authors; see [4, 2, 8, 9]. It turns out that, except for the isometries of the Ky-Fan 2-norm on 4-by-4 real matrices, they have the standard form. More precisely, the following holds: 1. \(T\) has the form \(X\mapsto UXV\) or \(X\mapsto UX^{t}V\) for some \(U,V\in\mathbb{M}_{n}(\mathbb{F})\) satisfying \(U^{*}U=V^{*}V=I_{n}\), i.e., \(U,V\) are unitary if \(\mathbb{F}=\mathbb{C}\), and \(U,V\) are orthogonal if \(\mathbb{F}=\mathbb{R}\), or 2. \((\mathbb{M}_{n}(\mathbb{F}),k)=(\mathbb{M}_{4}(\mathbb{R}),2)\) and \(T\) has the form \[X\mapsto UX^{+}V\quad\text{ or }\quad X\mapsto\mathbb{L}(UX^{+}V)\quad\text{ or }\quad X\mapsto-E\mathbb{L}(EUX^{+}V)\] where \(X^{+}\) denotes either the identity or the transpose map, and \(E=\operatorname{diag}\left(1,-1,-1,-1\right)\), \(U,V\in\mathbb{M}_{4}(\mathbb{R})\) are orthogonal, and \(\mathbb{L}\colon\mathbb{M}_{4}(\mathbb{R})\to\mathbb{M}_{4}(\mathbb{R})\) is defined by \[\mathbb{L}(X)=\tfrac{1}{2}\big{(}X+B_{1}XC_{1}+B_{2}XC_{2}+B_{3}XC_{3}\big{)},\] where \[B_{1}=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right)\otimes\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right),\quad C_{1}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)\otimes\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right),\] \[B_{2}=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\otimes\left(\begin{array}{cc}-1&0\\ 0&1\end{array}\right),\quad C_{2}=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right)\otimes\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\] \[B_{3}=\left(\begin{array}{cc}0&-1\\ 1&0\end{array}\right)\otimes\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad C_{3}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\otimes\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right).\] Let \(\mathrm{SO}(n)\subseteq\mathbb{M}_{n}(\mathbb{R})\) be the group of orthogonal matrices with determinant \(1\) and let \(\mathbb{S}_{n}\subseteq\mathbb{M}_{n}(\mathbb{R})\) be the set of \(n\)-by-\(n\) symmetric matrices. As shown in [4], the map \(\mathbb{L}\) satisfies the following. * \(\mathbb{L}\) is an involution. * \(\mathbb{L}\) maps rank-one matrices of norm-two in \(\mathbb{S}_{2}\oplus 0_{2}\) onto \(I_{2}\oplus\mathrm{SO}(2)\). * \(\mathbb{L}\) fixes the set of orthogonal matrices with negative determinant. 
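The listed properties of \(\mathbb{L}\) are easy to confirm numerically. The following sketch (ours, for illustration only) assembles the matrices \(B_{i},C_{i}\) from the Kronecker products above and checks on random samples that \(\mathbb{L}\) is an involution and a Ky-Fan \(2\)-norm isometry, and that it maps the norm-two rank-one matrix \(2E_{11}\in\mathbb{S}_{2}\oplus 0_{2}\) to \(I_{4}\in I_{2}\oplus\mathrm{SO}(2)\).

```python
import numpy as np

# Illustrative numerical check of the stated properties of the map L.
J = np.array([[0., -1.], [1., 0.]])
E = np.diag([1., -1.])
S = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

B1, C1 = np.kron(I2, J), np.kron(E, -J)
B2, C2 = np.kron(-J, -E), np.kron(-J, I2)
B3, C3 = np.kron(J, S), np.kron(S, -J)

def L(X):
    return 0.5 * (X + B1 @ X @ C1 + B2 @ X @ C2 + B3 @ X @ C3)

def ky_fan_2(X):
    return np.linalg.svd(X, compute_uv=False)[:2].sum()

rng = np.random.default_rng(1)
for _ in range(100):
    X = rng.standard_normal((4, 4))
    assert np.allclose(L(L(X)), X)                     # involution
    assert abs(ky_fan_2(L(X)) - ky_fan_2(X)) < 1e-10   # Ky-Fan 2-norm isometry

X = np.zeros((4, 4)); X[0, 0] = 2.0                    # rank one, norm two
assert np.allclose(L(X), np.eye(4))                    # L(2 E_11) = I_4
print("L is an involution and a Ky-Fan 2-norm isometry on all samples")
```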
The above facts will be used in the proof of our main theorem of this section.

**Theorem 3.1**.: _Let \(\|\cdot\|_{(k)}\) be the Ky-Fan \(k\)-norm on \(\mathbb{M}_{n}(\mathbb{R})\) for \(1<k\leq n\). Suppose \(T\colon\mathbb{M}_{n}(\mathbb{R})\to\mathbb{M}_{n}(\mathbb{R})\) is a bijective linear map which preserves parallel pairs with respect to \(\|\cdot\|_{(k)}\). Then \(T\) is a scalar multiple of a linear isometry of \((\mathbb{M}_{n}(\mathbb{R}),\|\cdot\|_{(k)})\). More precisely, there exist \(\gamma>0\) and orthogonal matrices \(U,V\in\mathbb{M}_{n}(\mathbb{R})\) such that \(T\) has the form_

\[X\mapsto\gamma UXV\qquad\text{ or }\qquad X\mapsto\gamma UX^{t}V,\]

_except when \((k,n)=(2,4)\), in which case \(T\) either takes one of the above two forms or_

\[X\mapsto\gamma P\mathbb{L}(UXV)\qquad\text{ or }\qquad X\mapsto\gamma P\mathbb{L}(UX^{t}V)\]

_for some scalar \(\gamma>0\) and orthogonal \(U,V,P\in\mathbb{M}_{4}(\mathbb{R})\)._

**Corollary 3.2**.: _Assume a linear bijection \(T\colon\mathbb{M}_{n}(\mathbb{R})\to\mathbb{M}_{n}(\mathbb{R})\) is such that \(\|A+B\|_{(k)}=\|A\|_{(k)}+\|B\|_{(k)}\) implies \(\|T(A)+T(B)\|_{(k)}=\|T(A)\|_{(k)}+\|T(B)\|_{(k)}\), where \(\|\cdot\|_{(k)}\) is a Ky-Fan \(k\)-norm. Then, \(T\) is a scalar multiple of a linear isometry of \((\mathbb{M}_{n}(\mathbb{R}),\|\cdot\|_{(k)})\) and hence takes one of the forms from Theorem 3.1._

Our presentation of the proof of Theorem 3.1 is similar to that of Theorem 2.1 in Section 2. Instead of \(\mathbb{H}_{k}\subseteq\mathbb{M}_{k}(\mathbb{C})\), we will be dealing with \(\mathbb{S}_{k}\subseteq\mathbb{M}_{k}(\mathbb{R})\), the set of all symmetric real \(k\)-by-\(k\) matrices. And instead of unitary matrices, we will be dealing with orthogonal matrices. Also, a scalar of modulus one is now real and therefore equal to \(1\) or \(-1\). The results with proofs follow along the same lines as in Section 2, except when \((k,n)=(2,4)\), where we have additional isometries.

We note that the proof for the trace-norm case follows exactly along the same lines as the complex case in Section 2. However, for Ky-Fan \(k\)-norms with \(2\leq k\leq n-1\), there are peculiarities in the real case. One obvious difference from the complex case is the presence of special isometries when \((k,n)=(2,4)\). Another more subtle difference is that the real span of psd matrices is \(\mathbb{S}_{k}\) in the real case, instead of \(\mathbb{H}_{k}\) in the complex case, with different (real) dimensions (\(k(k+1)/2\) versus \(k^{2}\)); another contrast is that the complex span of psd matrices is \(\mathbb{M}_{k}\). With this in mind, the analogous statement of Lemma 2.6 is the following (its proof proceeds along the same lines as for Lemma 2.6 and will not be repeated here). Note that \(X^{*}=X^{t}\) if \(X\) is a real matrix.

**Lemma 3.3**.: _Let \(A\in\mathbb{M}_{n}(\mathbb{R})\) and let \(U,V\in\mathbb{M}_{n}(\mathbb{R})\) be orthogonal matrices such that \(U^{*}AV=\operatorname{diag}\left(s_{1}(A),\ldots,s_{n}(A)\right)\). Then, the following holds._

(a) _Suppose \(s_{k}(A)>s_{k+1}(A)\). Then \(B\in P(A)\) if and only if \(U^{*}BV=B_{1}\oplus B_{2}\) such that \(\mu B_{1}\in\mathbb{M}_{k}(\mathbb{R})\) is psd for some \(\mu\in\{-1,1\}\) and \(|\mathrm{Tr}B_{1}|=\|B\|_{(k)}\). Moreover,_

\[\mathrm{Span}\,P(A)=U(\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}(\mathbb{R}))V^{*}.\]

_Consequently, \(\mathrm{Span}\,P(A)\) has dimension \(k(k+1)/2+(n-k)^{2}\)._

(b) _Suppose \(s_{k}(A)=s_{k+1}(A)\)._
Let_ \(p\) _and_ \(q\) _be the smallest integer and the largest integer such that_ \(s_{p+1}(A)=s_{k}(A)=s_{q}(A)\)_._

* _If_ \(q\neq n\) _or_ \(A\) _is invertible, then_ \(B\in P(A)\) _if and only if there is an orthogonal matrix_ \(W\) _of the form_ \(W=I_{p}\oplus W_{1}\oplus I_{n-q}\in\mathbb{M}_{p}(\mathbb{R})\oplus\mathbb{M}_{q-p}(\mathbb{R})\oplus\mathbb{M}_{n-q}(\mathbb{R})\) _such that_ \(B=UW(B_{1}\oplus B_{2})W^{*}V^{*}\)_,_ \(\mu B_{1}\in\mathbb{M}_{k}(\mathbb{R})\) _is psd for some_ \(\mu\in\{-1,1\}\)_, and_ \(\|B_{1}\|_{(k)}=\|B\|_{(k)}\)_._
* _If_ \(q=n\) _and_ \(A\) _is singular, then_ \(B\in P(A)\) _if and only if there exist orthogonal matrices_ \(W_{2},W_{3}\in\mathbb{M}_{n-p}(\mathbb{R})\) _such that_ \(B=U(I_{p}\oplus W_{2})(B_{1}\oplus B_{2})(I_{p}\oplus W_{3})V^{*}\)_,_ \(\mu B_{1}\in\mathbb{M}_{k}(\mathbb{R})\) _is psd for some_ \(\mu\in\{-1,1\}\)_, and_ \(\|B_{1}\|_{(k)}=\|B\|_{(k)}\)_._

_Consequently, \(\mathrm{Span}\,P(A)\) has dimension at least \(k(k+1)/2+k(q-k)+(n-k)^{2}\)._

Corollary 2.7 and Corollary 2.8 also hold for \(\mathbb{M}_{n}(\mathbb{R})\). Note that Item (ii) of Corollary 2.7 is not possible for matrices over the real field because for Item (ii) we would have \(\mathrm{Span}\,P(B)=\mathbb{M}_{k}(\mathbb{R})\oplus(X_{2}\mathbb{S}_{k}Y_{2}^{*})\neq\mathrm{Span}\,P(A)=\mathbb{S}_{k}\oplus\mathbb{M}_{k}\) (here, \(X_{2},Y_{2}\) are the second diagonal blocks of the orthogonal matrices \(X,Y\)).

We define \(\mathcal{C}\) in exactly the same way as in Definition 2.9. We just note that its affine hull is \(\mathrm{Span}_{\mathbb{R}}\,\mathcal{C}=\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}(\mathbb{R})\) and that it satisfies all the other properties mentioned in Proposition 2.10. Proposition 2.11 requires a separate proof over the real field. We restate the proposition for the real case here for convenience.

**Proposition 3.4**.: _Let \(T\colon\mathbb{M}_{n}(\mathbb{R})\to\mathbb{M}_{n}(\mathbb{R})\) be a bijective linear map which preserves parallel pairs relative to a Ky-Fan \(k\)-norm for \(1<k<n\). Assume there exist \(A,B\in\mathcal{C}^{\circ}\) such that \(T(A)=B\). Then_

\[T(\mathcal{C}^{\circ})=\mathcal{C}^{\circ}\quad\text{ and }\quad T(\partial\mathcal{C})=\partial\mathcal{C}.\]

_Moreover, \(T\) maps \(\mathrm{Span}\,P(B)=\mathrm{Span}\,P(A)\) bijectively onto itself._

The reason for a separate proof is, roughly speaking, that in the complex case \(P(A)\setminus\{0\}=\mathbb{T}(\mathcal{C}\setminus\{0\})\) (with \(\mathbb{T}:=\{\mu\in\mathbb{C}:|\mu|=1\}\)) is a connected set whenever \(A\in\mathcal{C}^{\circ}\), while in the real case the blunted cones \(\mathcal{C}\setminus\{0\}\) and \(-\mathcal{C}\setminus\{0\}\) are the two connected components of \(P(A)\setminus\{0\}\). In the subsequent proofs, we will typically write \(\mathbb{M}_{n}\) instead of \(\mathbb{M}_{n}(\mathbb{R})\) for notational simplicity.

Proof.: The last claim follows as in the proof of Proposition 2.11, except that now Lemma 3.3 gives that \(\mathrm{Span}\,P(A)=\mathrm{Span}\,P(B)=\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) is invariant for \(T\). We now show \(T(\mathcal{C}^{\circ})\subseteq\mathcal{C}^{\circ}\). Every matrix \(X\in\mathcal{C}\) is parallel to \(A\). So, \(T(X)=Y\in\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) is parallel to \(B\). So, \(Y=Y_{1}\oplus Y_{2}\) with \(\mu Y_{1}\) positive semi-definite for some \(\mu\in\{-1,1\}\) and \(s_{k}(Y_{1})\geq s_{1}(Y_{2})\). So, \(T(\mathcal{C})\subseteq\mathcal{C}\cup-\mathcal{C}\).
Now, \(\mathcal{C}\) and \(-\mathcal{C}\) are connected cones with intersection equal to \(\{0\}\). We may replace \(T\) by \(-T\) if needed, and assume that \(T(\mathcal{C})\subseteq\mathcal{C}\). Since \(T\colon\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\to\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) is bijective and linear, it will send interior points to interior points. We now claim that \(T^{-1}(\mathcal{C}^{\circ})\subseteq\mathcal{C}^{\circ}\). To see this, let \(Y\in\mathcal{C}^{\circ}\), so \(Y=Y_{1}\oplus Y_{2}\in\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) with \(Y_{1}\) positive definite and \(s_{k}(Y_{1})>s_{1}(Y_{2}).\) If \(T(X)=Y\), then, by Corollary 2.8, \(T^{-1}(\operatorname{Span}P(Y))=\operatorname{Span}P(X)\), \(s_{k}(X)>s_{k+1}(X)\), and \(X=X_{1}\oplus X_{2}\in\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\). Moreover, since the relative interior of \(\mathcal{C}\) is again a convex cone, for all \(t>0\), \[\tilde{Y}_{t}=(B_{1}\oplus B_{2})+t(Y_{1}\oplus Y_{2})\text{ satisfies }s_{k}(\tilde{Y}_{t})>s_{k+1}(\tilde{Y}_{t}). \tag{3.5}\] By Corollary 2.8, \(\tilde{X}_{t}=(A_{1}+tX_{1})\oplus(A_{2}+tX_{2})\) satisfies \(s_{k}(\tilde{X}_{t})>s_{k+1}(\tilde{X}_{t})\). Suppose, by way of contradiction, that \(s_{1}(X_{2})\) is one of the \(k\) largest singular values of \(X\). Then \(s_{1}(X_{2})>s_{k}(X_{1})\). For large \(t>0\), \(s_{1}(A_{2}+tX_{2})\) is one of the \(k\) largest singular values of \(A+tX\), but for very small \(t>0\), \(s_{k}(A_{1}+tX_{1})>s_{1}(A_{2}+tX_{2})\). So, there will be a \(t_{0}>0\) such that \(s_{1}(A_{1}+t_{0}X_{1})\geq\cdots\geq s_{k}(A_{1}+t_{0}X_{1})=s_{1}(A_{2}+t_{0}X_{2})\), contradicting \(s_{k}(\tilde{X}_{t_{0}})>s_{k+1}(\tilde{X}_{t_{0}})\). Thus \(s_{k}(X_{1})>s_{1}(X_{2})\). Observe that \(\operatorname{Span}P(Y)=\operatorname{Span}P(B)=\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) and this set is invariant for \(T^{-1}\). Then also \(\operatorname{Span}P(X)=\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\). Now choose an orthogonal matrix \(O_{1}\in\mathbb{M}_{k}\) such that \(D_{1}=O_{1}X_{1}O_{1}^{t}\) is diagonal. Let \(O=O_{1}\oplus I_{n-k}\) and let \(D_{1}^{(s)}=|D_{1}|^{-1}D_{1}\) (so \(D_{1}^{(s)}\) is an orthogonal diagonal matrix). Then, \(\operatorname{Span}(P(OXO^{t}))=(D_{1}^{(s)}\oplus I_{n-k})\operatorname{Span}(P(|D_{1}|\oplus X_{2}))=(D_{1}^{(s)}\mathbb{S}_{k})\oplus\mathbb{M}_{n-k}\), which equals \(O\operatorname{Span}P(X)O^{t}=\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) if and only if \(D_{1}^{(s)}=\pm I_{k}\). Hence \(X_{1}\) is either positive or negative definite. Thus \(T^{-1}\) maps the connected blunted cone \(\mathcal{C}\backslash\{0\}\) into the disconnected set \((\mathcal{C}\backslash\{0\})\cup((-\mathcal{C})\backslash\{0\})\), whence \(T^{-1}(\mathcal{C}^{\circ})\subseteq\mathcal{C}^{\circ}\). This shows that \(T(\mathcal{C}^{\circ})=\mathcal{C}^{\circ}\). Being a homeomorphism, \(T\) will then map \(\partial\mathcal{C}\), which coincides with the boundary of \(\mathcal{C}^{\circ}\), onto the boundary of \(T(\mathcal{C}^{\circ})=\mathcal{C}^{\circ}\), as claimed. As before, with Lemma 2.13 in mind, we define three sets: \[\mathcal{S}_{1} =\{X\in\partial\mathcal{C}:\operatorname{rank}X=1\}=\{aR\oplus 0_{n-k}:a>0,R\in\mathbb{S}_{k}\text{ is a rank-one projection}\},\] \[\mathcal{S}_{+} =\{a(I_{k}\oplus P):a>0,P\text{ is orthogonal with }\det P=1\},\] \[\mathcal{S}_{-} =\{a(I_{k}\oplus P):a>0,P\text{ is orthogonal with }\det P=-1\}.\] Observe that \(\mathcal{S}_{1},\mathcal{S}_{+},\mathcal{S}_{-}\) are mutually disjoint. Moreover, they are path-connected. 
For \(\mathcal{S}_{1}\) this holds because if \(X\in\mathcal{S}_{1}\setminus\mathbb{R}E_{11}\), then \(X=xx^{*}\) for some vector \(x\in\mathbb{R}^{k}\oplus 0_{n-k}\), linearly independent of \(e_{1}\), and hence \(\lambda\mapsto(\lambda e_{1}+(1-\lambda)x)(\lambda e_{1}+(1-\lambda)x)^{*}\) is a path in \(\mathcal{S}_{1}\) connecting \(X\) to \(E_{11}\). For \(\mathcal{S}_{+}\) its path-connectedness follows by using a parametrization of the special orthogonal group via \(e^{X}\) where the parameter \(X=-X^{t}\) runs over (the path connected set of) all skew-symmetric matrices, while \[\mathcal{S}_{-}=(I_{k}\oplus\operatorname{diag}\,(1,\ldots 1,-1))\mathcal{S}_{+} \tag{3.6}\] is clearly also path-connected. Moreover, by using the determinant we see that there can be no path between \(\mathcal{S}_{-}\) and \(\mathcal{S}_{+}\) and by using the continuity of singular values we see that neither \(\mathcal{S}_{1}\cup\mathcal{S}_{-}\) nor \(\mathcal{S}_{1}\cup\mathcal{S}_{+}\) are path connected. We will further need that \[\operatorname{Span}\mathcal{S}_{1}=\mathbb{S}_{k}\oplus 0_{n-k}\quad\text{and, for }n-k \geq 3,\quad\operatorname{Span}\mathcal{S}_{+}=\operatorname{Span}\mathcal{S }_{-}=\mathbb{R}I_{k}\oplus\mathbb{M}_{n-k}(\mathbb{R}). \tag{3.7}\] (To see the last two equalities, note that, for any orthonormal set \(y_{1},\ldots,y_{n-k}\in\mathbb{R}^{n-k}\), one has \(I_{k}\oplus Y_{\pm}\in\mathcal{S}_{+}\) where \(Y_{\pm}=\pm(y_{1}y_{1}^{t}+y_{2}y_{2}^{t})+\sum_{j=3}^{n-k}y_{j}y_{j}^{t}\); thus \(0_{k}\oplus R\in\operatorname{Span}\mathcal{S}_{+}\) for every rank-two projection \(R\). One then deduces that \(0_{k}\oplus R\in\operatorname{Span}\mathcal{S}_{+}\) for every rank-one projection \(R\), giving \(\operatorname{Span}\mathcal{S}_{+}=\mathbb{R}I_{k}\oplus\mathbb{M}_{n-k}( \mathbb{R})\). One applies (3.6) to get the first equality.) In contrast, if \(n-k=2\), then \[\operatorname{Span}\mathcal{S}_{+}=\mathbb{R}I_{k}\oplus\left(\mathbb{R} \left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)+\mathbb{R}\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\right)\quad\text{and}\quad\operatorname{Span} \mathcal{S}_{-}=\mathbb{R}I_{k}\oplus\left(\mathbb{R}\left(\begin{smallmatrix} 1&0\\ 0&-1\end{smallmatrix}\right)+\mathbb{R}\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\right) \tag{3.8}\] while if \(n-k=1\), then \[\operatorname{Span}\mathcal{S}_{+}=\mathbb{R}I_{n}\quad\text{and}\quad\ \operatorname{Span}\mathcal{S}_{-}=\mathbb{R}\operatorname{diag}\left(1, \ldots,1,-1\right). \tag{3.9}\] The next result is the counterpart to Proposition 2.14 over the field of real numbers. **Proposition 3.5**.: _Suppose \(1<k<n\) and \(T\) is a bijective linear map on \(\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}(\mathbb{R})\) satisfying \(T(\mathcal{C})=\mathcal{C}\) and \(T(\partial\mathcal{C})=\partial\mathcal{C}\). Then \(T\) permutes the sets \(\mathcal{S}_{1}\), \(\mathcal{S}_{+}\), \(\mathcal{S}_{-}\) among themselves; actually, \(T\) fixes \(\mathcal{S}_{1}\) except possibly when \((k,n)=(2,4)\)._ Proof.: Being a linear bijection, \(T\) maps \(\operatorname{Span}(\operatorname{Pert}(X))\) onto \(\operatorname{Span}(\operatorname{Pert}(T(X)))\) for each \(X\in\partial\mathcal{C}\). Hence, by Proposition 2.13, \(T\) maps the subset \(\mathcal{S}_{1}\cup\mathcal{S}_{+}\cup\mathcal{S}_{-}\) bijectively onto itself. 
Since \(\mathcal{S}_{1}\), \(\mathcal{S}_{+}\) and \(\mathcal{S}_{-}\) are all connected and disjoint, their images under the homeomorphism \(T\) are also connected and disjoint. Then \(T\) merely permutes the sets \(\mathcal{S}_{1}\), \(\mathcal{S}_{+}\), and \(\mathcal{S}_{-}\) among themselves. Consequently, \(T\) also permutes the sets \((\operatorname{Span}\mathcal{S}_{1})\), \((\operatorname{Span}\mathcal{S}_{+})\), and \((\operatorname{Span}\mathcal{S}_{-})\) among themselves with the same permutation as for \(\mathcal{S}_{1}\), \(\mathcal{S}_{+}\), \(\mathcal{S}_{-}\). Now, if \(n-k\geq 3\), then \((\operatorname{Span}\mathcal{S}_{1})=\mathbb{S}_{k}\oplus 0_{n-k}\), while \(\operatorname{Span}\mathcal{S}_{+}=\operatorname{Span}\mathcal{S}_{-}=\mathbb{R}I_{k}\oplus\mathbb{M}_{n-k}(\mathbb{R})\). Since \(T\) permutes \(\mathcal{S}_{1},\mathcal{S}_{-},\mathcal{S}_{+}\) among themselves and fixes the cone \(\mathcal{C}\) we see that, unless \(T\) fixes \(\mathcal{S}_{1}\), it would map the cone \[(\operatorname{Span}\mathcal{S}_{1})\cap\mathcal{C}=\operatorname{psd}_{k}\oplus 0_{n-k}\] bijectively onto the cone \[(\operatorname{Span}\mathcal{S}_{+})\cap\mathcal{C}=(\operatorname{Span}\mathcal{S}_{-})\cap\mathcal{C}=\{\lambda I_{k}\oplus X:s_{1}(X)\leq\lambda\}.\] Then also the extreme rays of the cone \((\operatorname{Span}\mathcal{S}_{1})\cap\mathcal{C}\) would be mapped onto the extreme rays of the cone \((\operatorname{Span}\mathcal{S}_{+})\cap\mathcal{C}\). It is well-known [3, Theorem 3.8] that the extreme rays of the first cone are of the form \(\mathbb{R}_{+}xx^{*}\), i.e., the collection of all extreme rays of the first cone is \(\mathcal{S}_{1}\). On the other hand, the collection of all extreme rays of the second cone contains \(\mathcal{S}_{-}\cup\mathcal{S}_{+}\). (To see this, let \(P\in\mathbb{M}_{n-k}(\mathbb{R})\) be orthogonal. Suppose \(I_{k}\oplus P=\frac{1}{2}(\alpha I_{k}\oplus X+\beta I_{k}\oplus Y)\) is an average of two elements from the second cone. Then \(\alpha+\beta=2\) and \(P=\frac{\alpha}{\alpha+\beta}(X/\alpha)+\frac{\beta}{\alpha+\beta}(Y/\beta)\) is a convex combination of two matrices \(X/\alpha\), \(Y/\beta\) in the unit ball. Since orthogonal matrices are extreme points of the unit ball [2, Theorem 3], \(X/\alpha=Y/\beta=P\), so \(\alpha I_{k}\oplus X=\alpha(I_{k}\oplus P)\) and \(\beta I_{k}\oplus Y=\beta(I_{k}\oplus P)\). Thus each element of \(\mathcal{S}_{-}\cup\mathcal{S}_{+}\) lies in an extreme ray of the second cone, as claimed.) But then \(T\) would map the collection of all extreme rays of the first cone, \(\mathcal{S}_{1}\), onto a set which contains \(\mathcal{S}_{-}\cup\mathcal{S}_{+}\), a contradiction to the fact that \(T\) permutes these three sets. This implies that \(T(\mathcal{S}_{1})=\mathcal{S}_{1}\) if \(n-k\geq 3\). If \(n-k=2\) and \(k\geq 3\), then \(\dim(\operatorname{Span}\mathcal{S}_{1})=\frac{k(k+1)}{2}>3=\dim(\operatorname{Span}\mathcal{S}_{+})=\dim(\operatorname{Span}\mathcal{S}_{-})\) (see (3.8)), so again \(T(\mathcal{S}_{1})=\mathcal{S}_{1}\). If \(n-k=1\) we argue in the same manner (see (3.9)). The only remaining possibility is \((k,n)=(2,4)\). Proof of Theorem 3.1.: We only need to prove that there exists a linear isometry \(\Phi\colon(\mathbb{M}_{n},\|\cdot\|_{(k)})\to(\mathbb{M}_{n},\|\cdot\|_{(k)})\) such that \((T\circ\Phi)^{-1}\) preserves the set of rank-one matrices. 
Once this is done, \((T\circ\Phi)\) also preserves rank-one matrices and takes the standard form \(X\mapsto MXN\) or \(X\mapsto MX^{t}N\) (see [12]) and the rest follows from Lemma 2.4. To this end, let \(R_{1}=xy^{*}\) and \(R_{2}=uv^{*}\) be two arbitrary rank-one matrices in \(\mathbb{M}_{n}\) with a unit Ky-Fan norm. We can assume that the vectors \(x\), \(y\), \(u\), \(v\) are all normalized. Decompose \[u=cx+sx_{2}\quad\text{ and }\quad v=\hat{c}y+\hat{s}y_{2};\qquad|c|^{2}+|s|^{2 }=|\hat{c}|^{2}+|\hat{s}|^{2}=1,\quad s,\hat{s}\geq 0\] where \(x_{2},y_{2}\in\mathbb{R}^{n}\) are unit vectors orthogonal to \(x\) and \(y\), respectively. Choose a point \((\alpha,\beta)\) on the unit circle of \(\mathbb{R}^{2}\) with \((c-\hat{c})\alpha+(s-\hat{s})\beta=0\) and form a norm-one matrix \[R=(\alpha x+\beta x_{2})(\alpha y+\beta y_{2})^{*}.\] Then there exist orthogonal matrices \(U\), \(V\) such that \(Ux=Vy=e_{1}\) and \(Ux_{2}=Vy_{2}=e_{2}\), the first two standard basis vectors of \(\mathbb{R}^{n}\), and we have \[UR_{1}V^{*}=E_{11}\quad\text{ and }\quad URV^{*}=(\alpha e_{1}+\beta e_{2})( \alpha e_{1}+\beta e_{2})^{*}\in\mathbb{S}_{2}\oplus 0_{n-2}\subseteq\mathbb{S}_{ k}\oplus 0_{n-k}. \tag{3.10}\] Moreover, by the choice of \((\alpha,\beta)\) we further have \(u^{*}(\alpha x+\beta x_{2})=\alpha c+\beta s=v^{*}(\alpha y+\beta y_{2})\) so there exist orthogonal \(U_{2},V_{2}\in\mathbb{M}_{n}\) with \[U_{2}RV_{2}^{*}=E_{11}\quad\text{ and }\quad U_{2}R_{2}V_{2}^{*}\in\mathbb{S}_{ 2}(\mathbb{R})\oplus 0_{n-2}. \tag{3.11}\] Consider first the pair \((R_{1},R)\) in (3.10). Extend \(R_{1}\) to a matrix \(Y=U^{*}(E_{11}+\cdots+E_{kk})V=U^{*}(I_{k}\oplus 0_{n-k})V\) and let \(A=T^{-1}(Y)\). Since \(s_{k}(Y)>s_{k+1}(Y)\), Corollary 2.8 implies that \(s_{k}(A)>s_{k+1}(A)\) so there exist orthogonal \(\hat{U},\hat{V}\) such that \(A=\hat{U}(A_{1}\oplus A_{2})\hat{V}\in\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) with \(A_{1}\) positive definite and \(s_{k}(A_{1})>s_{1}(A_{2})\). By replacing \(T\) with a map \[\hat{T}\colon X\mapsto UT(\hat{U}X\hat{V})V^{*}\] we achieve that \(\hat{T}(A_{1}\oplus A_{2})=I_{k}\oplus 0_{n-k}\). Then, by Corollary 2.8, \(\hat{T}\) maps \(\operatorname{Span}P(A_{1}\oplus A_{2})=\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\) onto \(\operatorname{Span}P(I_{k}\oplus 0_{n-k})=\mathbb{S}_{k}\oplus\mathbb{M}_{n-k}\), so the conclusions of Propositions 2.11 and 3.5 are valid. We next consider two cases. Assume \((k,n)\neq(2,4)\). Then, by Proposition 3.5, \(\hat{T}\), hence also \(\hat{T}^{-1}\), fixes the set \(\mathcal{S}_{1}\) which by definition consists of all psd rank-one matrices in \(\mathbb{S}_{k}\oplus 0_{n-k}\). It follows that \(\hat{T}^{-1}(E_{11})=\hat{U}^{*}T^{-1}(R_{1})\hat{V}^{*}\) is of rank-one. Since \(R_{1}\) was arbitrary, the linear bijection \(T^{-1}\) preserves rank-one matrices and the claim follows with the isometry \(\Phi(X)=X\). Assume \((k,n)=(2,4)\). Then either \(\hat{T}^{-1}\) fixes \(\mathcal{S}_{1}\) or else maps it onto \(\mathcal{S}_{+}\) or onto \(\mathcal{S}_{-}\). In the last two cases, \(\hat{T}^{-1}\) maps every rank-one in \(\mathbb{S}_{2}\oplus 0_{2}\) into an invertible matrix of the form \(\mathbb{R}(I_{2}\oplus\mathrm{O}(2))\), where \(\mathrm{O}(2)\subseteq\mathbb{M}_{2}\) denotes the group of \(2\)-by-\(2\) orthogonal matrices. By (3.10) and the definition of \(\hat{T}\) it follows that \(T^{-1}\) either maps \(R_{1}\) and \(R\) simultaneously into rank-one matrices or else simultaneously into invertible matrices. 
We now repeat the above procedure on the pair \((R,R_{2})\) using (3.11) instead of (3.10) to see that either \(T^{-1}(R_{1})\) and \(T^{-1}(R_{2})\) are both rank-one matrices, or else they are both invertible ones. Then, by the arbitrariness of \(R_{1}\) and \(R_{2}\), either \(T^{-1}\) preserves the set of rank-one matrices or else it maps it into the set of invertible ones and the same applies to \(\hat{T}^{-1}\). In the former case we are done by using the isometry \(\Phi(X)=X\). In the latter case, \(\hat{T}^{-1}\) maps the set \(\mathcal{S}_{1}\) onto \(\mathcal{S}_{+}\) or onto \(\mathcal{S}_{-}\). Composing \(\hat{T}\) with an involutive isometry \(\mathbb{L}\) or isometry \(\mathbf{L}\colon X\mapsto\operatorname{diag}\left(1,1,1,-1\right)\mathbb{L}(X)\), respectively, we get that \((\hat{T}\circ\mathbb{L})^{-1}\) or \((\hat{T}\circ\mathbf{L})^{-1}\) fixes \(\mathcal{S}_{1}\), so preserves rank-one. We finally compose \(T\) with the isometry \(\Phi(X)=\hat{U}\mathbb{L}(X)\hat{V}\) or the isometry \(\Phi(X)=\hat{U}\mathbf{L}(X)\hat{V}\); its inverse, \((T\circ\Phi)^{-1}\) will map rank-one matrices to rank-one matrices, as claimed. ## 4. Final remarks and future study 1. In all our results on Ky-Fan norms, if a bijective linear \(T\) preserves parallel pairs, then its inverse does so also. However, this is not true for a general norm. For example, consider on \(\mathbb{R}^{2}\) the truncated Euclidean norm \[\|(a,b)\|_{\mathrm{trE}}:=\Big{\|}\Big{(}\|(a,b)\|_{2}\,,\,\big{\|}(\sqrt{2}a,b)\big{\|}_{\infty}\Big{)}\Big{\|}_{\infty}=\begin{cases}\sqrt{a^{2}+b^{2}}; &|b|>|a|\\ \sqrt{2}|a|;&\text{otherwise}\end{cases}\] One can show that \(x\|y\) if and only if \(x,y\) are linearly dependent or else they both belong to \(\sigma\cup(-\sigma)\), where \(\sigma\) is the pointed convex cone spanned by a vertical line segment through \((1,1)\) and \((1,-1)\). Then any linear map \(T\) which maps \(\sigma\) into itself is a parallelism preserver with respect to \(\|\cdot\|_{\mathrm{trE}}\). But if \(T(\sigma)\) is properly contained in \(\sigma\), it will not preserve parallel pairs in both directions. A concrete example is \(T=\Big{(}\begin{smallmatrix}1&0\\ 12&5\\ \end{smallmatrix}\Big{)}\). In another venue, \(T=\frac{1}{2}\left(\begin{smallmatrix}3&1\\ 1&3\\ \end{smallmatrix}\right)\) preserves parallel pairs of \(\|\cdot\|_{\mathrm{trE}}\) in both directions, but it is not a scalar multiple of an isometry of \(\|\cdot\|_{\mathrm{trE}}\). 2. Observe that without assuming linearity or bijectivity there are more pathological examples. A trivial one is \(T(A)=f(A)A\) where \(f\colon\mathbb{M}_{n}\to\mathbb{C}\) can be any function (this is bijective when \(f\) maps every line, spanned by any fixed nonzero matrix \(A\), bijectively onto \(\mathbb{C}\setminus\{0\}\)); a slightly more subtle one is \(T(A)=F_{1}(A)\oplus 0\), where \(F_{1}\colon\mathbb{M}_{n}\to\mathrm{psd}_{k}\) is an arbitrary map. Finally, the proofs are much easier if one assumes \(T\) preserves parallel pairs in both directions. 3. By our results, the bijective linear preservers of parallel pairs and of matrix pairs \((A,B)\) satisfying \(\|A+B\|_{(k)}=\|A\|_{(k)}+\|B\|_{(k)}\) are both positive multiples of isometries. However, this is not always the case for other norms. For example, in [6], it was shown that the same conclusion holds on \(\mathbb{M}_{n}\) equipped with the spectral norm (i.e., Ky-Fan 1-norm) if \(n\geq 3\). 
However, when \(n=2\), then in the complex case there are additional maps preserving parallel pairs but no additional maps preserving matrix pairs \((A,B)\) satisfying \(\|A+B\|=\|A\|+\|B\|\) while in the real case there are additional maps preserving parallel pairs that will also preserve matrix pairs \((A,B)\) satisfying \(\|A+B\|=\|A\|+\|B\|\). 4. One may consider bijective linear maps on complex or real rectangular matrices \(\mathbb{M}_{m,n}\) preserving parallel pairs with respect to the Ky-Fan \(k\)-norm. They should be scalar multiples of isometries. 5. More generally, one may consider general unitarily invariant norms on \(\mathbb{M}_{m,n}\) that are not strictly convex. For example, the \((c,p)\)-norm \(\|A\|=(\sum_{j=1}^{k}c_{j}s_{j}(A)^{p})^{1/p}\). 6. It would be interesting to extend our results to \(B(H)\), the set of bounded linear operators acting on a real or complex Hilbert space \(H\), under the Ky-Fan \(k\)-norm \[||A||_{(k)}=\sup\{||X^{*}AY||_{(k)}:X^{*}X=Y^{*}Y=I_{k}\}.\] 7. One may also consider extending the results to infinite dimensional operators in other spaces, such as norm ideals of compact operators or standard operator algebras. 8. Another direction is to extend the results to other matrix or operator algebras. For instance, for finite dimensional irreducible algebras, the problem reduces to the study of real or complex square matrices. For a real matrix algebra one needs to consider the algebra of complex matrices or quaternion matrices over reals. One may also consider the problem on the algebra of triangular matrices. The infinite dimensional extension to nested algebras would be of interest too. ## Acknowledgment The research of Li was supported by the Simons Foundation Grant 851334. This research was supported in part by the Slovenian Research Agency (research program P1-0285, research project N1-0210, and bilateral project BI-US-22-24-129).
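As a numerical aside (a minimal sketch of ours, not code accompanying the paper), the following Python snippet builds a pair of matrices of the block form appearing in Lemma 3.3 and checks that the triangle equality of Corollary 3.2 holds for it, and that the equality is preserved by a map of the standard form \(X\mapsto\gamma UXV\).

```python
import numpy as np

def ky_fan(A, k):
    """Ky-Fan k-norm: sum of the k largest singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sort(s)[::-1][:k].sum()

rng = np.random.default_rng(0)
n, k = 4, 2

# Two matrices of the form U(B1 + B2)V^t with B1 psd and s_k(B1) >= s_1(B2),
# as in Lemma 3.3; such a pair satisfies ||A + B||_(k) = ||A||_(k) + ||B||_(k).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

def block(psd_eigs, tail):
    B1 = np.diag(psd_eigs)   # psd block in S_k
    B2 = np.diag(tail)       # arbitrary block with small singular values
    D = np.block([[B1, np.zeros((k, n - k))],
                  [np.zeros((n - k, k)), B2]])
    return U @ D @ V.T

A = block([3.0, 2.0], [0.5, -0.3])
B = block([1.0, 1.0], [0.2, 0.1])
print(np.isclose(ky_fan(A + B, k), ky_fan(A, k) + ky_fan(B, k)))  # True

# A map of the standard form X -> gamma * W X Z preserves such pairs.
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
Z, _ = np.linalg.qr(rng.standard_normal((n, n)))
T = lambda X: 1.7 * W @ X @ Z
print(np.isclose(ky_fan(T(A) + T(B), k),
                 ky_fan(T(A), k) + ky_fan(T(B), k)))              # True
```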
2309.04144
Tapping into Permutation Symmetry for Improved Detection of k-Symmetric Extensions
Symmetric extensions are essential in quantum mechanics, providing a lens to investigate the correlations of entangled quantum systems and to address challenges like the quantum marginal problem. Though semi-definite programming (SDP) is a recognized method for handling symmetric extensions, it grapples with computational constraints, especially due to the large real parameters in generalized qudit systems. In this study, we introduce an approach that adeptly leverages permutation symmetry. By fine-tuning the SDP problem for detecting \( k \)-symmetric extensions, our method markedly diminishes the searching space dimensionality and trims the number of parameters essential for positive definiteness tests. This leads to an algorithmic enhancement, reducing the complexity from \( O(d^{2k}) \) to \( O(k^{d^2}) \) in the qudit \( k \)-symmetric extension scenario. Additionally, our approach streamlines the process of verifying the positive definiteness of the results. These advancements pave the way for deeper insights into quantum correlations, highlighting potential avenues for refined research and innovations in quantum information theory.
Youning Li, Chao Zhang, Shi-Yao Hou, Zipeng Wu, Xuanran Zhu, Bei Zeng
2023-09-08T06:05:56Z
http://arxiv.org/abs/2309.04144v1
# Tapping into Permutation Symmetry for Improved Detection of \(k\)-Symmetric Extensions

###### Abstract

Symmetric extensions are essential in quantum mechanics, providing a lens to investigate the correlations of entangled quantum systems and to address challenges like the quantum marginal problem. Though semi-definite programming (SDP) is a recognized method for handling symmetric extensions, it grapples with computational constraints, especially due to the large number of real parameters in generalized qudit systems. In this study, we introduce an approach that adeptly leverages permutation symmetry. By fine-tuning the SDP problem for detecting \(k\)-symmetric extensions, our method markedly diminishes the searching space dimensionality and trims the number of parameters essential for positive definiteness tests. This leads to an algorithmic enhancement, reducing the complexity from \(O(d^{2k})\) to \(O(k^{d^{2}})\) in the qudit \(k\)-symmetric extension scenario. Additionally, our approach streamlines the process of verifying the positive definiteness of the results. These advancements pave the way for deeper insights into quantum correlations, highlighting potential avenues for refined research and innovations in quantum information theory.

## I Introduction

In the intricate domain of quantum mechanics, symmetric extensions stand out as a cornerstone, providing a structured mathematical lens to explore the nature and behavior of quantum states. A bipartite state \(\rho_{AB}\) is deemed symmetrically extendible if there exists a multipartite density matrix \(\rho_{A_{1}A_{2}\ldots A_{m}B_{1}B_{2}\ldots B_{n}}\) such that each of its reduced density matrices, when traced over its complements, matches \(\rho_{AB}\): \[\mathrm{Tr}_{(A_{j}B_{k})^{c}}(\rho_{A_{1}A_{2}\ldots A_{m}B_{1}B_{2}\ldots B_{n}})=\rho_{AB},\;\forall j,k. \tag{1}\] Delving into the importance of symmetric extensions, they serve as a tangible framework to probe the nature of quantum entanglement, offering a means to understand the profound correlations present in entangled quantum systems [1; 2; 3]. Furthermore, they pave the way for addressing the quantum marginal problem, which investigates the necessary conditions under which a set of density matrices can correspond to a global state [4; 5]. This problem's universality is showcased by its resonance with the \(N\)-representability problem in quantum chemistry [6; 7]. A common approach for identifying a \(k\)-extension is to cast the problem as a semi-definite programming (SDP) problem [8; 9; 10]. SDP, a form of convex optimization, involves minimizing a linear function subject to the constraints that the solution lies in the intersection of the cone of positive semidefinite matrices and an affine space. Given that density matrices are inherently positive semidefinite, SDP has found extensive application in quantum information problems [11; 12]. By leveraging the properties of SDP, we can devise efficient algorithms for detecting \(k\)-extensions. For example, the algorithm used in QETLAB that determines whether a bipartite quantum state \(\rho_{AB}\) is \(k\)-symmetric extendible for the \(d\)-dimensional subsystem \(B\) has the form: \[\begin{split}&\mathrm{find}\;\bar{\rho}\\ &\;s.t.\;\left\{\begin{array}{l}\bar{\rho}\succeq 0,\\ \mathrm{Tr}_{(AB_{1})^{c}}\left(\bar{\rho}\right)=\rho_{AB},\\ \left(\mathbbm{1}_{A}\otimes P_{ij}\right)\bar{\rho}\left(\mathbbm{1}_{A}\otimes P_{ij}\right)=\bar{\rho},\quad\forall i,j,\end{array}\right.\end{split} \tag{2}\]
where \(\bar{\rho}=\rho_{AB_{1}B_{2}\ldots B_{k}}\) is the \(k\)-symmetric extension of \(\rho_{AB}\) and the operator \(P_{ij}\) implements the element of the permutation group \(S_{k}\) exchanging \(B_{i}\) and \(B_{j}\). However, the substantial number of real parameters, notably in the general qudit scenario, can present formidable computational obstacles. This is largely due to the requirement of the entire extended Hilbert space \(\mathcal{H}_{AB_{1}B_{2}\ldots B_{k}}\), whose parameter count scales as \(O((d^{2})^{k})\). Such exponential scaling can make calculations intractable for larger systems or higher dimensions. In this work we present a new optimization scheme, which not only exploits the permutation symmetry to reduce the total number of parameters but also optimizes the subroutine that determines positive definiteness, where the number of parameters for a single optimization grows no faster than \[\prod_{m=1}^{d-1}\frac{1}{m!}\left(1+\frac{2k}{d(d+1)}\right)^{d(d-1)/2},\] for large \(k\) and \(d\). A testament to our methodology's effectiveness is its application to the renowned bipartite Werner state, where it exhibits a pronounced acceleration in comparison to the established QETLAB software. This enhancement equips us to approach larger \(k\)-extension challenges with unparalleled efficiency. Furthermore, our calculations have explicitly determined the dimensions of the searching space and the number of parameters required for positive definiteness tests. This efficiency stems from our algorithm's ability to undergo multiple distinct positive-definiteness tests, each corresponding to a unique Young diagram. Each individual test, though involving a significantly smaller matrix, culminates in a comprehensive and efficient analysis. Our findings contribute to a clearer understanding of quantum systems, potentially aiding in the design of more proficient quantum algorithms and enhancing our grasp of quantum information theory. The structure of our paper is as follows: Sec. II delves into the intricacies of the 3-extension of the qutrit case as an illustrative example. Sec. III elucidates our methodology to compute the reduced density matrix of global states for a general \(k\)-extendible state, and underscores our rationale for dimensionality reduction. Sec. IV compares our new algorithm with the traditional one. Concluding insights and discussions are furnished in Sec. V. ## II Qutrit example Before starting to solve the general problem, we first take a look at two simple examples, the 2- and 3-extensions of a qutrit. In fact, these two cases clearly demonstrate why our new algorithm can greatly reduce the size of the searching space. We are going to investigate how many real parameters are needed to fully describe the global symmetric extended matrix \(\rho_{A\bar{B}}\), which lies in the Hilbert space \(V=V_{A}\otimes\mathcal{T}\) constituted by part \(A\) and \(\bar{B}\), with the constraint that \(\text{Tr}_{(AB_{1})^{c}}(\rho_{A\bar{B}})=\rho_{AB}\). ### 2-qutrit In this case, \(\mathcal{T}\equiv V^{(1)}\otimes V^{(2)}\) is constituted by the two qutrits \(B_{1}\) and \(B_{2}\), where \(V^{(1)}\) and \(V^{(2)}\) represent \(B_{1}\) and \(B_{2}\), respectively. \(\mathcal{T}\) is spanned by the nine vectors \(\{|00\rangle,|01\rangle,\cdots,|22\rangle\}\). 
According to permutation symmetry, \(\mathcal{T}\) can be decomposed into two invariant orthogonal subspaces: the 6-dimensional bosonic one \(V^{B}\) and the 3-dimensional fermionic one \(V^{F}\). It is clear that no cross term between the bosonic and the fermionic subspace can appear, therefore we only have to consider the part \(\bar{\rho}_{A\bar{B}}\) supported on \(\text{End}(V_{A})\otimes\text{End}(V^{B})\) and the part \(\tilde{\rho}_{A\bar{B}}\) supported on \(\text{End}(V_{A})\otimes\text{End}(V^{F})\). The general form of \(\bar{\rho}_{A\bar{B}}\) reads \[\bar{\rho}_{A\bar{B}}=\sum_{\alpha,\beta=1}^{6}\rho_{A}^{(\alpha,\beta)}\otimes\bar{p}_{\alpha,\beta}|\phi_{\bar{B}}^{\alpha}\rangle\langle\phi_{\bar{B}}^{\beta}|, \tag{1}\] where \(\rho_{A}^{(\alpha,\beta)}\in\text{End}(V_{A})\), \(\bar{p}_{\alpha,\beta}\) is a complex number and \[\{|\phi_{\bar{B}}^{\alpha}\rangle\} = \left\{|00\rangle,|11\rangle,|22\rangle,\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\frac{1}{\sqrt{2}}(|02\rangle+|20\rangle),\frac{1}{\sqrt{2}}(|12\rangle+|21\rangle)\right\}.\] The reduced density matrix can be obtained by taking the partial trace over \(B_{2}\), \[2\times\text{Tr}_{V^{(2)}}(\bar{\rho}_{A\bar{B}})=\sum_{a,b=0}^{2}\bar{M}_{ab}\otimes|a\rangle\langle b|, \tag{2}\] where \(\bar{M}_{ab}\) is given by \[\begin{split}\bar{M}_{00}&=2\bar{p}_{1,1}\rho_{A}^{(1,1)}+\bar{p}_{4,4}\rho_{A}^{(4,4)}+\bar{p}_{5,5}\rho_{A}^{(5,5)},\\ \bar{M}_{11}&=2\bar{p}_{2,2}\rho_{A}^{(2,2)}+\bar{p}_{4,4}\rho_{A}^{(4,4)}+\bar{p}_{6,6}\rho_{A}^{(6,6)},\\ \bar{M}_{22}&=2\bar{p}_{3,3}\rho_{A}^{(3,3)}+\bar{p}_{5,5}\rho_{A}^{(5,5)}+\bar{p}_{6,6}\rho_{A}^{(6,6)},\\ \bar{M}_{01}&=\bar{M}_{10}^{\dagger}=\sqrt{2}\bar{p}_{4,1}\rho_{A}^{(4,1)}+\sqrt{2}\bar{p}_{2,4}\rho_{A}^{(2,4)}+\bar{p}_{6,5}\rho_{A}^{(6,5)},\\ \bar{M}_{02}&=\bar{M}_{20}^{\dagger}=\sqrt{2}\bar{p}_{5,1}\rho_{A}^{(5,1)}+\sqrt{2}\bar{p}_{3,5}\rho_{A}^{(3,5)}+\bar{p}_{6,4}\rho_{A}^{(6,4)},\\ \bar{M}_{12}&=\bar{M}_{21}^{\dagger}=\sqrt{2}\bar{p}_{6,2}\rho_{A}^{(6,2)}+\sqrt{2}\bar{p}_{3,6}\rho_{A}^{(3,6)}+\bar{p}_{5,4}\rho_{A}^{(5,4)}.\end{split} \tag{3}\] It is noticed that the RHS of Eq. (2) does not contain every \(\bar{p}_{\alpha,\beta}\). In fact, the nonzero coefficients before \(\bar{p}_{\alpha,\beta}\) in the term \(|a\rangle\langle b|\) are exactly the nonzero entries of the representation matrix \(T_{(ab)}\) over this bosonic invariant subspace.1 Footnote 1: You may find the representation of \(su(3)\) for this case and the following case in standard textbooks on group theory, such as [13; 14]. 
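To make the coefficient pattern in Eqs. (2) and (3) easy to verify, here is a small numpy sketch (ours, not from the original text). It constructs the six bosonic basis vectors listed above and evaluates \(2\times\mathrm{Tr}_{V^{(2)}}(|\phi_{\bar{B}}^{\alpha}\rangle\langle\phi_{\bar{B}}^{\beta}|)\); the entries of the resulting \(3\times 3\) matrix are the coefficients with which \(\bar{p}_{\alpha,\beta}\) enters the \(\bar{M}_{ab}\).

```python
import numpy as np

d = 3
e = np.eye(d)

def sym(i, j):
    """Normalized symmetric two-qutrit vector (|ij> + |ji>)/sqrt(2), or |ii>."""
    v = np.kron(e[i], e[j]) + np.kron(e[j], e[i])
    return v / np.linalg.norm(v)

# Bosonic basis, ordered as in the text: |00>,|11>,|22>,(01),(02),(12)
phi = [sym(0, 0), sym(1, 1), sym(2, 2), sym(0, 1), sym(0, 2), sym(1, 2)]

def ptrace_B2(X):
    """Partial trace over the second qutrit of a 9x9 matrix."""
    return np.trace(X.reshape(d, d, d, d), axis1=1, axis2=3)

# 2 * Tr_{B2}(|phi^a><phi^b|): its entries are the coefficients of p_{a,b}.
# (1,1) gives coefficient 2, (4,1) and (2,4) give sqrt(2), (6,5) gives 1,
# matching the pattern of Eq. (3).  Indices below are 1-based as in the text.
for a, b in [(1, 1), (4, 1), (2, 4), (6, 5)]:
    M = 2 * ptrace_B2(np.outer(phi[a - 1], phi[b - 1]))
    print((a, b), np.round(M, 3))
```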
Similarly, one can write down the general form of \(\tilde{\rho}_{A\bar{B}}\) supported on \(\text{End}(V_{A})\otimes\text{End}(V^{F})\), \[\tilde{\rho}_{A\bar{B}}=\sum_{\alpha,\beta=1}^{3}\rho_{A}^{(\alpha,\beta)}\otimes\tilde{p}_{\alpha,\beta}|\psi_{\bar{B}}^{\alpha}\rangle\langle\psi_{\bar{B}}^{\beta}|, \tag{4}\] where \(\rho_{A}^{(\alpha,\beta)}\in\text{End}(V_{A})\), \(\tilde{p}_{\alpha,\beta}\) is a complex number and \(\{|\psi_{\bar{B}}^{\alpha}\rangle\}\) is \[\left\{\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle),\frac{1}{\sqrt{2}}(|02\rangle-|20\rangle),\frac{1}{\sqrt{2}}(|12\rangle-|21\rangle)\right\}.\] The reduced density matrix can be obtained by taking the partial trace over \(B_{2}\), \[2\times\text{Tr}_{V^{(2)}}(\tilde{\rho}_{A\bar{B}})=\sum_{a,b=0}^{2}\tilde{M}_{ab}\otimes|a\rangle\langle b|, \tag{5}\] where \(\tilde{M}_{ab}\) is given by \[\begin{split}\tilde{M}_{00}&=\tilde{p}_{1,1}\rho_{A}^{(1,1)}+\tilde{p}_{2,2}\rho_{A}^{(2,2)},\\ \tilde{M}_{11}&=\tilde{p}_{1,1}\rho_{A}^{(1,1)}+\tilde{p}_{3,3}\rho_{A}^{(3,3)},\\ \tilde{M}_{22}&=\tilde{p}_{2,2}\rho_{A}^{(2,2)}+\tilde{p}_{3,3}\rho_{A}^{(3,3)},\\ \tilde{M}_{01}&=\tilde{M}_{10}^{\dagger}=\tilde{p}_{3,2}\rho_{A}^{(3,2)},\\ \tilde{M}_{02}&=\tilde{M}_{20}^{\dagger}=-\tilde{p}_{3,1}\rho_{A}^{(3,1)},\\ \tilde{M}_{12}&=\tilde{M}_{21}^{\dagger}=\tilde{p}_{2,1}\rho_{A}^{(2,1)}.\end{split} \tag{6}\] Taking both \(\bar{\rho}_{A\bar{B}}\) and \(\tilde{\rho}_{A\bar{B}}\) into account, many fewer real parameters are needed: the original algorithm searches the entire Hilbert space and thus the number of parameters is \(9^{2}d_{A}^{2}\), while the usage of permutation symmetry reduces this number to \((6^{2}+3^{2})d_{A}^{2}\). It should be stressed that this simplification comes from the fact that the cross terms between subspaces corresponding to different permutation symmetry are forbidden. However, a naive usage of the symmetry, such as simply symmetrizing the Gell-Mann matrices over \(B_{1}\) and \(B_{2}\), still has to determine the positive definiteness of one matrix with dimension \((6^{2}+3^{2})d_{A}^{2}\). As a comparison, our method involves determining the positive definiteness of two matrices, whose dimensions are \(6^{2}d_{A}^{2}\) and \(3^{2}d_{A}^{2}\), respectively. ### 3-qutrit In this case, \(\mathcal{T}\equiv V^{(1)}\otimes V^{(2)}\otimes V^{(3)}\) is constituted by the three qutrits \(B_{1}\), \(B_{2}\) and \(B_{3}\), where \(V^{(i)}\) represents \(B_{i}\), respectively. Following the procedure of the previous subsection, one can decompose \(\mathcal{T}\) as a direct sum of subspaces according to different permutation symmetry, and furthermore, there is no cross term between subspaces corresponding to different permutation symmetry. The permutation symmetry of the 3-qutrit case is much more complicated than that of the 2-qutrit case. It is easy to verify that there exist a 10-dimensional bosonic subspace and a 1-dimensional fermionic subspace. Therefore one can solve this problem by imitating the previous subsection and obtain the constraint equations. In this situation, the dimension of the searching space can be reduced from \(27^{2}d_{A}^{2}\) to \((10^{2}+16^{2}+1^{2})d_{A}^{2}\). However, more room is left for simplification. 
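The counting \(27^{2}\to(10^{2}+16^{2}+1^{2})\) can be checked directly with the symmetrizer and antisymmetrizer of \(S_{3}\). The following sketch (an illustration of ours, under the conventions above) computes the ranks of the three projectors onto the bosonic, fermionic, and mixed-symmetry parts of \(\mathcal{T}\).

```python
import numpy as np
from itertools import permutations

d, k = 3, 3
dim = d ** k

def perm_op(p):
    """Operator permuting the k qutrit slots according to p."""
    P = np.zeros((dim, dim))
    for idx in np.ndindex(*(d,) * k):
        src = np.ravel_multi_index(idx, (d,) * k)
        dst = np.ravel_multi_index(tuple(idx[p[i]] for i in range(k)), (d,) * k)
        P[dst, src] = 1.0
    return P

def sign(p):
    """Parity of a permutation via its cycle decomposition."""
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

perms = list(permutations(range(k)))
P_sym  = sum(perm_op(p) for p in perms) / 6            # bosonic projector
P_anti = sum(sign(p) * perm_op(p) for p in perms) / 6  # fermionic projector
P_mixed = np.eye(dim) - P_sym - P_anti                 # the two [2,1] copies

print([int(round(np.trace(P))) for P in (P_sym, P_anti, P_mixed)])
# -> [10, 1, 16], i.e. 27 = 10 + 1 + 2*8, as stated in the text
```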
According to Weyl duality, the 16-dimensional subspace can be further decomposed as two orthogonal 8-dimensional invariant subspaces \(\mathcal{T}_{1}^{[2,1]}\) and \(\mathcal{T}_{2}^{[2,1]}\), and both are loaded with an equivalent \(su(3)\) irreducible representation, described by the two-row Young diagram \([2,1]\).2 Footnote 2: Here \([\lambda]\equiv\{\lambda_{1},\lambda_{2},\cdots,\lambda_{n}\}\) is a partition of the integer \(k\), where all \(\lambda_{i}\) are integers satisfying \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}\geq 0,\sum_{i=1}^{n}\lambda_{i}=k\). Such a partition is denoted by an \(n\)-row Young diagram. \(\mathcal{T}_{1}^{[2,1]}\) and \(\mathcal{T}_{2}^{[2,1]}\) are spanned by the vectors \(\{\varphi_{\bar{B}}^{\alpha,(1)}\}\) and \(\{\varphi_{\bar{B}}^{\alpha,(2)}\}\): \[\left\{\frac{1}{\sqrt{6}}(2|001\rangle-|010\rangle-|100\rangle),\frac{1}{\sqrt{6}}(2|002\rangle-|020\rangle-|200\rangle),\frac{1}{\sqrt{6}}(|011\rangle+|101\rangle-2|110\rangle),\right.\] \[\left.\frac{1}{\sqrt{12}}(2|012\rangle-|021\rangle+2|102\rangle-|120\rangle-|201\rangle-|210\rangle),\frac{1}{2}(|021\rangle-|120\rangle+|201\rangle-|210\rangle),\right.\] \[\left.\frac{1}{\sqrt{6}}(|022\rangle+|202\rangle-2|220\rangle),\frac{1}{\sqrt{6}}(2|112\rangle-|121\rangle-|211\rangle),\frac{1}{\sqrt{6}}(|122\rangle+|212\rangle-2|221\rangle)\right\},\] \[\left\{\frac{1}{\sqrt{2}}(|010\rangle-|100\rangle),\frac{1}{\sqrt{2}}(|020\rangle-|200\rangle),\frac{1}{\sqrt{2}}(|011\rangle-|101\rangle),\right.\] \[\left.\frac{1}{2}(|021\rangle+|120\rangle-|201\rangle-|210\rangle),\frac{1}{\sqrt{12}}(2|012\rangle+|021\rangle-2|102\rangle-|120\rangle-|201\rangle+|210\rangle),\right.\] \[\left.\frac{1}{\sqrt{2}}(|022\rangle-|202\rangle),\frac{1}{\sqrt{2}}(|121\rangle-|211\rangle),\frac{1}{\sqrt{2}}(|122\rangle-|212\rangle)\right\}.\] Under the constraint condition imposed in Eq. (1), the general form of the global state \(\hat{\rho}_{A\bar{B}}\) supported on \(\text{End}(V_{A})\otimes\text{End}(\mathcal{T}_{1}^{[2,1]}\oplus\mathcal{T}_{2}^{[2,1]})\) must be of the following form: \[\hat{\rho}_{A\bar{B}}=\sum_{\alpha,\beta=1}^{8}\rho_{A}^{(\alpha,\beta)}\otimes\hat{p}_{\alpha,\beta}\left(|\varphi_{\bar{B}}^{\alpha,(1)}\rangle\langle\varphi_{\bar{B}}^{\beta,(1)}|+|\varphi_{\bar{B}}^{\alpha,(2)}\rangle\langle\varphi_{\bar{B}}^{\beta,(2)}|\right), \tag{7}\] The reduced density matrix can be obtained by taking the partial trace over \(B_{2}\) and \(B_{3}\), \[\frac{3}{2}\times\text{Tr}_{V^{(2)}\otimes V^{(3)}}(\hat{\rho}_{A\bar{B}})=\sum_{a,b=0}^{2}\hat{M}_{ab}\otimes|a\rangle\langle b|, \tag{8}\] where \(\hat{M}_{ab}\) is given by \[\begin{split}\hat{M}_{00}&=2\hat{p}_{1,1}\rho_{A}^{(1,1)}+2\hat{p}_{2,2}\rho_{A}^{(2,2)}+\hat{p}_{3,3}\rho_{A}^{(3,3)}+\hat{p}_{4,4}\rho_{A}^{(4,4)}+\hat{p}_{5,5}\rho_{A}^{(5,5)}+\hat{p}_{6,6}\rho_{A}^{(6,6)},\\ \hat{M}_{11}&=\hat{p}_{1,1}\rho_{A}^{(1,1)}+2\hat{p}_{3,3}\rho_{A}^{(3,3)}+\hat{p}_{4,4}\rho_{A}^{(4,4)}+\hat{p}_{5,5}\rho_{A}^{(5,5)}+2\hat{p}_{7,7}\rho_{A}^{(7,7)}+\hat{p}_{8,8}\rho_{A}^{(8,8)},\\ \hat{M}_{22}&=\hat{p}_{2,2}\rho_{A}^{(2,2)}+\hat{p}_{4,4}\rho_{A}^{(4,4)}+\hat{p}_{5,5}\rho_{A}^{(5,5)}+2\hat{p}_{6,6}\rho_{A}^{(6,6)}+\hat{p}_{7,7}\rho_{A}^{(7,7)}+2\hat{p}_{8,8}\rho_{A}^{(8,8)},\\ \hat{M}_{01}&=\hat{M}_{10}^{\dagger}=-\frac{1}{\sqrt{2}}\hat{p}_{4,1}\rho_{A}^{(4,1)}+\sqrt{\frac{3}{2}}\hat{p}_{5,1}\rho_{A}^{(5,1)}+\hat{p}_{6,2}\rho_{A}^{(6,2)}+\frac{1}{\sqrt{2}}\hat{p}_{8,4}\rho_{A}^{(8,4)}-\sqrt{\frac{3}{2}}\hat{p}_{8,5}\rho_{A}^{(8,5)}-\hat{p}_{7,3}\rho_{A}^{(7,3)},\end{split}\] 
\[\begin{split}\hat{M}_{02}&=\hat{M}_{20}^{\dagger}=\hat{p}_{2,1}\rho_{A}^{(2,1)}+\frac{1}{\sqrt{2}}\hat{p}_{4,3}\rho_{A}^{(4,3)}+\sqrt{\frac{3}{2}}\hat{p}_{5,3}\rho_{A}^{(5,3)}+\frac{1}{\sqrt{2}}\hat{p}_{6,4}\rho_{A}^{(6,4)}+\sqrt{\frac{3}{2}}\hat{p}_{6,5}\rho_{A}^{(6,5)}+\hat{p}_{8,7}\rho_{A}^{(8,7)},\\ \hat{M}_{12}&=\hat{M}_{21}^{\dagger}=\hat{p}_{3,1}\rho_{A}^{(3,1)}+\sqrt{2}\hat{p}_{4,2}\rho_{A}^{(4,2)}+\sqrt{2}\hat{p}_{7,4}\rho_{A}^{(7,4)}+\hat{p}_{8,6}\rho_{A}^{(8,6)}.\end{split}\] Due to the permutation requirement, the number of real parameters is less than \((10^{2}+8^{2}+1^{2})d_{A}^{2}\), since some pairs of \(\alpha\) and \(\beta\) may contribute nothing when computing the 1-body reduced density matrix.3 It should be stressed that the simplification comes not only from the fact that cross terms between subspaces corresponding to different permutation symmetry are forbidden, but also from the fact that the majority of cross terms within subspaces corresponding to identical permutation symmetry are forbidden as well. It should also be noticed that, via our method, one can check the positive definiteness of the global state by successively checking the positive definiteness of the density matrices corresponding to different permutation symmetries. Footnote 3: But these parameters cannot be set to \(0\) directly, since they may affect the positive definiteness. ## III Complexity of improved SDP In this section we are going to give the general form of a global state that corresponds to the given quantum marginals \(\rho_{AB}\). Consider the symmetric extension problem described in Eq. (2). It is required that the global state \(\rho_{AB_{1}\cdots B_{k}}\) is invariant under any exchange of \(B_{i}\) and \(B_{j}\), but it is not required that \(\rho_{AB_{1}\cdots B_{k}}\) must support on a subspace with a specific permutation symmetry. E.g., for a 2-symmetric extendible state, its extension can be bosonic, which supports on the symmetric subspace only, or fermionic, whose support only resides on the antisymmetric subspace, or, more generally, can be a mixture of both. Consider a Hilbert space \(\mathcal{T}=\bigotimes_{i=1}^{k}V^{(i)}\) constituted by the \(k\) qudits \(B_{1},B_{2},\cdots,B_{k}\), whose computational basis is \(\big{\{}\Phi_{i_{1}i_{2}\cdots i_{k}}\equiv|i_{1},i_{2},\cdots,i_{k}\rangle\big{\}}\), where \(i_{1},i_{2},\cdots,i_{k}=0,1,\cdots,d-1\). Each subsystem \(V^{(i)}\) is invariant under \(SU(d)\) 'rotation', and transforms according to the \(d\)-dimensional fundamental irreducible representation \(D^{[1]}\), which corresponds to the Young diagram \([1]\). 4 Footnote 4: this is the one-block Young diagram \(\square\) Therefore the Lie algebra \(su(d)\), which is constituted by three series of traceless Hermitian matrices and describes the infinitesimal rotations of \(SU(d)\), has the following matrix form on each identical \(V^{(j)}\), if we set \(|i\rangle\) to be the natural basis, \[\left(T_{mn}^{(1)}\right)_{st} = \frac{1}{2}(\delta_{ms}\delta_{nt}+\delta_{ns}\delta_{mt}),\] \[\left(T_{mn}^{(2)}\right)_{st} = \frac{-i}{2}(\delta_{ms}\delta_{nt}-\delta_{ns}\delta_{mt}),\] \[\left(T_{p}^{(3)}\right)_{st} = \left\{\begin{array}{ll}\delta_{st}[2(p+1)p]^{-\frac{1}{2}},&s<p,\\ -\delta_{st}[p/(2p+2)]^{\frac{1}{2}},&s=p,\\ 0,&s>p,\end{array}\right.\] where \(m<n\), and \(1\leq p\leq d-1\). Taking the global phase into account, one should also include the identity matrix. 
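As a quick sanity check of the generators just defined (the snippet is ours and simply implements the formulas above, with indices running over the natural basis \(0,\ldots,d-1\)), one can construct them explicitly and confirm that they are Hermitian, traceless, and \(d^{2}-1\) in number:

```python
import numpy as np

def su_d_generators(d):
    """The three series of traceless Hermitian generators defined above."""
    gens = []
    for m in range(d):
        for n in range(m + 1, d):
            T1 = np.zeros((d, d), dtype=complex)
            T1[m, n] = T1[n, m] = 0.5                 # symmetric series T^(1)
            T2 = np.zeros((d, d), dtype=complex)
            T2[m, n], T2[n, m] = -0.5j, 0.5j          # antisymmetric series T^(2)
            gens += [T1, T2]
    for p in range(1, d):                             # diagonal series T^(3)
        T3 = np.zeros((d, d), dtype=complex)
        T3[:p, :p] = np.eye(p) / np.sqrt(2 * p * (p + 1))
        T3[p, p] = -np.sqrt(p / (2 * p + 2))
        gens.append(T3)
    return gens

gens = su_d_generators(3)
print(len(gens))                                      # 8 = d^2 - 1
print(all(np.allclose(T, T.conj().T) for T in gens))  # Hermitian
print(all(abs(np.trace(T)) < 1e-12 for T in gens))    # traceless
```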
Therefore one can obtain a new basis for the Lie algebra \(u(d)\) by \[\left\{T_{ab}\,|\,\left(T_{ab}\right)_{st}=\delta_{as}\delta_{bt},0\leq a,b\leq d-1\right\}.\] \(\mathcal{T}\) is also invariant under the global \(U(d)\) transformation, whose corresponding Lie algebra is given by \(\{{\bf T}_{ab}\,|\,{\bf T}_{ab}=\sum_{i}T^{(i)}_{ab}\}\).5 \({\cal T}\) transforms under the representation \(\otimes^{k}D^{[1]}\), which is not irreducible, but can be decomposed as a direct sum of a series of irreducible representations, Footnote 5: Here \(T^{(i)}_{ab}\) denotes that the \(i\)-th subsystem transforms according to \(T_{ab}\) while the others transform according to the identity operator. \[\bigotimes^{k}D^{[1]}=\bigoplus_{[\lambda]}m_{[\lambda]}D^{[\lambda]}, \tag{3.1}\] where \(m_{[\lambda]}\) is the multiplicity of the irreducible representation \(D^{[\lambda]}\). This is equivalent to saying that \({\cal T}\) can be partitioned as a direct sum of subspaces.6 Footnote 6: Please be noted that subspaces corresponding to different Young diagrams are orthogonal to each other, while those corresponding to the same Young diagram are not. However, it is guaranteed that the intersection of two such different subspaces is zero. \[{\cal T}=\bigoplus_{[\lambda]}m_{[\lambda]}{\cal T}^{[\lambda]}. \tag{3.2}\] It can be easily manifested that such a \({\cal T}^{[\lambda]}\) has the particular permutation symmetry described by the Young diagram \([\lambda]\), and the multiplicity \(m_{[\lambda]}\) equals the dimension of the irreducible representation of \(S_{k}\) corresponding to the identical Young diagram \([\lambda]\), which gives the equation \[d^{k}=\sum_{[\lambda]}m_{[\lambda]}D^{[\lambda]}. \tag{3.3}\] Two irreducible representation spaces \({\cal T}^{[\lambda]}_{\mu}\) and \({\cal T}^{[\lambda]}_{\nu}\) corresponding to the same Young diagram but different Young tableaux are orthogonal to each other. Although there might be multiplicity in some weight subspace for a general irreducible subspace, one can uniquely label a vector within an arbitrary given irreducible subspace by its weight \(\vec{\omega}\) in \(su(d)\) and the subgroup chain \(su(d)\supset su(d-1)\supset\cdots\supset su(2)\) [15]. Thus one can safely use the weight \(\vec{\omega}\) to label different states inside an irreducible subspace \({\cal T}^{[\lambda]}_{\mu}\). Therefore, \(\{|[\lambda],\mu,\vec{\omega}\rangle\}\) labels a complete basis of \({\cal T}\) one by one, where \([\lambda]\) tells apart inequivalent \(su(d)\) representations while \(\mu\) differentiates equivalent ones. Together they determine an orthogonal irreducible subspace, and \(\vec{\omega}\) labels the different vectors inside. On the other hand, \(\{|[\lambda],\mu,\vec{\omega}\rangle\}\) can be interpreted in another way: \(\vec{\omega}\) describes the weight and \([\lambda]\) tells apart inequivalent \(S_{k}\) representations, so these two parameters differentiate orthogonal invariant subspaces, while \(\mu\) labels the vectors inside. From now on we shall use \(|\vec{\omega}^{[\lambda]}_{\mu}\rangle\) as shorthand for \(|\vec{\omega},[\lambda],\mu\rangle\). 
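For concreteness, in the qutrit example of Sec. II (\(d=3\), \(k=3\)) the counting (3.3) reads

\[3^{3}=\underbrace{1\cdot 10}_{[3]}+\underbrace{2\cdot 8}_{[2,1]}+\underbrace{1\cdot 1}_{[1,1,1]}=27,\]

with \(m_{[2,1]}=2\) matching the two equivalent copies \(\mathcal{T}_{1}^{[2,1]}\) and \(\mathcal{T}_{2}^{[2,1]}\) encountered there. (This worked instance is ours; all the numbers in it appear in Sec. II.)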
Any matrix \(\rho_{AB_{1}B_{2}\cdots B_{k}}\in\mbox{End}(V_{A})\otimes\mbox{End}({\cal T})\) can be expressed as \[\rho_{AB_{1}B_{2}\cdots B_{k}}=\sum_{a,a^{\prime}}\sum_{[\lambda],[\lambda^{\prime}]}\sum_{\mu,\mu^{\prime}}\sum_{\vec{\omega},\vec{\omega}^{\prime}}|\psi^{a}_{\vec{\omega},[\lambda],\mu}\rangle\langle\psi^{a^{\prime}}_{\vec{\omega}^{\prime},[\lambda^{\prime}],\mu^{\prime}}|\otimes|\vec{\omega}^{[\lambda]}_{\mu}\rangle\langle\vec{\omega}^{\prime[\lambda^{\prime}]}_{\mu^{\prime}}|, \tag{3.4}\] where the \(|\psi^{a}_{\vec{\omega},[\lambda],\mu}\rangle\) are non-normalized states in \(V_{A}\) and \(a\) labels different states in \(V_{A}\). Inserting Eq. (3.4) into the permutation condition of Eq. (2), for every \(\pi\in S_{k}\) we get a series of constraints for \(\rho_{AB_{1}B_{2}\cdots B_{k}}\): \[\forall[\lambda],[\lambda^{\prime}],\vec{\omega},\vec{\omega^{\prime}}\mbox{ and }\mu,\mu^{\prime},\] \[\sum_{a,a^{\prime}}|\psi^{a}_{\vec{\omega},[\lambda],\mu}\rangle\langle\psi^{a^{\prime}}_{\vec{\omega}^{\prime},[\lambda^{\prime}],\mu^{\prime}}|\otimes\sum_{\nu,\nu^{\prime}}{\cal A}(\pi)^{[\lambda]}_{\mu,\nu}{\cal A}(\pi)^{[\lambda^{\prime}]*}_{\nu^{\prime},\mu^{\prime}}|\vec{\omega}^{[\lambda]}_{\nu}\rangle\langle\vec{\omega}^{\prime[\lambda^{\prime}]}_{\nu^{\prime}}| \tag{3.5}\] \[=\sum_{a,a^{\prime}}|\psi^{a}_{\vec{\omega},[\lambda],\mu}\rangle\langle\psi^{a^{\prime}}_{\vec{\omega}^{\prime},[\lambda^{\prime}],\mu^{\prime}}|\otimes|\vec{\omega}^{[\lambda]}_{\mu}\rangle\langle\vec{\omega}^{\prime[\lambda^{\prime}]}_{\mu^{\prime}}|,\] where \({\cal A}^{[\lambda]}\) and \({\cal A}^{[\lambda^{\prime}]}\) are irreducible representations of the permutation group \(S_{k}\). Define the matrix \[M(\vec{\omega},\vec{\omega^{\prime}},[\lambda],[\lambda^{\prime}])\equiv\sum_{\mu,\mu^{\prime}}M(\vec{\omega},\vec{\omega^{\prime}},[\lambda],[\lambda^{\prime}])_{\mu\mu^{\prime}}|\vec{\omega}^{[\lambda]}_{\mu}\rangle\langle\vec{\omega}^{\prime[\lambda^{\prime}]}_{\mu^{\prime}}|, \tag{3.6}\] where \[M(\vec{\omega},\vec{\omega^{\prime}},[\lambda],[\lambda^{\prime}])_{\mu\mu^{\prime}}\equiv\sum_{a,a^{\prime}}|\psi^{a}_{\vec{\omega},[\lambda],\mu}\rangle\langle\psi^{a^{\prime}}_{\vec{\omega}^{\prime},[\lambda^{\prime}],\mu^{\prime}}|, \tag{3.7}\] thus \(\forall\pi\in S_{k}\) \[{\cal A}^{[\lambda]}(\pi)M(\vec{\omega},\vec{\omega^{\prime}},[\lambda],[\lambda^{\prime}]){\cal A}^{[\lambda^{\prime}]}(\pi)^{\dagger}=M(\vec{\omega},\vec{\omega^{\prime}},[\lambda],[\lambda^{\prime}]). \tag{3.8}\] Schur's lemma guarantees that * when \([\lambda]\neq[\lambda^{\prime}]\), \(M=0\); * when \([\lambda]=[\lambda^{\prime}]\), \(M\) is invertible. Choose the \(|\vec{\omega}^{[\lambda]}_{\mu}\rangle\) carefully such that the representations \({\cal A}^{[\lambda]}\) are identical, not just isomorphic matrices, for different weights \(\vec{\omega}\). Then all \(M(\vec{\omega},\vec{\omega^{\prime}},[\lambda],[\lambda])\) can be made proportional to the corresponding identity matrix. 
Therefore, one could eliminate the majority of cross terms and restrict \(\rho_{AB_{1}B_{2}\cdots B_{k}}\) to \[\rho_{AB_{1}B_{2}\cdots B_{k}}=\sum_{[\lambda]}\sum_{\vec{\omega},\vec{\omega^{\prime}}}f([\lambda],\vec{\omega},\vec{\omega^{\prime}})\sigma([\lambda],\vec{\omega},\vec{\omega^{\prime}})\otimes\sum_{\mu}|\vec{\omega}^{[\lambda]}_{\mu}\rangle\langle\vec{\omega}^{\prime[\lambda]}_{\mu}|, \tag{3.9}\] where \(f([\lambda],\vec{\omega},\vec{\omega^{\prime}})\) is a coefficient and \(\sigma([\lambda],\vec{\omega},\vec{\omega^{\prime}})\in\mathrm{End}(V_{A})\) (it does not have to be a density matrix), both of which correspond to the \(S_{k}\) irreducible representation described by the Young diagram \([\lambda]\) and the weights \(\vec{\omega}\) and \(\vec{\omega^{\prime}}\). Our next task is to determine the RDM of the global state given by Eq. (3.9). For every given \([\lambda],\vec{\omega}\) and \(\vec{\omega^{\prime}}\), one could temporarily ignore system \(A\) and concentrate on the group \(\{B_{1},B_{2},\cdots,B_{k}\}\), \[\begin{split}\sum_{i,j=0}^{d-1}B_{ji}|i\rangle\langle j|&=\mathrm{Tr}_{B_{1}^{c}}\left(\sum_{\mu}|\vec{\omega}_{\mu}^{[\lambda]}\rangle\langle\vec{\omega}_{\mu}^{\prime[\lambda]}|\right)\\ &=\sum_{i,j=0}^{d-1}|i\rangle\langle j|\,\mathrm{Tr}\left(T_{ji}^{(1)}\sum_{\mu}|\vec{\omega}_{\mu}^{[\lambda]}\rangle\langle\vec{\omega}_{\mu}^{\prime[\lambda]}|\right)\\ &=\sum_{i,j=0}^{d-1}|i\rangle\langle j|\,\mathrm{Tr}\left(\frac{1}{k}\mathbf{T}_{ji}\sum_{\mu}|\vec{\omega}_{\mu}^{[\lambda]}\rangle\langle\vec{\omega}_{\mu}^{\prime[\lambda]}|\right)\\ &=\sum_{i,j=0}^{d-1}|i\rangle\langle j|\left(\frac{m_{[\lambda]}}{k}\langle\vec{\omega}^{\prime[\lambda]}|\mathbf{T}_{ji}|\vec{\omega}^{[\lambda]}\rangle\right),\end{split} \tag{3.10}\] where \(\langle\vec{\omega}^{\prime[\lambda]}|\mathbf{T}_{ji}|\vec{\omega}^{[\lambda]}\rangle\) is exactly the matrix element of the irreducible representation corresponding to the Young diagram \([\lambda]\) for the generator \(T_{ji}\) of the Lie algebra \(u(d)\).7 Taking subsystem \(A\) into account, one can obtain Footnote 7: Do not worry about this part; the general matrix form of \(\mathbf{T}_{ji}\) in the irreducible representation \(D^{[\lambda]}\) of \(u(d)\) has been calculated by mathematicians, and you can refer to [15]. \[\begin{split}\mathrm{Tr}_{(AB_{1})^{c}}\left(\rho_{AB_{1}B_{2}\cdots B_{k}}\right)=&\sum_{[\lambda]}\sum_{m,n=0}^{d-1}\sum_{i,j=0}^{d-1}|m\rangle\langle n|\otimes|i\rangle\langle j|\\ &\times\sum_{\vec{\omega},\vec{\omega^{\prime}}}\sigma([\lambda],\vec{\omega},\vec{\omega^{\prime}})_{mn}\frac{m_{[\lambda]}}{k}\langle\vec{\omega}^{\prime[\lambda]}|\mathbf{T}_{ji}|\vec{\omega}^{[\lambda]}\rangle f([\lambda],\vec{\omega},\vec{\omega^{\prime}}).\end{split} \tag{3.11}\] For every given \([\lambda]\), the number of different values for \(\vec{\omega}\) and \(\vec{\omega^{\prime}}\) is just the dimension of the \(u(d)\) irreducible representation \(D^{[\lambda]}\). Therefore, ignoring the size of subsystem \(A\), the size of the searching space in dealing with symmetric extension is given by \[\sum\left(D^{[\lambda]}\right)^{2}=\binom{d^{2}-1+k}{k}<d^{2k}, \tag{3.12}\] where the summation runs over all possible proper Young diagrams. One may conclude that the dimension of the entire searching space grows no faster than \(O(k^{d^{2}})\), which is significantly smaller than the original \(O(d^{2k})\); therefore the efficiency of SDP can be greatly improved when dealing with symmetric extension problems. 
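The count in Eq. (3.12) is easy to tabulate against the naive parameter count \(d^{2k}\); the short sketch below (ours) reproduces in particular \(165=10^{2}+8^{2}+1^{2}\) for the qutrit example of Sec. II.

```python
from math import comb

# Size of the searching space, Eq. (3.12), versus the naive parameter count
# d^(2k) of the full extended Hilbert space (system A ignored in both counts).
for d, k in [(2, 4), (3, 3), (3, 6), (4, 4)]:
    reduced = comb(d**2 - 1 + k, k)   # = sum over Young diagrams of (D^[lam])^2
    print(f"d={d}, k={k}: {reduced:>6} vs {d**(2 * k)}")
# d=3, k=3 gives 165 = 10^2 + 8^2 + 1^2, matching the counting in Sec. II.
```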
To guarantee the positive definiteness of the solved global matrix, one can test whether the density matrix corresponding to each permutation symmetry is positive definite separately, and hence each test needs much fewer resources. To investigate the amount of resources needed for each single test, one should focus on the growth rates of \(D^{[\lambda]}\) and \(m_{[\lambda]}\). The asymptotic behavior of the upper bound of \(D^{[\lambda]}\) is given by \[\prod_{m=1}^{d-1}\frac{1}{m!}\left(1+\frac{2k}{d(d+1)}\right)^{\frac{d(d-1)}{2}}, \tag{3.13}\] which corresponds to the irreducible representation that satisfies \(\lambda_{i}-\lambda_{i+1}\approx 2k/d(d-1)\) [16]. For a given \(k\), the number of different valid Young diagrams whose number of rows is less than or equal to \(d\) is hard to compute analytically, but for sufficiently large \(k\), the asymptotic value is \(\frac{1}{d!}\binom{k+d-1}{d-1}\).8 Footnote 8: To find a partition (not necessarily a partition corresponding to a valid Young diagram) that satisfies \(\sum_{i=1}^{d}\lambda_{i}=k\) is equivalent to inserting \(d-1\) separators into a line of \(k\) balls, which gives \(\binom{k+d-1}{d-1}\) possibilities. Since some \(\lambda_{i}\)s might be identical, the number of valid different Young diagrams deviates from \(\frac{1}{d!}\binom{k+d-1}{d-1}\), but when \(k\) is sufficiently large, the fraction of such degenerate partitions approaches \(0\), so the asymptotic number of valid Young diagrams is given by \(\frac{1}{d!}\binom{k+d-1}{d-1}\). ## IV Numerical results First, we apply our algorithm to the famous bipartite Werner state \(\rho_{W,d}\left(\alpha\right)\in\mathcal{H}_{d}\otimes\mathcal{H}_{d}\), \[\rho_{W,d}\left(\alpha\right)=\frac{1}{d^{2}-d\alpha}I-\frac{\alpha}{d^{2}-d\alpha}\sum_{ij}|ij\rangle\left\langle ji\right|,\;\alpha\in\left[-1,1\right].\] Previous work [17] proved that the Werner state is \((1,k)\)-extendible for \(\alpha\in\left[-1,\frac{k+d^{2}-d}{kd+d-1}\right]\). As \(k\) goes to infinity, this recovers the separable region of the Werner state, \(\alpha\in\left[-1,\frac{1}{d}\right]\). To obtain such a \((1,k)\)-extendibility boundary \(\alpha_{k}^{*}\), one can solve the following semi-definite program \[\begin{split}&\max\;c,\\ &s.t.\left\{\begin{array}{l}\rho_{AB_{1}\cdots B_{k}}\succeq 0,\\ \left(\mathds{1}^{A}\otimes P_{ij}\right)\rho_{AB_{1}\cdots B_{k}}\left(\mathds{1}^{A}\otimes P_{ij}\right)=\rho_{AB_{1}\cdots B_{k}},\\ \mathrm{Tr}_{(AB_{1})^{c}}\left(\rho_{AB_{1}\cdots B_{k}}\right)=\left(1-c\right)\rho_{0}+c\rho_{W,d}(1),\end{array}\right.\end{split}\] with \(\rho_{0}\) denoting the maximally mixed state,9 and the boundary can be calculated from the optimal value as \(\alpha_{k}^{*}=\frac{c^{*}d}{c^{*}+d-1}\). Footnote 9: As semi-definite programming requires linear or affine equality constraints, we convert the non-linear dependence on \(\alpha\) in the Werner state into the linear interpolation \((1-c)\rho_{0}+c\rho_{W,d}(1)\) used in the optimization. The results are shown in TABLE 1. We compare the time required with the software QETLAB [18], a widely-used MATLAB package in the quantum information community. The benchmark is performed on a standard laptop (AMD R7-5800H, 16 CPU cores with hyperthreading enabled, 16 GB memory), and our algorithm is implemented in the CVXPY package [19] with the MOSEK solver [20]. The solved boundary \(\alpha_{k}^{*}\) is within \(10^{-8}\) absolute error compared with the analytical results. From the results, a significant speedup can be observed, and a much larger \(k\)-extension problem can be handled by our algorithm. 
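The affine parametrization in footnote 9 can be checked independently of any solver. The sketch below (ours; it is not the benchmarked implementation) verifies numerically that \((1-c)\rho_{0}+c\,\rho_{W,d}(1)=\rho_{W,d}(\alpha)\) with \(\alpha=cd/(c+d-1)\), and evaluates the analytic \((1,k)\)-boundary for several \(k\), approaching \(1/d\).

```python
import numpy as np

def werner(d, alpha):
    """Werner state (I - alpha * SWAP) / (d^2 - d * alpha)."""
    I = np.eye(d * d)
    swap = sum(np.outer(np.kron(ei, ej), np.kron(ej, ei))
               for ei in np.eye(d) for ej in np.eye(d))
    return (I - alpha * swap) / (d**2 - d * alpha)

d, c = 3, 0.4
rho0 = np.eye(d * d) / d**2                    # maximally mixed state
mix = (1 - c) * rho0 + c * werner(d, 1.0)      # the affine parametrization
alpha = c * d / (c + d - 1)                    # claimed equivalent alpha
print(np.allclose(mix, werner(d, alpha)))      # True

# Analytic (1,k)-extendibility boundary and its k -> infinity limit 1/d:
for k in (2, 4, 16, 1024):
    print(k, (k + d**2 - d) / (k * d + d - 1))
```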
We explicitly calculate the dimension of the searching space and the number of parameters required to be tested for positive definiteness, which demonstrates the efficiency of our algorithm, as shown in TABLE 2.10 Footnote 10: Our algorithm needs to undergo multiple different positive-definiteness tests, where each different Young diagram corresponds to its own test, but each individual test involves a significantly smaller matrix, and hence the efficiency is improved. ## V Discussion The complexity of our new algorithm for dealing with \(k\)-symmetric extensions of quantum states is \(O(k^{d^{2}})\), which is an improvement over the original algorithm with \(O(d^{2k})\) complexity. However, it is important to note that detecting entanglement is a QMA problem, which means that it is generally considered to be computationally hard. Although our new algorithm reduces the computational complexity of the problem, it does _not_ change the fundamental difficulty of detecting entanglement. This is due to the fact that the size of the input of this problem is given by \(O(\log k,d)\), and hence the resources needed in our algorithm still grow exponentially relative to the input. Therefore, while our algorithm presents some advance, it does not contradict the known fact that detecting entanglement is a QMA problem. The challenge of detecting entanglement remains an important area of research, with many open questions and opportunities for new breakthroughs. ###### Acknowledgements. The authors would like to thank Ruan Dong, Huang Huajun, and Huang Shilin for helpful discussions. Y.-N. Li is supported by the National Natural Science Foundation of China under Grant No. 12005295. S.-Y. Hou is supported by the National Natural Science Foundation of China under Grant No. 12105195. C. Zhang, Z.-P. Wu, and B. Zeng are supported by GRF (grant no. 16305121).
2309.12726
Stokes parameters alone cannot completely characterize the polarization of plane light waves
It was generally assumed that the Stokes parameters are complete characterization for the state of polarization of a plane light wave so that their counterparts in quantum optics, called the Stokes operators, represent the polarization of photons. Here we show, through analyzing the properties of polarized plane waves in an optically active medium, that the Stokes parameters are not able to completely characterize the state of polarization of a plane wave. The key point is that only when a plane wave is expanded in terms of the orthogonal base modes, which are physically meaningful, can the two expansion coefficients make up the Jones vector. Taking this into consideration, we demonstrate that the Stokes parameters of any elliptically polarized wave in an isotropic chiral medium, determined solely by its Jones vector, are transmitted unchanged. They are not able to reflect the rotation of its polarization ellipse along with the propagation. The relationship of the Stokes parameters with the polarization of light needs further investigation.
Chun-Fang Li, Zhi-Juan Hu
2023-09-22T09:16:26Z
http://arxiv.org/abs/2309.12726v1
# Stokes parameters alone cannot completely characterize the polarization of plane light waves

###### Abstract

It was generally assumed that the Stokes parameters are a complete characterization of the state of polarization of a plane light wave, so that their counterparts in quantum optics, called the Stokes operators, represent the polarization of photons. Here we show, through analyzing the properties of polarized plane waves in an optically active medium, that the Stokes parameters are not able to completely characterize the state of polarization of a plane wave. The key point is that only when a plane wave is expanded in terms of orthogonal base modes, which are physically meaningful, can the two expansion coefficients make up the Jones vector. Taking this into consideration, we demonstrate that the Stokes parameters of any elliptically polarized wave in an isotropic chiral medium, determined solely by its Jones vector, are transmitted unchanged. They are not able to reflect the rotation of its polarization ellipse along with the propagation. The relationship of the Stokes parameters with the polarization of light needs further investigation.

Polarization of light, Characterization, Stokes parameters, Jones vector, Optical activity

## I Introduction

Polarization is one of the most fundamental phenomena of light. It arises from the vector nature of the electromagnetic wave that is governed by Maxwell's equations. The significant point is that apart from satisfying the coupled equations, the electric field \(\mathbf{E}\) and magnetic field \(\mathbf{H}\) are also constrained by the transversality conditions, \[\nabla\cdot\mathbf{E}(\mathbf{x},t)=0,\quad\nabla\cdot\mathbf{H}(\mathbf{x},t)=0.\] Accordingly, two main mathematical descriptions of the polarization have been developed [1]. One makes use of the Stokes parameters that were introduced in 1852 [2]. They are connected with the polarization matrix [3; 4] in classical optics or with the density matrix [5; 6] in quantum mechanics. The other is the so-called Jones vector introduced in 1941 [7]. They are, nevertheless, not equivalent. The former is applicable to partially polarized waves. The latter is only applicable to completely polarized waves. However, for a monochromatic plane wave, they have a definite and simple relationship [8; 9; 10]. It was generally assumed that the Stokes parameters, as physically observable quantities [11; 12; 13], provide a full characterization of the state of polarization, at least for plane waves or beam-like waves [14; 15; 16]. Here we will show that this is actually not the case. We will do so by demonstrating that the Stokes parameters cannot characterize the rotation of the polarization ellipse of the plane wave in an isotropic chiral medium. A chiral medium is an optically active medium [17]. It has the ability to rotate the polarization ellipse of an elliptically polarized wave that passes through it [18; 19]. The conventional description of the optical activity is the circular birefringence, proposed in 1825 by Fresnel [1]. By circular birefringence it is meant that the right-handed circularly polarized and left-handed circularly polarized waves in a chiral medium propagate at different velocities. Recently we showed [20] through a logical analysis that such a phenomenological description of the optical activity is incorrect. Circular birefringence does not exist in chiral media. 
Meanwhile, we found that the rotation of the polarization ellipse of an elliptically polarized wave in chiral media arises from the rotation of the polarization basis, without involving any change of the Jones vector. In particular, we demonstrated that the rotation of the right-handed circularly polarized and left-handed circularly polarized waves gives rise to phases of equal magnitude but opposite sign, as if they propagated at different phase velocities with their polarization states transmitted unchanged [21]. The aim of this paper is to show that the Stokes parameters determined by the Jones vector cannot reflect the rotation of the polarization ellipse. The reported result raises the problem of how to interpret the physical meaning of the Stokes parameters. ## II Conventional theory of the polarization of light Before discussing the peculiarity of the Stokes parameters of a plane wave in chiral media, it is beneficial to review how the Stokes parameters of a plane light wave in ordinary achiral media are defined. Because the intensity of a monochromatic plane wave has nothing to do with the state of its polarization, we will consider only the electric field, of normalized amplitude. Letting a plane wave of frequency \(\omega\) and wavenumber \(k\) propagate along the \(z\)-axis, we can write the electric field as \[\mathbf{E}^{(A)}(z,t)=\mathbf{a}^{(A)}\exp[i(kz-\omega t)], \tag{1}\] where the normalized polarization vector \(\mathbf{a}^{(A)}\) takes the form \[\mathbf{a}^{(A)}=\alpha_{1}\bar{x}+\alpha_{2}\bar{y}, \tag{2}\] \(\bar{x}\) and \(\bar{y}\) are the unit vectors along the \(x\)- and \(y\)-axes, respectively, and the complex coefficients \(\alpha_{1}\) and \(\alpha_{2}\) satisfy the normalization condition \(|\alpha_{1}|^{2}+|\alpha_{2}|^{2}=1\). The Stokes parameters of the wave are defined as follows [8; 11], \[s_{i}=\alpha^{\dagger}\hat{\sigma}_{i}\alpha,\quad i=1,2,3, \tag{3}\] where \(\alpha=\left(\begin{array}{c}\alpha_{1}\\ \alpha_{2}\end{array}\right)\) is known as the Jones vector of the wave, the superscript \(\dagger\) denotes the conjugate transpose, and \[\hat{\sigma}_{1}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right),\quad\hat{\sigma}_{2}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad\hat{\sigma}_{3}=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right)\] are the Pauli matrices. The unit vectors \(\bar{x}\) and \(\bar{y}\) in Eq. (2) are commonly referred to as the polarization base vectors. It is important to point out, as is known in the literature [5; 11], that when the Stokes parameters are defined in terms of the Jones vector via Eq. (3), it is implied that the plane wave (1) is a superposition of the following two base modes, \[\mathbf{E}_{1}^{(A)}(z,t)=\bar{x}\exp[i(kz-\omega t)],\] \[\mathbf{E}_{2}^{(A)}(z,t)=\bar{y}\exp[i(kz-\omega t)].\] That is, the polarization base vectors \(\bar{x}\) and \(\bar{y}\) are the polarization vectors of these base modes, which are physically meaningful. This idea is more understandable from the point of view of quantum mechanics, where one deals with orthonormal base states [12; 13; 14]. For later convenience, we cast Eq. (2) into a compact form [22], \[{\bf a}^{(A)}=\varpi^{(A)}\alpha, \tag{4}\] where the convention of matrix multiplication is used and the row matrix \(\varpi^{(A)}=(\begin{array}{cc}\bar{x}&\bar{y}\end{array})\) represents the polarization basis \(\bar{x}\) and \(\bar{y}\). Any two orthonormal polarization vectors may be chosen as the polarization basis. 
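Since everything that follows hinges on Eq. (3), a short numerical illustration may help. The following is a minimal Python sketch of ours (not part of the original work); the particular Jones vector is an arbitrary example.

```python
import numpy as np

# Pauli matrices in the convention of Eq. (3)
sigma = [np.array([[1, 0], [0, -1]], dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]

def stokes(alpha):
    """Stokes parameters (s_1, s_2, s_3) of a normalized Jones vector."""
    return tuple(float(np.real(alpha.conj() @ s @ alpha)) for s in sigma)

# Example: light linearly polarized at 45 degrees, alpha_1 = alpha_2 = 1/sqrt(2)
alpha = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(stokes(alpha))   # -> (0.0, 1.0, 0.0)
```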
Apart from the linear-polarization basis, another common basis is the pair of circularly polarized states [9; 10; 14; 23]. In terms of the circular-polarization basis, which is related to the linear-polarization basis via \[{\bf R}=\frac{1}{\sqrt{2}}(\bar{x}+i\bar{y}),\quad{\bf L}=\frac{1}{\sqrt{2}}(\bar{x}-i\bar{y}), \tag{5}\] the same polarization vector (2) can be expanded as \[{\bf a}^{(A)}=\alpha_{R}{\bf R}+\alpha_{L}{\bf L}, \tag{6}\] where \[\alpha_{R}=(\alpha_{1}-i\alpha_{2})/\sqrt{2},\quad\alpha_{L}=(\alpha_{1}+i\alpha_{2})/\sqrt{2}. \tag{7}\] Equations (5) describe a transformation of representation for the state of polarization in the language of quantum mechanics. In this case, the same Stokes parameters (3) can be rewritten as \[s_{i}=\alpha^{c\dagger}\hat{\sigma}_{i}^{c}\alpha^{c}, \tag{8}\] where \(\alpha^{c}=\left(\begin{array}{c}\alpha_{R}\\ \alpha_{L}\end{array}\right)\) is the Jones vector in the representation of the circular-polarization basis and \[\hat{\sigma}_{1}^{c}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad\hat{\sigma}_{2}^{c}=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\quad\hat{\sigma}_{3}^{c}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right).\] This is understandable. According to Eqs. (7), \(\alpha^{c}\) is related to \(\alpha\) via \(\alpha^{c}=M\alpha\), where \(M=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&-i\\ 1&i\end{array}\right)\) is a unitary matrix, so that \[\alpha=M^{\dagger}\alpha^{c}.\] Substituting it into Eq. (3), one arrives at Eq. (8), in which \(\hat{\sigma}_{i}^{c}\) is related to \(\hat{\sigma}_{i}\) via \(\hat{\sigma}_{i}^{c}=M\hat{\sigma}_{i}M^{\dagger}\). After all, the Stokes parameters, as physically observable quantities, do not depend on the choice of representation. Of course, expression (8) for the Stokes parameters implies that the polarization base vectors \(\mathbf{R}\) and \(\mathbf{L}\) are the polarization vectors of physically meaningful states. In other words, expression (6) for the polarization vector implies that the plane wave (1) can be expanded in terms of the following circularly polarized base states, \[\mathbf{E}_{R}^{(A)}(z,t)=\mathbf{R}\exp[i(kz-\omega t)],\] \[\mathbf{E}_{L}^{(A)}(z,t)=\mathbf{L}\exp[i(kz-\omega t)].\] We note that because the polarization base vectors \(\bar{x}\) and \(\bar{y}\) in expression (2) are fixed, the Jones vector \(\alpha\) is equivalent to the polarization vector \(\mathbf{a}^{(A)}\) in describing the state of polarization. For this reason, Eq. (2) or (4) is usually simplified as [1; 4; 11] \[\mathbf{a}^{(A)}=\alpha. \tag{9}\] The change in the polarization vector amounts to the change in the Jones vector, \[\mathbf{a}^{(A)}\rightarrow\mathbf{a}^{\prime(A)}=\alpha_{1}^{\prime}\bar{x}+\alpha_{2}^{\prime}\bar{y}=\left(\begin{array}{c}\alpha_{1}^{\prime}\\ \alpha_{2}^{\prime}\end{array}\right)\equiv\alpha^{\prime}.\] When the energy is conserved, the change in the Jones vector is described by a unitary transformation [10], \[\alpha^{\prime}=U\alpha,\] where \(U\) is a \(2\times 2\) unitary matrix, which can be expressed as a matrix of rotation by an angle \(\varphi\) about an axis represented by a real-valued unit vector \(\mathbf{n}\), \[U(\varphi,\mathbf{n})=\exp(-i\mathbf{n}\cdot\boldsymbol{\sigma}\varphi/2). \tag{10}\] It is probably on this basis that the Stokes parameters determined by the Jones vector are believed to provide a complete characterization of the state of polarization represented by the polarization vector. 
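As a quick consistency check of this representation independence, one can verify numerically that Eq. (3) and Eq. (8) give the same Stokes parameters for an arbitrary Jones vector. This sketch (ours) assumes the conventions above; the random seed is illustrative.

```python
import numpy as np

sigma = [np.array([[1, 0], [0, -1]], dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]
M = np.array([[1, -1j], [1, 1j]], dtype=complex) / np.sqrt(2)  # from Eqs. (7)

rng = np.random.default_rng(1)
alpha = rng.normal(size=2) + 1j * rng.normal(size=2)
alpha /= np.linalg.norm(alpha)      # an arbitrary normalized Jones vector
alpha_c = M @ alpha                 # circular-basis Jones vector

for s in sigma:
    s_c = M @ s @ M.conj().T        # sigma_i^c = M sigma_i M^dagger
    assert np.isclose(np.real(alpha.conj() @ s @ alpha),        # Eq. (3)
                      np.real(alpha_c.conj() @ s_c @ alpha_c))  # Eq. (8)
print("Eq. (3) and Eq. (8) agree, as expected.")
```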
Unfortunately, however, a plane wave in a chiral medium no longer has a fixed polarization basis, as we will show below. ## III Stokes parameters of plane waves in a chiral medium Now we examine the properties of the Stokes parameters of plane waves in a chiral medium. The optical property of a chiral medium is conveyed by its constitutive relations. The constitutive relations of an isotropic and transparent chiral medium can be written as follows [24; 25], \[{\bf D}=\varepsilon{\bf E}-g\partial{\bf H}/\partial t,\quad{\bf B}=\mu{\bf H}+g\partial{\bf E}/\partial t, \tag{11}\] where \({\bf D}\) and \({\bf B}\) are, as usual, the vectors of electric displacement and magnetic induction, respectively, \(\varepsilon\) is the permittivity, \(\mu\) is the permeability, and the pseudo-scalar constant \(g\) is the gyrotropic parameter. As above, we assume that the plane wave of angular frequency \(\omega\) in the chiral medium propagates along the \(z\)-axis. Constitutive relations (11) allow one to find, from Maxwell's equations, the following two orthonormal circularly polarized waves [20], \[\begin{array}{l}{\bf E}_{R}(z,t)={\bf R}\exp(-i\tau z)\exp[i(kz-\omega t)],\\ {\bf E}_{L}(z,t)={\bf L}\exp(i\tau z)\exp[i(kz-\omega t)],\end{array} \tag{12}\] where \({\bf R}\) and \({\bf L}\) are given by Eqs. (5), \(k=(\varepsilon\mu)^{1/2}\omega\), and \(\tau=-g\omega^{2}\). These two circularly polarized waves can be taken as base states to expand any elliptically polarized wave [18], \[{\bf E}(z,t)=\alpha_{R}{\bf E}_{R}(z,t)+\alpha_{L}{\bf E}_{L}(z,t), \tag{13}\] where the expansion coefficients \(\alpha_{R}\) and \(\alpha_{L}\), which are constant, satisfy the normalization condition \(|\alpha_{R}|^{2}+|\alpha_{L}|^{2}=1\). Now that \({\bf E}_{R}\) and \({\bf E}_{L}\) are two orthonormal base states, \(\alpha_{R}\) and \(\alpha_{L}\) make up the Jones vector of the wave (13), \(\alpha^{c}=\left(\begin{array}{c}\alpha_{R}\\ \alpha_{L}\end{array}\right)\), in accordance with Refs. [5; 11; 12; 13; 14]. According to Eq. (8), the Stokes parameters determined by this Jones vector are constants, independent of the propagation distance \(z\). However, the polarization ellipse of the wave (13) is rotated along with the propagation. We are thus convinced that the Stokes parameters are not able to characterize the rotation of the polarization state of the plane wave in the chiral medium. Let us look at this result in the representation of the linear-polarization basis. As is well known, Eq. (13) can give rise to linearly polarized waves [1; 11]; in particular, it can give rise to two orthonormal linearly polarized waves. Specifically, if \(\alpha_{R}=\alpha_{L}=\frac{1}{\sqrt{2}}\), Eq. (13) represents a linearly polarized wave, denoted by \[{\bf E}_{1}=\frac{{\bf E}_{R}+{\bf E}_{L}}{\sqrt{2}}={\bf a}_{1}(z)\exp[i(kz-\omega t)], \tag{14}\] where \[{\bf a}_{1}(z)=\bar{x}\cos\tau z+\bar{y}\sin\tau z. \tag{15}\] In addition, if \(\alpha_{R}=-\alpha_{L}=-\frac{i}{\sqrt{2}}\), it also represents a linearly polarized wave, denoted by \[\mathbf{E}_{2}=\frac{\mathbf{E}_{R}-\mathbf{E}_{L}}{i\sqrt{2}}=\mathbf{a}_{2}(z)\exp[i(kz-\omega t)], \tag{16}\] where \[\mathbf{a}_{2}(z)=\bar{y}\cos\tau z-\bar{x}\sin\tau z. \tag{17}\] Being rotated with propagation, the polarization vectors of these two linearly polarized waves, (15) and (17), are both \(z\)-dependent. 
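The point can be made concrete with a small numerical sketch of ours (the value of \(\tau\) and the input polarization are assumed for illustration): the circular-basis Jones vector of the wave (13) is constant, so the Stokes parameters of Eq. (8) cannot depend on \(z\), while the azimuth of the polarization ellipse, recovered from the coefficients of \(\mathbf{E}_{R}\) and \(\mathbf{E}_{L}\), grows linearly as \(\tau z\).

```python
import numpy as np

tau = 0.1                              # assumed rotation rate (rad per length)
aR, aL = 0.9, np.sqrt(1 - 0.9**2)      # constant coefficients in Eq. (13)

# Stokes parameters from Eq. (8): functions of (aR, aL) only, hence z-independent
s1 = 2 * np.real(np.conj(aR) * aL)
s2 = 2 * np.imag(np.conj(aR) * aL)
s3 = abs(aR)**2 - abs(aL)**2
print("Stokes parameters:", s1, s2, s3)

for z in [0.0, 5.0, 10.0]:
    cR = aR * np.exp(-1j * tau * z)    # coefficient of R in the wave (12)-(13)
    cL = aL * np.exp(+1j * tau * z)    # coefficient of L
    psi = 0.5 * np.angle(cL * np.conj(cR))  # azimuth of the polarization ellipse
    print(f"z = {z:4.1f}:  ellipse azimuth = {psi:.2f} rad  (= tau*z here)")
```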
They are, however, orthogonal to each other at the same propagation distance \(z\), \[\mathbf{a}_{1}(z)\cdot\mathbf{a}_{2}(z)=0.\] So, these two linearly polarized waves can also be taken as base states to expand any elliptically polarized wave, \[\mathbf{E}(z,t)=\alpha_{1}\mathbf{E}_{1}+\alpha_{2}\mathbf{E}_{2}, \tag{18}\] where the constant expansion coefficients \(\alpha_{1}\) and \(\alpha_{2}\) make up the Jones vector \(\alpha=\left(\begin{array}{c}\alpha_{1}\\ \alpha_{2}\end{array}\right)\), which satisfies the normalization condition \(\alpha^{\dagger}\alpha=1\). According to Eq. (3), the Stokes parameters determined by this Jones vector are also independent of the propagation distance. In fact, if expression (18) stands for the same wave as expression (13), we must have \[\alpha_{R}=(\alpha_{1}-i\alpha_{2})/\sqrt{2},\quad\alpha_{L}=(\alpha_{1}+i\alpha_{2})/\sqrt{2},\] by virtue of Eqs. (14) and (16). These are the same as Eqs. (7). In a word, the Stokes parameters are not able to characterize the rotation of the polarization state of a plane wave in the chiral medium. ## IV Explanations and Discussions The above result can be explained with the newly advanced description [20] of the optical activity. As is well known, the planes of polarization of the linearly polarized waves (14) and (16) are rotated with propagation. To illustrate this more clearly, we rewrite their polarization vectors (15) and (17) as follows, \[\mathbf{a}_{1}(z)=\exp[-i(\bar{z}\cdot\mathbf{\Sigma})\tau z]\bar{x},\quad\mathbf{a}_{2}(z)=\exp[-i(\bar{z}\cdot\mathbf{\Sigma})\tau z]\bar{y}, \tag{19}\] where \((\Sigma_{k})_{ij}=-i\epsilon_{ijk}\) with \(\epsilon_{ijk}\) the Levi-Civita pseudotensor and \(\bar{z}\) denotes the unit vector along the \(z\)-axis. To obtain Eqs. (19), we have made use of the formula [26] \[\exp[-i({\bf a}\cdot{\bf\Sigma})\phi]{\bf b}={\bf b}\cos\phi-i({\bf a}\cdot{\bf\Sigma}){\bf b}\sin\phi+{\bf a}({\bf a}\cdot{\bf b})(1-\cos\phi)\] and the equality [27] \[({\bf a}\cdot{\bf\Sigma}){\bf b}=i{\bf a}\times{\bf b},\] where \({\bf a}\) and \({\bf b}\) are any two vectors. It is seen that the polarization vectors \({\bf a}_{1}\) and \({\bf a}_{2}\) result from the same rotation of \(\bar{x}\) and \(\bar{y}\), respectively. This is why they are orthogonal to each other at the same propagation distance \(z\). Substituting Eqs. (14) and (16) into Eq. (18), we get \[{\bf E}(z,t)=[\alpha_{1}{\bf a}_{1}(z)+\alpha_{2}{\bf a}_{2}(z)]\exp[i(kz-\omega t)]. \tag{20}\] Now that \({\bf a}_{1}\) and \({\bf a}_{2}\) are polarization vectors of the physically meaningful states (14) and (16), respectively, and are orthogonal to each other, they act in this expression as the linear-polarization basis. So there is no doubt that the coefficients \(\alpha_{1}\) and \(\alpha_{2}\) make up the Jones vector \(\alpha\). What is noteworthy here is, as just mentioned, that these polarization base vectors are simultaneously rotated with propagation, in sharp contrast with those in expression (2) for the polarization vector in the ordinary achiral medium. This shows that the optical activity described by the polarization vector \[{\bf a}(z)=\alpha_{1}{\bf a}_{1}(z)+\alpha_{2}{\bf a}_{2}(z)\] comes from the rotation of the polarization basis rather than from a change of the Jones vector. 
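Eqs. (19) can be checked directly by exponentiating the matrices \(\Sigma_{k}\); the following sketch of ours does so numerically, assuming SciPy is available (the rotation angle is an arbitrary test value).

```python
import numpy as np
from scipy.linalg import expm

# (Sigma_k)_{ij} = -i * epsilon_{ijk}, with epsilon the Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
Sigma = [-1j * eps[:, :, k] for k in range(3)]

phi = 0.7                              # an arbitrary test angle, playing tau*z
R = expm(-1j * phi * Sigma[2])         # exp[-i (zbar . Sigma) tau z]
x_hat, y_hat = np.eye(3)[0], np.eye(3)[1]
assert np.allclose(R @ x_hat, [np.cos(phi), np.sin(phi), 0])   # Eq. (15)
assert np.allclose(R @ y_hat, [-np.sin(phi), np.cos(phi), 0])  # Eq. (17)
print("Eqs. (19) reproduce the rotated basis vectors of Eqs. (15) and (17).")
```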
To express this peculiarity explicitly, we cast the above equation for the polarization vector into a compact form, \[{\bf a}(z)=\varpi(z)\alpha, \tag{21}\] where the row matrix \(\varpi(z)=(\begin{array}{cc}{\bf a}_{1}&{\bf a}_{2}\end{array})\), which consists of the unit vectors \({\bf a}_{1}(z)\) and \({\bf a}_{2}(z)\), represents the linear-polarization basis. It indicates that the polarization vector \({\bf a}(z)\) at any propagation distance \(z\) has its own polarization basis \(\varpi(z)\), while the Jones vector \(\alpha\) stays the same. Obviously, we have \[\varpi(z)=\exp[-i(\bar{z}\cdot{\bf\Sigma})\tau z]\varpi(0) \tag{22}\] in accordance with Eqs. (19), where \(\varpi(0)=(\begin{array}{cc}\bar{x}&\bar{y}\end{array})\) is the polarization basis at \(z=0\). Due to the rotation of the polarization basis with propagation, the Jones vector \(\alpha\) here is no longer equivalent to the polarization vector \({\bf a}(z)\). That is to say, Eq. (21) cannot be simplified as \[{\bf a}(z)=\alpha\] in the way that Eq. (2) is simplified as Eq. (9). This is why the Stokes parameters determined by the Jones vector via Eq. (3) are not able to characterize the rotation of the state of polarization. It is remarked that the rotation of the polarization basis given by Eqs. (19) is not to be confused with Eqs. (5), which describe a transformation of the representation. Incidentally, we point out that the state of polarization of light in a particular chiral medium is always rotated in the same way no matter what its Jones vector is. In fact, substituting Eq. (22) into Eq. (21), we have \[{\bf a}(z)=\exp[-i(\bar{z}\cdot{\bf\Sigma})\tau z]{\bf a}(0), \tag{23}\] where \({\bf a}(0)=\varpi(0)\alpha\) is the polarization vector at \(z=0\). Eq. (23) shows that the angle of rotation of the polarization ellipse per unit length is \(\tau\) regardless of the Jones vector. In the cases of circular polarization, the result of rotation turns out to be \(z\)-dependent phase factors [20], as is explicitly expressed in Eqs. (12). It is also noted that substitution of Eqs. (15) and (17) into Eq. (20) gives \[{\bf E}(z,t)=[\alpha_{x}(z)\bar{x}+\alpha_{y}(z)\bar{y}]\exp[i(kz-\omega t)], \tag{24}\] where \[\alpha_{x}(z) = \alpha_{1}\cos\tau z-\alpha_{2}\sin\tau z,\] \[\alpha_{y}(z) = \alpha_{1}\sin\tau z+\alpha_{2}\cos\tau z.\] If the fixed unit vectors \(\bar{x}\) and \(\bar{y}\) were taken as the polarization basis in the same way as is done in the ordinary achiral medium, the coefficients \(\alpha_{x}\) and \(\alpha_{y}\) would make up a \(z\)-dependent Jones vector [8], \(\alpha^{\prime}(z)=\left(\begin{array}{c}\alpha_{x}\\ \alpha_{y}\end{array}\right)\). From Eq. (3) it follows that the Stokes parameters defined by this "Jones vector", \(s^{\prime}_{i}=\alpha^{\prime\dagger}\hat{\sigma}_{i}\alpha^{\prime}\), are given by \[s^{\prime}_{1}= (\alpha^{*}_{1}\alpha_{1}-\alpha^{*}_{2}\alpha_{2})\cos 2\tau z-(\alpha^{*}_{1}\alpha_{2}+\alpha^{*}_{2}\alpha_{1})\sin 2\tau z, \tag{25a}\] \[s^{\prime}_{2}= (\alpha^{*}_{1}\alpha_{1}-\alpha^{*}_{2}\alpha_{2})\sin 2\tau z+(\alpha^{*}_{1}\alpha_{2}+\alpha^{*}_{2}\alpha_{1})\cos 2\tau z,\] (25b) \[s^{\prime}_{3}= -i(\alpha^{*}_{1}\alpha_{2}-\alpha^{*}_{2}\alpha_{1}), \tag{25c}\] which are usually \(z\)-dependent. Nevertheless, the Stokes parameters so defined do not correctly reflect the rotation of the state of polarization. 
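For completeness, here is a small sketch of ours that evaluates Eqs. (25) for two illustrative inputs; it makes the point of the next paragraph visible: the parameters \(s^{\prime}_{i}\) drift with \(z\) for a linearly polarized input but remain constant for a circularly polarized one.

```python
import numpy as np

def naive_stokes(alpha, tz):
    """Evaluate Eqs. (25) at rotation angle tau*z = tz."""
    a1, a2 = alpha
    p = (np.conj(a1) * a1 - np.conj(a2) * a2).real
    q = (np.conj(a1) * a2 + np.conj(a2) * a1).real
    s1 = p * np.cos(2 * tz) - q * np.sin(2 * tz)             # Eq. (25a)
    s2 = p * np.sin(2 * tz) + q * np.cos(2 * tz)             # Eq. (25b)
    s3 = (-1j * (np.conj(a1) * a2 - np.conj(a2) * a1)).real  # Eq. (25c)
    return s1, s2, s3

linear = np.array([1, 0], dtype=complex)                  # linear polarization
circular = np.array([1, 1j], dtype=complex) / np.sqrt(2)  # circular polarization
for tz in [0.0, 0.4, 0.8]:
    print(f"tau*z = {tz}:  linear {naive_stokes(linear, tz)}"
          f"  circular {naive_stokes(circular, tz)}")
```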
This is because in the cases of circular polarization, \(\alpha=\frac{1}{\sqrt{2}}\bigg{(}\begin{array}{c}1\\ \pm i\end{array}\bigg{)}\), the Stokes parameters given by these equations are not \(z\)-dependent, \[s^{\prime}_{1}=0,\quad s^{\prime}_{2}=0,\quad s^{\prime}_{3}=\pm 1.\] They are not able to convey the rotation of the circularly polarized waves, as we just mentioned. The key point is that the state of polarization of a linearly polarized wave in the chiral medium is rotated with propagation. Its polarization vector at any propagation distance \(z\) can no longer be the fixed \(\bar{x}\) or \(\bar{y}\). So, even though the electric field of any elliptically polarized wave in the chiral medium can be written mathematically in the form of Eq. (24), it is physically unreasonable to take \(\bar{x}\) and \(\bar{y}\) as the polarization basis. That is to say, \(\alpha^{\prime}(z)\) cannot be a physically meaningful Jones vector at all. ## V Conclusions and Remarks In conclusion, we showed, through analyzing the optical rotation in an isotropic chiral medium, that the Stokes parameters determined by the Jones vector via Eq. (3) are not able to completely characterize the state of polarization of plane light waves. We demonstrated that the polarization vector of the plane wave in the chiral medium can no longer be equivalent to its Jones vector, in contrast with the polarization vector of the plane wave in the ordinary achiral medium. The key difference is that in the latter case the fixed unit vectors \(\bar{x}\) and \(\bar{y}\) in the laboratory reference frame can always be taken as the polarization basis, whereas in the former case no such polarization basis exists. However, Eq. (21) indicates that the Stokes parameters determined by the Jones vector \(\alpha\) are indeed physical quantities to describe the state of polarization of the plane wave at any propagation distance \(z\) in the chiral medium, though they are the same at different values of \(z\). Therefore, in order to completely characterize the rotation of the polarization of this wave along with the propagation, an extra degree of freedom is needed. This is the rotation angle \(\tau z\) that characterizes the rotation of the polarization basis (22) along with the propagation. In a word, we discovered from the optical activity that the Stokes parameters are not able to completely characterize the state of polarization of plane waves. This result raises a significant question about the nature of the Stokes parameters themselves. As a matter of fact, the statements about the nature of the Stokes parameters in the literature are rather subtle. On the one hand, they are theoretically described as quantities in an abstract three-dimensional Euclidean space [9], which is sometimes called Stokes space [8]. On the other hand, they are considered to be experimentally observable [11; 12; 13]. It is hard to conceive that a physical quantity in an abstract space can be observed. The fact is that when they are measured experimentally [11; 28; 29], the Stokes parameters defined via Eqs. (3) and (4) are always treated as quantities in the laboratory reference frame. This is because what the linear-polarization basis \(\varpi^{(A)}\) in Eq. (4) represents is exactly the laboratory reference frame [6] when the plane wave is assumed to propagate along the \(z\)-axis. The unit vector \(\mathbf{n}\) in Eq. (10) is with respect to this reference frame. 
But as pointed out by Messiah [30] and Mandel and Wolf [31], even the linear-polarization basis for plane waves in the achiral medium is defined only up to a rotation about the propagation direction. One can choose, within the laboratory reference frame represented by \(\varpi^{(A)}\), a new linear-polarization basis, say \(\varpi^{\prime(A)}=(\begin{array}{cc}\bar{x}^{\prime}&\bar{y}^{\prime}\end{array})\), where the unit vectors \(\bar{x}^{\prime}\) and \(\bar{y}^{\prime}\) together with the propagation direction, the \(z\)-axis, form a new right-handed Cartesian frame. In such a case, the resultant Stokes parameters of the same plane wave (1) will be quantities in the new reference frame rather than in the laboratory reference frame. Noticing such a relationship, we have reason to believe that the Stokes parameters defined via Eqs. (3) and (21) are no longer quantities in the laboratory reference frame. Instead, they correspond, at any propagation distance, to a local reference frame represented by the polarization basis \(\varpi(z)\). The rotation of the polarization basis implies the rotation of the local reference frame. This may help explain why the Stokes parameters are not able to completely characterize the state of polarization of plane waves. After all, the polarization vector, whether (4) in the achiral medium or (21) in the chiral medium, is a quantity in the laboratory reference frame. If that is the case, what is the physical meaning of the Stokes parameters themselves in the local reference frame? How are they related to the state of polarization?
2310.00241
The Complexity of Distance-$r$ Dominating Set Reconfiguration
For a fixed integer $r \geq 1$, a distance-$r$ dominating set (D$r$DS) of a graph $G = (V, E)$ is a vertex subset $D \subseteq V$ such that every vertex in $V$ is within distance $r$ from some member of $D$. Given two D$r$DSs $D_s, D_t$ of $G$, the Distance-$r$ Dominating Set Reconfiguration (D$r$DSR) problem asks if there is a sequence of D$r$DSs that transforms $D_s$ into $D_t$ (or vice versa) such that each intermediate member is obtained from its predecessor by applying a given reconfiguration rule exactly once. The problem for $r = 1$ has been well-studied in the literature. We consider D$r$DSR for $r \geq 2$ under two well-known reconfiguration rules: Token Jumping ($\mathsf{TJ}$, which involves replacing a member of the current D$r$DS by a non-member) and Token Sliding ($\mathsf{TS}$, which involves replacing a member of the current D$r$DS by an adjacent non-member). It is known that under any of $\mathsf{TS}$ and $\mathsf{TJ}$, the problem on split graphs is $\mathtt{PSPACE}$-complete for $r = 1$. We show that for $r \geq 2$, the problem is in $\mathtt{P}$, resulting in an interesting complexity dichotomy. Along the way, we prove some non-trivial bounds on the length of a shortest reconfiguration sequence on split graphs when $r = 2$ which may be of independent interest. Additionally, we design a linear-time algorithm under $\mathsf{TJ}$ on trees. On the negative side, we show that D$r$DSR for $r \geq 1$ on planar graphs of maximum degree three and bounded bandwidth is $\mathtt{PSPACE}$-complete, improving the degree bound of previously known results. We also show that the known $\mathtt{PSPACE}$-completeness results under $\mathsf{TS}$ and $\mathsf{TJ}$ for $r = 1$ on bipartite graphs and chordal graphs can be extended for $r \geq 2$.
Niranka Banerjee, Duc A. Hoang
2023-09-30T03:41:54Z
http://arxiv.org/abs/2310.00241v2
# The Complexity of Distance-\(r\) Dominating Set Reconfiguration ###### Abstract For a fixed integer \(r\geq 1\), a _distance-\(r\) dominating set_ of a graph \(G=(V,E)\) is a vertex subset \(D\subseteq V\) such that every vertex in \(V\) is within distance \(r\) from some member of \(D\). Given two distance-\(r\) dominating sets \(D_{s},D_{t}\) of \(G\), the Distance-\(r\) Dominating Set Reconfiguration (DrDSR) problem asks if there is a sequence of distance-\(r\) dominating sets that transforms \(D_{s}\) into \(D_{t}\) (or vice versa) such that each intermediate member is obtained from its predecessor by applying a given reconfiguration rule exactly once. The problem for \(r=1\) has been well-studied in the literature. We consider DrDSR for \(r\geq 2\) under two well-known reconfiguration rules: Token Jumping (TJ, which involves replacing a member of the current DrDS by a non-member) and Token Sliding (TS, which involves replacing a member of the current DrDS by an adjacent non-member). We show that DrDSR (\(r\geq 2\)) is PSPACE-complete under both TJ and TS on bipartite graphs, planar graphs of maximum degree six and bounded bandwidth, and chordal graphs. On the positive side, we show that DrDSR (\(r\geq 2\)) can be solved in polynomial time on split graphs and cographs under both TS and TJ and on trees and interval graphs under TJ. Along the way, we observe some properties of a shortest reconfiguration sequence in split graphs when \(r=2\), which may be of independent interest. **Keywords:** distance-\(r\) dominating set, reconfiguration problem, computational complexity, PSPACE-completeness, polynomial-time algorithm **2020 MSC:** 05C85 ## 1 Introduction For the last few decades, _reconfiguration problems_ have independently emerged in different areas of computer science, including recreational mathematics (e.g., games and puzzles), computational geometry (e.g., flip graphs of triangulations), constraint satisfaction (e.g., solution space of Boolean formulas), and even quantum complexity theory (e.g., ground state connectivity) [3, 8, 10, 17]. In a _reconfiguration variant_ of a computational problem (e.g., Satisfiability, Independent Set, Dominating Set, Vertex-Coloring, etc.), two _feasible solutions_ (e.g., satisfying truth assignments, independent sets, dominating sets, proper vertex-colorings, etc.) \(S\) and \(T\) are given along with a _reconfiguration rule_ that describes how to slightly modify one feasible solution to obtain a new one. The question is whether one can transform/reconfigure \(S\) into \(T\) via a sequence of feasible solutions such that each intermediate member is obtained from its predecessor by applying the given rule exactly once. Such a sequence, if it exists, is called a _reconfiguration sequence_. In 1975, Meir and Moon [24] combined the concepts of "distance" and "domination" in graphs and introduced the so-called _distance-\(r\) dominating set_ of a graph (which they called an "\(r\)-covering"), where \(r\geq 1\) is some fixed integer. For a fixed positive integer \(r\) and a graph \(G\), a _distance-\(r\) dominating set_ (DrDS) (also known as \(r\)_-hop dominating set_ or \(r\)_-basis_) of \(G\) is a vertex subset \(D\) where each vertex of \(G\) is within distance \(r\) from some member of \(D\). In particular, a D1DS is nothing but a _dominating set_ of \(G\). Given a graph \(G\) and some positive integer \(k\), the Distance-\(r\) Dominating Set problem asks if there is a DrDS of \(G\) of size at most \(k\). 
Distance-\(r\) Dominating Set remains NP-complete even on bipartite graphs or chordal graphs of diameter \(2r+1\) [22]. Since the work of Meir and Moon, this notion of distance domination in graphs has been extensively studied from different perspectives. We refer readers to the survey of Henning [7] for more details. Reconfiguration of dominating sets has been well-studied in the literature from both algorithmic and graph-theoretic viewpoints. We briefly mention here some well-known algorithmic results and refer readers to [8] for recent developments from the graph-theoretic perspective. Imagine that a token is placed on each vertex of a dominating set of a graph \(G\) and no vertex has more than one token. In a reconfiguration setting for dominating sets, the following reconfiguration rules have been considered. * **Token Sliding (TS):** one can move a token to one of its unoccupied neighbors as long as the resulting token-set forms a dominating set. * **Token Jumping (TJ):** one can move a token to any unoccupied vertex as long as the resulting token-set forms a dominating set. * **Token Addition/Removal (\(\mathsf{TAR}(k)\)):** one can either add or remove a token as long as the resulting token-set forms a dominating set of size at most some threshold \(k\geq 0\). Haddadan et al. [14] first studied the computational complexity of Dominating Set Reconfiguration (DSR) under \(\mathsf{TAR}\) and showed that the problem is PSPACE-complete on planar graphs of maximum degree six, bounded bandwidth graphs, split graphs and bipartite graphs. On the positive side, they designed polynomial-time algorithms for solving DSR under \(\mathsf{TAR}\) on cographs, forests, and interval graphs. Bonamy et al. [4] observed that the TJ and \(\mathsf{TAR}\) rules are equivalent under some constraints, and therefore the above-mentioned results indeed hold for DSR under TJ. Bonamy et al. [4] first studied the computational complexity of DSR under TS and showed that the hardness results of Haddadan et al. [14] hold even under TS. On the positive side, Bonamy et al. [4] designed polynomial-time algorithms for solving DSR under TS on cographs and dually chordal graphs (which contains trees and interval graphs). Bousquet and Joffard [5] later showed that DSR under TS is PSPACE-complete on circle graphs and can be solved in polynomial time on circular-arc graphs, answering some open questions previously asked by Bonamy et al. [4]. Recently, Křišťan and Svoboda [1] improved the positive results of Bonamy et al. [4] by showing polynomial-time algorithms to find a _shortest_ reconfiguration sequence, if one exists, between two given dominating sets under TS when the input graph is either a tree or an interval graph. However, their techniques cannot be extended to dually chordal graphs. A systematic study on the parameterized complexity of several reconfiguration problems, including DSR, was initiated by Mouawad et al. [13]. There are two natural parameterizations: the number of tokens \(k\) and the length of a reconfiguration sequence \(\ell\). In [13], Mouawad et al. showed that DSR under \(\mathsf{TAR}\) on general graphs is \(\mathsf{W[1]}\)-hard parameterized by \(k\) and \(\mathsf{W[2]}\)-hard parameterized by \(k+\ell\). When parameterized by \(k\) on graphs excluding \(K_{d,d}\) as a subgraph for any constant \(d\) (including bounded degeneracy and nowhere dense graphs), Lokshtanov et al. [9] designed an FPT algorithm for solving the problem. 
When parameterized by \(\ell\) alone, it was mentioned in [3] that the problem is fixed-parameter tractable on any class where first-order model-checking is fixed-parameter tractable. We refer readers to [3, 9] and the references therein for more details. To the best of our knowledge, for any fixed integer \(r\geq 2\), the computational complexity of Distance-\(r\) Dominating Set Reconfiguration (D\(r\)DSR) has not yet been studied. On the other hand, from the parameterized complexity viewpoint, Siebertz [11] studied DrDSR under \(\mathsf{TAR}\) parameterized by \(k\) and proved that there exists a constant \(r\) such that the problem is \(\mathsf{W[2]}\)-hard on somewhere dense graph classes that are closed under taking subgraphs. On the positive side, Siebertz showed that the problem is in FPT on nowhere dense graphs. From the graph-theoretic viewpoint, DeVos et al. [6] introduced the \(\gamma_{r}\)_-graph_ of a graph \(G\)--a (reconfiguration) graph whose nodes are _minimum_ distance-\(r\) dominating sets of \(G\) and two nodes are adjacent if one can be obtained from the other by applying a single TJ-move--and proved a number of results on its realizability. In this paper, to obtain a better understanding of the separating line between "hard" and "easy" instances of DrDSR (\(r\geq 2\)) on different graphs, we study the problem under TS and TJ from the computational complexity viewpoint. (The definitions of these rules are similar to those for DSR.) ### Our Results In Section 3, we prove hardness results for DrDSR on different graph classes. We show that most of the hardness results of Haddadan et al. [14] and Bonamy et al. [4] for \(r=1\) can be extended to any fixed integer \(r\geq 2\) under both TS and TJ. In Section 3.1, we show that DrDSR (\(r\geq 2\)) is PSPACE-complete on bipartite graphs (Theorem 1). In Section 3.2, we show that DrDSR (\(r\geq 2\)) on planar graphs of maximum degree six and bounded bandwidth is PSPACE-complete (Theorem 2). Recall that Haddadan et al. [14] proved the PSPACE-completeness of DrDSR under \(\mathsf{TAR}\) on split graphs (and therefore on chordal graphs) for \(r=1\). In Section 3.3, we show that for fixed integer \(r\geq 2\), \(\mathrm{D}r\mathrm{DSR}\) remains \(\mathtt{PSPACE}\)-complete on chordal graphs (Theorem 3) under \(\mathtt{TS}\) and \(\mathtt{TJ}\). Note that all of our hardness results also hold even when considering minimum \(\mathrm{D}r\mathrm{DS}\)s. In Section 4, we prove some upper bound results of \(\mathrm{D}r\mathrm{DSR}\) in different graph classes. In Sections 4.1 and 4.2, we prove some simple observations which then imply that \(\mathrm{D}r\mathrm{DSR}\) can be solved in polynomial time under \(\mathtt{TJ}\) on interval graphs (Corollary 5) and under both \(\mathtt{TS}\) and \(\mathtt{TJ}\) on cographs (Corollary 7). In Section 4.3, we show that for fixed integer \(r\geq 2\), \(\mathrm{D}r\mathrm{DSR}\) can be solved in polynomial time on split graphs under both \(\mathtt{TS}\) and \(\mathtt{TJ}\) (Theorem 9 and Corollary 10). This result provides a surprising dichotomy to the lower bound result on split graphs of Haddadan et al. [14]. We also provide examples showing that the natural lower bounds on the lengths of a shortest reconfiguration sequence are sometimes not achievable on split graphs under both \(\mathtt{TS}\) and \(\mathtt{TJ}\) when \(r=2\) (Theorems 11 and 12). In Section 4.4, we prove that \(\mathrm{D}r\mathrm{DSR}\) under \(\mathtt{TJ}\) on trees (and forests) can be solved in polynomial time (Theorem 13). 
In short, we show that the positive results of Haddadan et al. [14] for \(r=1\) under \(\mathtt{TAR}\) on cographs, forests, and interval graphs also hold for fixed \(r\geq 2\) under \(\mathtt{TJ}\). ## 2 Preliminaries For the concepts and notations not defined here, we refer readers to [12]. Unless otherwise mentioned, throughout this paper, we always consider simple, connected, undirected graphs \(G\) with vertex-set \(V(G)\) and edge-set \(E(G)\). For any pair of vertices \(u,v\), the _distance_ between \(u\) and \(v\) in \(G\), denoted by \(\mathsf{dist}_{G}(u,v)\), is the length of a shortest path between them. For two vertex subsets \(X,Y\), we use \(X-Y\) and \(X+Y\) to indicate \(X\setminus Y\) and \(X\cup Y\), respectively. If \(Y\) contains a single vertex \(u\), we write \(X-u\) and \(X+u\) instead of \(X-\{u\}\) and \(X+\{u\}\), respectively. We denote by \(X\Delta Y\) their _symmetric difference_, i.e., \(X\Delta Y=(X-Y)+(Y-X)\). For a subgraph \(H\) of \(G\), we denote by \(G-H\) the graph obtained from \(G\) by deleting all vertices of \(H\) and their incident edges in \(G\). A _dominating set (DS)_ of \(G\) is a vertex subset \(D\) such that for every \(u\in V(G)\), there exists \(v\in D\) such that \(\mathsf{dist}_{G}(u,v)\leq 1\). For a fixed positive integer \(r\), a _distance-\(r\) dominating set (DrDS)_ of \(G\) is a vertex subset \(D\) such that for every \(u\in V(G)\), there exists \(v\in D\) such that \(\mathsf{dist}_{G}(u,v)\leq r\). In particular, any D1DS is also a DS and vice versa. Let \(N^{r}_{G}[u]\) be the set of all vertices of distance at most \(r\) from \(u\) in \(G\). We say that a vertex \(v\) is _\(r\)-dominated_ by \(u\) (or \(u\) _\(r\)-dominates_ \(v\)) if \(v\in N^{r}_{G}[u]\). We say that a vertex subset \(X\) is _\(r\)-dominated_ by some vertex subset \(Y\) if each vertex in \(X\) is \(r\)-dominated by some vertex in \(Y\). A \(\mathrm{D}r\mathrm{DS}\) is nothing but a vertex subset \(D\) that \(r\)-dominates \(V(G)\). We denote by \(\gamma_{r}(G)\) the size of a minimum \(\mathrm{D}r\mathrm{DS}\) of \(G\). We say that \(u\) _covers_ the edge \(e\in E(G)\) if \(u\) is an endpoint of \(e\). A _vertex cover (VC)_ of \(G\) is a vertex subset \(C\) such that for every edge \(uv\in E(G)\), either \(u\in C\) or \(v\in C\). Intuitively, vertices in \(C\) cover all edges of \(G\). Observe that in a connected graph \(G\), any VC is also a DS. We denote by \(\tau(G)\) the size of a minimum VC of \(G\). Throughout this paper, we write "\((G,D_{s},D_{t})\) under \(\mathsf{R}\)" to indicate an instance of \(\mathrm{D}r\mathrm{DSR}\) where \(D_{s}\) and \(D_{t}\) are two given \(\mathrm{D}r\mathrm{DS}\)s of a graph \(G\) and the reconfiguration rule is \(\mathsf{R}\in\{\mathtt{TS},\mathtt{TJ}\}\). Imagine that a token is placed on each vertex in a \(\mathrm{D}r\mathrm{DS}\) of a graph \(G\). A \(\mathsf{TS}\)_-sequence_ in \(G\) between two \(\mathrm{D}r\mathrm{DS}\)s \(D_{s}\) and \(D_{t}\) is the sequence \(\mathcal{S}=\langle D_{s}=D_{0},D_{1},\ldots,D_{q}=D_{t}\rangle\) such that for \(i\in\{0,\ldots,q-1\}\), the set \(D_{i}\) is a \(\mathrm{D}r\mathrm{DS}\) of \(G\) and there exists a pair \(x_{i},y_{i}\in V(G)\) such that \(D_{i}-D_{i+1}=\{x_{i}\}\), \(D_{i+1}-D_{i}=\{y_{i}\}\), and \(x_{i}y_{i}\in E(G)\). A \(\mathsf{TJ}\)_-sequence_ in \(G\) can be defined similarly without the restriction \(x_{i}y_{i}\in E(G)\). 
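To make these definitions concrete, here is a minimal sketch of ours (assuming the networkx library; all function names are hypothetical) that tests the DrDS property via breadth-first search and validates a single TS- or TJ-move.

```python
import networkx as nx

def r_ball(G, u, r):
    """N^r_G[u]: the set of vertices within distance r of u."""
    return set(nx.single_source_shortest_path_length(G, u, cutoff=r))

def is_drds(G, D, r):
    """Does D r-dominate V(G)?"""
    covered = set()
    for u in D:
        covered |= r_ball(G, u, r)
    return covered == set(G.nodes)

def is_valid_move(G, D, x, y, r, rule):
    """One reconfiguration step: move the token on x to y under TS or TJ."""
    if x not in D or y in D:
        return False
    if rule == "TS" and not G.has_edge(x, y):   # TS additionally needs an edge
        return False
    return is_drds(G, (D - {x}) | {y}, r)

P = nx.path_graph(9)                  # the path 0-1-...-8
D = {2, 7}                            # a minimum D2DS, so gamma_2 = 2 here
print(is_drds(P, D, r=2))                         # True
print(is_valid_move(P, D, 7, 6, r=2, rule="TS"))  # True: 8 stays covered by 6
print(is_valid_move(P, D, 2, 3, r=2, rule="TS"))  # False: 0 loses coverage
```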
Depending on the considered rule \(\mathsf{R}\in\{\mathtt{TS},\mathtt{TJ}\}\), we can also say that \(D_{i+1}\) is obtained from \(D_{i}\) by _immediately sliding/jumping_ a token from \(x_{i}\) to \(y_{i}\) and write \(x_{i}\stackrel{{ G}}{{\longrightarrow}}\mathsf{R}\ y_{i}\). Thus, we can also write \(\mathcal{S}=\langle x_{0}\stackrel{{ G}}{{\longrightarrow}}\mathsf{R}\ y_{0},\ldots,x_{q-1}\stackrel{{ G}}{{\longrightarrow}}\mathsf{R}\ y_{q-1}\rangle\). In short, \(\mathcal{S}\) can be viewed as an (ordered) sequence of either \(\mathrm{D}r\mathrm{DS}\)s or token-moves. (Recall that we defined \(\mathcal{S}\) as a sequence between \(D_{s}\) and \(D_{t}\). As a result, when regarding \(\mathcal{S}\) as a sequence of token-moves, we implicitly assume that the initial \(\mathrm{D}r\mathrm{DS}\) is \(D_{s}\).) With respect to the latter viewpoint, we say that \(\mathcal{S}\) _slides/jumps a token \(t\) from \(u\) to \(v\) in \(G\)_ if \(t\) is originally placed on \(u\in D_{0}\) and finally on \(v\in D_{q}\) after performing \(\mathcal{S}\). The _length_ of an \(\mathsf{R}\)-sequence is simply the number of times the rule \(\mathsf{R}\) is applied. Additionally, the length of a _shortest_ \(\mathsf{R}\)-sequence in \(G\) between two \(\mathrm{D}r\mathrm{DS}\)s \(D_{s}\) and \(D_{t}\) is denoted by \(\mathsf{OPT}_{\mathsf{R}}(G,D_{s},D_{t})\). ## 3 Hardness Results We begin this section with the following simple observation. It is well-known that for any computational problem in \(\mathtt{NP}\), any of its reconfiguration variants is in \(\mathtt{PSPACE}\) [18, Theorem 1]. As a result, when proving the PSPACE-completeness of \(\mathrm{D}r\mathrm{DSR}\) on a certain graph class, it suffices to show a polynomial-time reduction. ### Bipartite Graphs **Theorem 1**.: \(\mathrm{D}r\mathrm{DSR}\) _under \(\mathsf{R}\in\{\mathsf{TS},\mathsf{TJ}\}\) on bipartite graphs is PSPACE-complete for any \(r\geq 2\)._ Proof.: We give a polynomial-time reduction from Minimum Vertex Cover Reconfiguration (M-VCR) on general graphs, which was shown to be PSPACE-complete by Ito et al. [18]. Our reduction extends the one given by Bonamy et al. [4] for the case \(r=1\). Let \((G,C_{s},C_{t})\) be an instance of M-VCR under \(\mathsf{R}\) where \(C_{s},C_{t}\) are two minimum VCs of a graph \(G\). We will construct an instance \((G^{\prime},D_{s},D_{t})\) of \(\mathrm{D}r\mathrm{DSR}\) under \(\mathsf{R}\) where \(D_{s}\) and \(D_{t}\) are two \(\mathrm{D}r\mathrm{DS}\)s of a bipartite graph \(G^{\prime}\). Suppose that \(V(G)=\{v_{1},\ldots,v_{n}\}\). We construct \(G^{\prime}\) from \(G\) as follows. 1. Replace each edge \(v_{i}v_{j}\) by a path \(P_{ij}=x_{ij}^{0}x_{ij}^{1}\ldots x_{ij}^{2r}\) of length \(2r\) (\(1\leq i,j\leq n\)) with \(x_{ij}^{0}=v_{i}\) and \(x_{ij}^{2r}=v_{j}\). Observe that \(x_{ij}^{p}=x_{ji}^{2r-p}\) for \(0\leq p\leq 2r\). 2. Add a new vertex \(x\) and join it to every vertex in \(V(G)\). 3. Attach a new path \(P_{x}\) of length \(r\) to \(x\). We define \(D_{s}=C_{s}+x\) and \(D_{t}=C_{t}+x\). Clearly, this construction can be done in polynomial time. (See Figure 1.) In the next two claims, we prove that our construction results in an instance of \(\mathrm{D}r\mathrm{DSR}\) on bipartite graphs: Claim 1.1 shows that \(G^{\prime}\) is bipartite and Claim 1.2 implies that both \(D_{s}\) and \(D_{t}\) are minimum \(\mathrm{D}r\mathrm{DS}\)s of \(G^{\prime}\). 
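For readers who wish to experiment, the following sketch of ours (in the same networkx setting as the previous one, with hypothetical helper names) builds \(G^{\prime}\) from \(G\) exactly as in steps 1-3 and checks the two claims on a small example.

```python
import networkx as nx

def is_drds(G, D, r):
    """Check the DrDS property by a cutoff-r BFS from every member of D."""
    covered = set()
    for u in D:
        covered |= set(nx.single_source_shortest_path_length(G, u, cutoff=r))
    return covered == set(G.nodes)

def build_G_prime(G, r):
    """Steps 1-3 of the construction: subdivide edges, add apex x, attach P_x."""
    Gp = nx.Graph()
    Gp.add_nodes_from(G.nodes)
    for u, v in G.edges:
        # step 1: replace the edge uv by a path P_uv of length 2r
        nx.add_path(Gp, [u] + [("p", u, v, k) for k in range(1, 2 * r)] + [v])
    Gp.add_edges_from(("x", v) for v in G.nodes)                   # step 2
    nx.add_path(Gp, ["x"] + [("px", k) for k in range(1, r + 1)])  # step 3
    return Gp

G = nx.cycle_graph(4)            # C_4; {0, 2} is a minimum vertex cover
Gp = build_G_prime(G, r=2)
D = {0, 2, "x"}                  # the set C + x of Claim 1.2
print(nx.is_bipartite(Gp))       # True, as Claim 1.1 asserts
print(is_drds(Gp, D, r=2))       # True, as Claim 1.2 asserts
```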
Figure 1: An example of constructing a bipartite graph \(G^{\prime}\) from a graph \(G\) when \(r=2\) in the proof of Theorem 1.

**Claim 1.1**.: \(G^{\prime}\) _is a bipartite graph._ Proof.: We show that any cycle \(\mathcal{C}^{\prime}\) in \(G^{\prime}\) has even length and therefore \(G^{\prime}\) is bipartite. Observe that no cycle of \(G^{\prime}\) contains a vertex from \(V(P_{x})-x\). From the construction, if \(\mathcal{C}^{\prime}\) does not contain \(x\), it follows that \(V(\mathcal{C}^{\prime})\cap V(G)\) must form a cycle \(\mathcal{C}\) of \(G\). Therefore, \(\mathcal{C}^{\prime}\) is of length \(2r|E(\mathcal{C})|\), which is even. On the other hand, if \(\mathcal{C}^{\prime}\) contains \(x\), it follows that \(V(\mathcal{C}^{\prime})\cap V(G)\) must form a path \(\mathcal{P}\) of \(G\). Therefore, \(\mathcal{C}^{\prime}\) is of length \(2+2r|E(\mathcal{P})|\), which again is even. **Claim 1.2**.: _Any set of the form \(C+x\), where \(C\) is a minimum VC of \(G\), is a minimum DrDS of \(G^{\prime}\)._ Proof.: To see this, note that, by construction, \(x\) \(r\)-dominates every vertex in \(V(G^{\prime})-\bigcup_{v_{i}v_{j}\in E(G)}\{x_{ij}^{r}\}\). Additionally, since each \(x_{ij}^{r}\) belongs to exactly one path \(P_{ij}\) and \(C\) is a minimum VC of \(G\), it follows that \(C\) \(r\)-dominates \(\bigcup_{v_{i}v_{j}\in E(G)}\{x_{ij}^{r}\}\). Thus, \(C+x\) \(r\)-dominates \(V(G^{\prime})\), i.e., \(C+x\) is a \(\mathrm{D}r\mathrm{DS}\) of \(G^{\prime}\). It remains to show that \(C+x\) is minimum. Indeed, it is sufficient to show that \(\tau(G)+1=\gamma_{r}(G^{\prime})\) where \(\tau(G)\) and \(\gamma_{r}(G^{\prime})\) are respectively the size of a minimum VC of \(G\) and a minimum \(\mathrm{D}r\mathrm{DS}\) of \(G^{\prime}\). Since \(C+x\) is a \(\mathrm{D}r\mathrm{DS}\) of \(G^{\prime}\), we have \(\tau(G)+1=|C|+1\geq\gamma_{r}(G^{\prime})\). On the other hand, note that any minimum \(\mathrm{D}r\mathrm{DS}\) \(D^{\prime}\) of \(G^{\prime}\) must \(r\)-dominate \(V(P_{x})\) and therefore contains a vertex of \(P_{x}\) (which, by the construction, does not belong to \(G\)). Moreover, from the construction of \(G^{\prime}\), each path \(P_{ij}\) (\(1\leq i,j\leq n\) and \(v_{i}v_{j}\in E(G)\)) is of length \(2r\). Thus, in order to \(r\)-dominate all \(V(P_{ij})\), \(D^{\prime}\) needs to contain at least one vertex from each path \(P_{ij}\). Therefore, \(\gamma_{r}(G^{\prime})=|D^{\prime}|\geq 1+\tau(G)\). Our proof is complete. Before proving the correctness of our reduction, we prove some useful observations. **Claim 1.3**.: _Let \(D=C+x\) be a DrDS of \(G^{\prime}\) where \(C\) is a minimum VC of \(G\). Then,_ 1. \(D-u+y\) _is not a DrDS of_ \(G^{\prime}\)_, for any_ \(u\in C\) _and_ \(y\in V(P_{x})-x\)_._ 2. \(D-x+v\) _is not a DrDS of_ \(G^{\prime}\)_, for any_ \(v\in V(G^{\prime})-x\)_._ 3. \(D-u+z\) _is not a DrDS of_ \(G^{\prime}\)_, for_ \(u=v_{i}\in C\) _and_ \(z\notin\bigcup_{\{j|v_{j}\in N_{G}(v_{i})-C\}}V(P_{ij})\)_._ Proof.: 1. Suppose to the contrary that there exist \(u\in C\) and \(y\in V(P_{x})-x\) such that \(D^{\prime}=D-u+y\) is a DrDS of \(G^{\prime}\). Since \(|D^{\prime}|=|D|=\tau(G)+1\), Claim 1.2 implies that \(D^{\prime}\) is a minimum DrDS of \(G^{\prime}\). On the other hand, from the construction of \(G^{\prime}\), any vertex \(r\)-dominated by \(y\) must also be \(r\)-dominated by \(x\). Thus, \(D^{\prime}-y\) is also a DrDS of \(G^{\prime}\), which is a contradiction. 2. 
Suppose to the contrary that there exists \(v\in V(G^{\prime})-x\) such that \(D^{\prime}=D-x+v\) is a DrDS of \(G^{\prime}\). From (a), it follows that \(v\in V(P_{x})-x\); otherwise some vertex of \(P_{x}\) would not be \(r\)-dominated by any member of \(D^{\prime}\). Since \(C\) is a minimum VC of \(G\), it follows that there exists a pair \(i,j\in\{1,\ldots,n\}\) such that \(v_{i}v_{j}\in E(G)\) and \(C\cap\{v_{i},v_{j}\}=\{v_{i}\}\); otherwise every edge of \(G\) would have both endpoints in \(C\), so every vertex of the connected graph \(G\) would belong to \(C\) and \(C\) would not be minimum--a contradiction. From the construction of \(G^{\prime}\), \(x\) is the unique vertex in \(V(P_{x})\) that \(r\)-dominates \(x_{ij}^{r+1}\). Thus, \(x_{ij}^{r+1}\) is not \(r\)-dominated by any vertex in \(D^{\prime}=D-x+v\) for \(v\in V(P_{x})-x\), which is a contradiction. 3. Let \(j\in\{1,\ldots,n\}\) be such that \(v_{j}\in N_{G}(u)-C=N_{G}(v_{i})-C\). Since \(C\) is a minimum VC of \(G\), such a vertex \(v_{j}\) must exist. From the construction of \(G^{\prime}\), the vertex \(u=v_{i}\) is the unique vertex in \(D\) that \(r\)-dominates \(x_{ij}^{r}\). Thus, in order to keep \(x_{ij}^{r}\) being \(r\)-dominated, a token on \(u\) can only be moved to some vertex in \(\bigcup_{\{j|v_{j}\in N_{G}(v_{i})-C\}}V(P_{ij})\). Intuitively, starting from a token-set of the form \(C+x\) for some minimum VC \(C\) of \(G\), Claim 1.3(a) means that as long as \(x\) has a token, no other token can be moved in \(G^{\prime}\) to a vertex in \(P_{x}-x\), Claim 1.3(b) implies that if \(x\) has a token, it will never be moved in \(G^{\prime}\), and Claim 1.3(c) says that a token cannot be moved in \(G^{\prime}\) "too far" from its original position. We are now ready to show the correctness of our reduction. (Claims 1.4 and 1.5.) **Claim 1.4**.: _Under \(\mathsf{TS}\), \((G,C_{s},C_{t})\) is a yes-instance if and only if \((G^{\prime},D_{s},D_{t})\) is a yes-instance._ Proof.: (\(\Rightarrow\)) Suppose that \(\mathcal{S}\) is a \(\mathsf{TS}\)-sequence in \(G\) between \(C_{s}\) and \(C_{t}\). We construct a sequence \(\mathcal{S}^{\prime}\) of token-slides in \(G^{\prime}\) between \(D_{s}\) and \(D_{t}\) by replacing each \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TS}}v_{j}\) in \(\mathcal{S}\) with the sequence \(\mathcal{S}_{ij}=\langle v_{i}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TS}}x_{ij}^{1},x_{ij}^{1}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TS}}x_{ij}^{2},\ldots,x_{ij}^{2r-1}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TS}}x_{ij}^{2r}\rangle\), where \(i,j\in\{1,\ldots,n\}\) and \(v_{i}v_{j}\in E(G)\). Intuitively, sliding a token \(t_{ij}\) from \(v_{i}\) to its neighbor \(v_{j}\) in \(G\) corresponds to sliding \(t_{ij}\) from \(v_{i}\) to \(v_{j}\) in \(G^{\prime}\) along the path \(P_{ij}\). Since \(x\) always \(r\)-dominates \(V(G^{\prime})-\bigcup_{v_{i}v_{j}\in E(G)}\{x_{ij}^{r}\}\) and after each move in \(\mathcal{S}_{ij}\) the token \(t_{ij}\) always \(r\)-dominates \(x_{ij}^{r}\), it follows that \(\mathcal{S}_{ij}\) is indeed a \(\mathsf{TS}\)-sequence in \(G^{\prime}\). Thus, \(\mathcal{S}^{\prime}\) is our desired \(\mathsf{TS}\)-sequence. (\(\Leftarrow\)) Let \(\mathcal{S}^{\prime}\) be a \(\mathsf{TS}\)-sequence in \(G^{\prime}\) between \(D_{s}\) and \(D_{t}\). We describe how to construct a \(\mathsf{TS}\)-sequence \(\mathcal{S}\) in \(G\) between \(C_{s}\) and \(C_{t}\). Initially, \(\mathcal{S}=\emptyset\). 
For each move \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TS}}v\) (\(u\neq v\)) in \(\mathcal{S}^{\prime}\), we consider the following cases: **Case 1: Either \(u\in V(P_{x})\) or \(v\in V(P_{x})\).**: Since \(\mathcal{S}^{\prime}\) starts with \(D_{s}=C_{s}+x\), Claim 1.3 ensures that no token can be placed on any vertex in \(P_{x}-x\) and that \(x\) always holds a token. As a result, this case does not happen. **Case 2: \(u=x_{ij}^{p}\) and \(v=x_{ij}^{p+1}\) for \(0\leq p\leq 2r-1\).**: Recall that \(x_{ij}^{0}=v_{i}\) and \(x_{ij}^{2r}=v_{j}\). Additionally, by the construction of \(\mathcal{S}\), while the token is on \(x_{ij}^{p}\) in \(G^{\prime}\), the corresponding token in the simulated VC of \(G\) is still on \(v_{i}\). If \(p=2r-1\), append the move \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TS}}v_{j}\) to \(\mathcal{S}\). Otherwise, do nothing. To see that \(\mathcal{S}\) is indeed a \(\mathsf{T}\mathsf{S}\)-sequence in \(G\), it suffices to show that if \(C\) is the minimum VC obtained right before the move \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{T}\mathsf{S}}v_{j}\) then \(C^{\prime}=C-v_{i}+v_{j}\) is also a minimum VC of \(G\). Suppose to the contrary that \(C^{\prime}\) is not a vertex cover of \(G\). It follows that there exists \(k\in\{1,\ldots,n\}\) such that \(v_{i}v_{k}\in E(G)\), \(v_{k}\neq v_{j}\), and \(v_{k}\notin C\). Intuitively, the edge \(v_{i}v_{k}\) is not covered by any vertex in \(C^{\prime}\). On the other hand, let \(D\) be the DrDS of \(G^{\prime}\) obtained right before the move \(x_{ij}^{2r-1}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{S}}x_{ij}^{2r}=v_{j}\). Since \(D^{\prime}=D-x_{ij}^{2r-1}+x_{ij}^{2r}\) is also a DrDS of \(G^{\prime}\), there must be some vertex in \(D^{\prime}\) that \(r\)-dominates \(x_{ik}^{r}\), which implies \(V(P_{ik})\cap D^{\prime}\neq\emptyset\). However, from the construction of \(\mathcal{S}\), it follows that \(v_{k}\in C\), which is a contradiction. Thus, \(C^{\prime}\) is a vertex cover of \(G\). Since \(|C^{\prime}|=|C|\), it is also minimum. **Claim 1.5**.: _Under \(\mathsf{T}\mathsf{J}\), \((G,C_{s},C_{t})\) is a yes-instance if and only if \((G^{\prime},D_{s},D_{t})\) is a yes-instance._ Proof.: (\(\Rightarrow\)) Suppose that \(\mathcal{S}\) is a \(\mathsf{T}\mathsf{J}\)-sequence in \(G\) between \(C_{s}\) and \(C_{t}\). It follows from Claim 1.2 that the sequence \(\mathcal{S}^{\prime}\) of token-jumps obtained from \(\mathcal{S}\) by replacing each move \(u\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}v\) in \(\mathcal{S}\) by \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}v\) is a \(\mathsf{T}\mathsf{J}\)-sequence between \(D_{s}=C_{s}+x\) and \(D_{t}=C_{t}+x\). (\(\Leftarrow\)) On the other hand, let \(\mathcal{S}^{\prime}\) be a \(\mathsf{T}\mathsf{J}\)-sequence in \(G^{\prime}\) between \(D_{s}\) and \(D_{t}\). We describe how to construct a \(\mathsf{T}\mathsf{J}\)-sequence \(\mathcal{S}\) in \(G\) between \(C_{s}\) and \(C_{t}\). Initially, \(\mathcal{S}=\emptyset\). For each move \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}v\) (\(u\neq v\)) in \(\mathcal{S}^{\prime}\), we consider the following cases: **Case 1: Either \(u\in V(P_{x})\) or \(v\in V(P_{x})\).**: As before, it follows from Claim 1.3 that this case does not happen. **Case 2: \(u=x_{ij}^{p}\) and \(v=x_{ij}^{q}\) for \(0\leq p,q\leq 2r\).**: Recall that \(x_{ij}^{0}=v_{i}\) and \(x_{ij}^{2r}=v_{j}\). If \(q=2r\), append the move \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}v_{j}\) to \(\mathcal{S}\). 
Otherwise, do nothing. **Case 3: \(u=x_{ij}^{p}\) and \(v=x_{k\ell}^{q}\) for two edges \(v_{i}v_{j}\) and \(v_{k}v_{\ell}\) in \(G\) and \(0\leq p,q\leq 2r\).**: Note that if (\(\star\)) \(v_{i}v_{j}\) and \(v_{k}v_{\ell}\) are adjacent edges in \(G\) and either \(u\) or \(v\) is their common endpoint, we are back to **Case 2**. Thus, let's assume (\(\star\)) does not happen and consider the _first_ move of this type. From Claim 1.3, it must happen that before the move \(x_{ij}^{p}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}x_{k\ell}^{q}\), some move in \(\mathcal{S}^{\prime}\) places a token on \(u=x_{ij}^{p}\) and moreover such a token must come from either \(v_{i}\) or \(v_{j}\). Additionally, again by Claim 1.3, if the token comes from \(v_{i}\) then \(v_{j}\) contains no other token, and vice versa. Let \(D\) be the DrDS obtained right before the move \(x_{ij}^{p}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}x_{k\ell}^{q}\). Now, since \(x_{ij}^{p}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}x_{k\ell}^{q}\) is the first move of its type, it follows that the path \(P_{ij}\) contains exactly one token from \(D\), which is placed on \(x_{ij}^{p}\). However, this means we cannot perform \(x_{ij}^{p}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}x_{k\ell}^{q}\), otherwise \(x_{ij}^{r}\) would not be \(r\)-dominated by the resulting token-set. Thus, this case does not happen unless (\(\star\)) is satisfied. To see that \(\mathcal{S}\) is indeed a \(\mathsf{T}\mathsf{J}\)-sequence in \(G\), it suffices to show that if \(C\) is the minimum VC obtained right before the move \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}v_{j}\) then \(C^{\prime}=C-v_{i}+v_{j}\) is also a minimum VC of \(G\). Suppose to the contrary that \(C^{\prime}\) is not a vertex cover of \(G\). It follows that there exists \(k\in\{1,\ldots,n\}\) such that \(v_{i}v_{k}\in E(G)\), \(v_{k}\neq v_{j}\), and \(v_{k}\notin C\). Intuitively, the edge \(v_{i}v_{k}\) is not covered by any vertex in \(C^{\prime}\). On the other hand, let \(D\) be the DrDS of \(G^{\prime}\) obtained right before the move \(x_{ij}^{p}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{T}\mathsf{J}}x_{ij}^{2r}=v_{j}\). Since \(D^{\prime}=D-x_{ij}^{p}+x_{ij}^{2r}\) is also a DrDS of \(G^{\prime}\), there must be some vertex in \(D^{\prime}\) that \(r\)-dominates \(x_{ik}^{r}\), which implies \(V(P_{ik})\cap D^{\prime}\neq\emptyset\). However, from the construction of \(\mathcal{S}\), it follows that \(v_{k}\in C\), which is a contradiction. Thus, \(C^{\prime}\) is a vertex cover of \(G\). Since \(|C^{\prime}|=|C|\), it is also minimum. Our proof is complete. ### Planar Graphs **Theorem 2**.: DrDSR _under \(\mathsf{R}\in\{\mathsf{TS},\mathsf{T}\mathsf{J}\}\) on planar graphs of maximum degree six and bounded bandwidth is \(\mathsf{PSPACE}\)-complete for any \(r\geq 2\)._ Proof.: We give a polynomial-time reduction from Minimum Vertex Cover Reconfiguration (M-VCR) on planar graphs of maximum degree three and bounded bandwidth, which was (implicitly) shown to be PSPACE-complete in the works of Hearn and Demaine [19] and van der Zanden [16] (see Footnote 1). Our reduction extends the classic reduction from Vertex Cover to Dominating Set [23]. This reduction has also been modified to show the hardness of the problem for \(r=1\) (i.e., Dominating Set Reconfiguration) by Haddadan et al. [14] under TAR and later by Bonamy et al. [4] under TS. 
Let \((G,C_{s},C_{t})\) be an instance of M-VCR under R where \(C_{s},C_{t}\) are two minimum VCs of a planar graph \(G\) of maximum degree three and bounded bandwidth. We will construct an instance \((G^{\prime},D_{s},D_{t})\) of DrDSR under R where \(D_{s}\) and \(D_{t}\) are two DrDSs of a planar graph \(G^{\prime}\) of maximum degree six and bounded bandwidth. Footnote 1: Hearn and Demaine [19] proved that Independent Set Reconfiguration is PSPACE-complete on planar graphs of maximum degree three, and later van der Zanden extended their results to those graphs with the additional “bounded bandwidth” restriction. Since their proofs only involve _maximum independent sets_, the same results hold for _minimum vertex covers_. Suppose that \(V(G)=\{v_{1},\ldots,v_{n}\}\). We construct \(G^{\prime}\) from \(G\) as follows. For each edge \(v_{i}v_{j}\in E(G)\), add a new path \(P_{ij}=x_{ij}^{0}x_{ij}^{1}\ldots x_{ij}^{2r}\) of length \(2r\) (\(1\leq i,j\leq n\)) with \(x_{ij}^{0}=v_{i}\) and \(x_{ij}^{2r}=v_{j}\). Observe that \(x_{ij}^{p}=x_{ji}^{2r-p}\) for \(0\leq p\leq 2r\). Intuitively, \(G^{\prime}\) is obtained from \(G\) by replacing each edge of \(G\) by a cycle \(\mathcal{C}_{ij}\) of length \(2r+1\) formed by the path \(P_{ij}\) and the edge \(v_{i}v_{j}\). We define \(D_{s}=C_{s}\) and \(D_{t}=C_{t}\). Clearly, this construction can be done in polynomial time. (See Figure 2.)

Figure 2: An example of constructing \(G^{\prime}\) from a planar, subcubic, and bounded bandwidth graph \(G\) in the proof of Theorem 2. Vertices in \(V(G^{\prime})-V(G)\) are marked with the gray color. Each dotted path is of length \(r-1\).

It follows directly from the construction that \(G^{\prime}\) is planar and has maximum degree six and both \(D_{s}\) and \(D_{t}\) are DrDSs of \(G^{\prime}\) (and so is any minimum VC of \(G\)). Moreover, \(D_{s}\) and \(D_{t}\), and in general all minimum VCs of \(G\), are also minimum DrDSs of \(G^{\prime}\). To see this, it suffices to prove that \(\tau(G)=\gamma_{r}(G^{\prime})\) where \(\tau(G)\) and \(\gamma_{r}(G^{\prime})\) are respectively the size of a minimum VC of \(G\) and a minimum DrDS of \(G^{\prime}\). Since any minimum VC of \(G\) is also a DrDS of \(G^{\prime}\), we have \(\tau(G)\geq\gamma_{r}(G^{\prime})\). On the other hand, from the construction of \(G^{\prime}\), observe that for any pair \(i,j\in\{1,\ldots,n\}\) with \(v_{i}v_{j}\in E(G)\), the vertex \(x_{ij}^{r}\) (whose distance from both \(v_{i}\) and \(v_{j}\) is exactly \(r\)) can only be \(r\)-dominated by some vertex in \(V(\mathcal{C}_{ij})\), which implies that one needs at least \(\tau(G)\) tokens to \(r\)-dominate \(V(G^{\prime})\). Therefore, \(\gamma_{r}(G^{\prime})\geq\tau(G)\). Since the number of edges of \(G^{\prime}\) is exactly \((2r+1)\) times the number of edges of \(G\), the bandwidth of \(G^{\prime}\) increases (compared to that of \(G\)) only by a constant multiplicative factor, which implies that \(G^{\prime}\) is a bounded bandwidth graph. We are now ready to show the correctness of our reduction. **Claim 2.1**.: _Under_ TS_, \((G,C_{s},C_{t})\) is a yes-instance if and only if \((G^{\prime},D_{s},D_{t})\) is a yes-instance._ Proof.: (\(\Rightarrow\)) Let \(\mathcal{S}\) be a TS-sequence in \(G\) between \(C_{s}\) and \(C_{t}\). Since any minimum VC of \(G\) is also a minimum DrDS of \(G^{\prime}\), the sequence \(\mathcal{S}^{\prime}\) obtained by replacing each move \(u\xrightarrow{G}_{\textsf{TS}}v\) in \(\mathcal{S}\) by \(u\xrightarrow{G^{\prime}}_{\textsf{TS}}v\) is also a TS-sequence in \(G^{\prime}\) between \(D_{s}=C_{s}\) and \(D_{t}=C_{t}\). 
Vertices in \(V(G^{\prime})-V(G)\) are marked with the gray color. Each dotted path is of length \(r-1\).

(\(\Leftarrow\)) Let \(\mathcal{S}^{\prime}\) be a \(\mathsf{TS}\)-sequence in \(G^{\prime}\) between \(D_{s}\) and \(D_{t}\). We construct a sequence of token-slides \(\mathcal{S}\) in \(G\) between \(C_{s}=D_{s}\) and \(C_{t}=D_{t}\) as follows. Initially, \(\mathcal{S}=\emptyset\). For each move \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TS}}v\) in \(\mathcal{S}^{\prime}\), we consider the following cases.

**Case 1: \(u\in V(G)\) and \(v\in V(G)\).**: It must happen that \(u=v_{i}\) and \(v=v_{j}\) for some \(i,j\in\{1,\ldots,n\}\) such that \(v_{i}v_{j}\in E(G)\). We append \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TS}}v_{j}\) to \(\mathcal{S}\).

**Case 2: \(u\in V(G)\) and \(v\in V(G^{\prime})-V(G)\).**: Do nothing.

**Case 3: \(u\in V(G^{\prime})-V(G)\) and \(v\in V(G)\).**: It must happen that \(u=x_{ij}^{2r-1}\) and \(v=x_{ij}^{2r}=v_{j}\) for some \(i,j\in\{1,\ldots,n\}\) such that \(v_{i}v_{j}\in E(G)\). We append \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TS}}v_{j}\) to \(\mathcal{S}\).

**Case 4: \(u\in V(G^{\prime})-V(G)\) and \(v\in V(G^{\prime})-V(G)\).**: Do nothing.

To see that \(\mathcal{S}\) is indeed a \(\mathsf{TS}\)-sequence in \(G\), it suffices to show that if \(C\) is the minimum VC obtained right before the move \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TS}}v_{j}\) in \(G\) then \(C^{\prime}=C-v_{i}+v_{j}\) is also a minimum VC of \(G\). If **Case 1** happens, this is trivial. Thus, it remains to consider the case where **Case 3** happens. In this case, suppose to the contrary that \(C^{\prime}\) is not a VC of \(G\). It follows that there exists \(k\in\{1,\ldots,n\}\) such that \(v_{i}v_{k}\in E(G)\), \(v_{k}\neq v_{j}\), and \(v_{k}\notin C\). Intuitively, the edge \(v_{i}v_{k}\) is not covered by any vertex in \(C^{\prime}\). On the other hand, let \(D\) be the DrDS of \(G^{\prime}\) obtained right before the move \(x_{ij}^{2r-1}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TS}}x_{ij}^{2r}=v_{j}\). Since \(D^{\prime}=D-x_{ij}^{2r-1}+x_{ij}^{2r}\) is also a DrDS of \(G^{\prime}\), there must be some vertex in \(D^{\prime}\) that \(r\)-dominates \(x_{ik}^{r}\), which implies \(V(P_{ik})\cap D^{\prime}\neq\emptyset\). However, from the construction of \(\mathcal{S}\), it follows that \(v_{k}\in C\), which is a contradiction. Thus, \(C^{\prime}\) is a vertex cover of \(G\). Since \(|C^{\prime}|=|C|\), it is also minimum.

**Claim 2.2**.: _Under \(\mathsf{TJ}\), \((G,C_{s},C_{t})\) is a yes-instance if and only if \((G^{\prime},D_{s},D_{t})\) is a yes-instance._

Proof.: (\(\Rightarrow\)) Let \(\mathcal{S}\) be a \(\mathsf{TJ}\)-sequence in \(G\) between \(C_{s}\) and \(C_{t}\). Since any minimum VC of \(G\) is also a minimum DrDS of \(G^{\prime}\), the sequence \(\mathcal{S}^{\prime}\) obtained by replacing each move \(u\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TJ}}v\) in \(\mathcal{S}\) by \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TJ}}v\) is also a \(\mathsf{TJ}\)-sequence in \(G^{\prime}\) between \(D_{s}=C_{s}\) and \(D_{t}=C_{t}\).

(\(\Leftarrow\)) Let \(\mathcal{S}^{\prime}\) be a \(\mathsf{TJ}\)-sequence in \(G^{\prime}\) between \(D_{s}\) and \(D_{t}\). We construct a sequence of token-jumps \(\mathcal{S}\) in \(G\) between \(C_{s}=D_{s}\) and \(C_{t}=D_{t}\) as follows. Initially, \(\mathcal{S}=\emptyset\).
For each move \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TJ}}v\) in \(\mathcal{S}^{\prime}\), we consider the following cases.

**Case 1: \(u\in V(G)\) and \(v\in V(G)\).**: It must happen that \(u=v_{i}\) and \(v=v_{j}\) for some \(i,j\in\{1,\ldots,n\}\). We append \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TJ}}v_{j}\) to \(\mathcal{S}\).

**Case 2: \(u\in V(G)\) and \(v\in V(G^{\prime})-V(G)\).**: Do nothing.

**Case 3: \(u\in V(G^{\prime})-V(G)\) and \(v\in V(G)\).**: From the construction of \(G^{\prime}\), for each pair \(i,j\in\{1,\ldots,n\}\) such that \(v_{i}v_{j}\in E(G)\), the vertex \(x_{ij}^{r}\) must be \(r\)-dominated by at least one vertex of \(\mathcal{C}_{ij}\). Additionally, note that any token-set resulting from a move in \(\mathcal{S}^{\prime}\) must be a minimum DrDS of \(G^{\prime}\). Thus, we must have \(u=x_{ij}^{p}\) and \(v=x_{ij}^{2r}=v_{j}\) for some \(i,j\in\{1,\ldots,n\}\) such that \(v_{i}v_{j}\in E(G)\) and \(1\leq p\leq 2r-1\). Now, we append \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TJ}}v_{j}\) to \(\mathcal{S}\).

**Case 4: \(u\in V(G^{\prime})-V(G)\) and \(v\in V(G^{\prime})-V(G)\).**: Do nothing.

To see that \(\mathcal{S}\) is indeed a \(\mathsf{TJ}\)-sequence in \(G\), it suffices to show that if \(C\) is the minimum VC obtained right before the move \(v_{i}\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{TJ}}v_{j}\) in \(G\) then \(C^{\prime}=C-v_{i}+v_{j}\) is also a minimum VC of \(G\). If **Case 1** happens, this is trivial. Thus, it remains to consider the case where **Case 3** happens. In this case, suppose to the contrary that \(C^{\prime}\) is not a VC of \(G\). It follows that there exists \(k\in\{1,\ldots,n\}\) such that \(v_{i}v_{k}\in E(G)\), \(v_{k}\neq v_{j}\), and \(v_{k}\notin C\). Intuitively, the edge \(v_{i}v_{k}\) is not covered by any vertex in \(C^{\prime}\). On the other hand, let \(D\) be the DrDS of \(G^{\prime}\) obtained right before the move \(x_{ij}^{p}\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{TJ}}x_{ij}^{2r}=v_{j}\), for \(1\leq p\leq 2r-1\). Since \(D^{\prime}=D-x_{ij}^{p}+x_{ij}^{2r}\) is also a DrDS of \(G^{\prime}\), there must be some vertex in \(D^{\prime}\) that \(r\)-dominates \(x_{ik}^{r}\), which implies \(V(P_{ik})\cap D^{\prime}\neq\emptyset\). However, from the construction of \(\mathcal{S}\), it follows that \(v_{k}\in C\), which is a contradiction. Thus, \(C^{\prime}\) is a vertex cover of \(G\). Since \(|C^{\prime}|=|C|\), it is also minimum. Our proof is complete.

### Chordal Graphs

**Theorem 3**.: D\(r\)DSR _under \(\mathsf{R}\in\{\mathsf{TS},\mathsf{TJ}\}\) on chordal graphs is_ PSPACE_-complete for any \(r\geq 2\)._

Proof.: We give a polynomial-time reduction from Minimum Vertex Cover Reconfiguration (M-VCR) on general graphs, which was shown to be PSPACE-complete by Ito et al. [18]. Let \((G,C_{s},C_{t})\) be an instance of M-VCR under \(\mathsf{R}\) where \(C_{s},C_{t}\) are two minimum VCs of a graph \(G\). We will construct an instance \((G^{\prime},D_{s},D_{t})\) of DrDSR under \(\mathsf{R}\) where \(D_{s}\) and \(D_{t}\) are two DrDSs of a chordal graph \(G^{\prime}\).

Suppose that \(V(G)=\{v_{1},\ldots,v_{n}\}\). We construct \(G^{\prime}\) from \(G\) as follows.

* Form a clique in \(G^{\prime}\) on all vertices \(v_{1},\ldots,v_{n}\) of \(G\).
* For each edge \(v_{i}v_{j}\in E(G)\), add a corresponding new vertex \(x_{ij}\) to \(G^{\prime}\) and join it to both \(v_{i}\) and \(v_{j}\), where \(i,j\in\{1,\ldots,n\}\). Observe that \(x_{ij}=x_{ji}\).
Furthermore, attach to each \(x_{ij}\) a new path \(P_{ij}\) of length exactly \(r-1\).
* For each vertex \(v_{i}\in V(G)\), add a corresponding new vertex \(v^{\prime}_{i}\) to \(G^{\prime}\) and join it to any \(v_{j}\) satisfying \(\mathsf{dist}_{G}(v_{i},v_{j})\leq 1\). Furthermore, attach to each \(v^{\prime}_{i}\) a new path \(Q_{i}\) of length exactly \(r-1\).

We define \(D_{s}=C_{s}\) and \(D_{t}=C_{t}\). Clearly, this construction can be done in polynomial time. (See Figure 3.) It follows from the construction that \(G^{\prime}\) is indeed a chordal graph. More precisely, if we define \(H=(K\uplus S,F)\) to be the split graph with \(K=\{v_{1},\ldots,v_{n}\}\) forming a clique and \(S=\bigcup_{\{i,j|v_{i}v_{j}\in E(G)\}}\{x_{ij}\}\cup\bigcup_{i=1}^{n}\{v^{\prime}_{i}\}\) forming an independent set, then \(G^{\prime}\) is obtained from \(H\) by attaching paths to each member of \(S\), which clearly results in a chordal graph. Additionally, one can verify that any minimum vertex cover of \(G\) is also a minimum dominating set of \(H\) and therefore a minimum D\(r\)DS of \(G^{\prime}\). (Recall that in a connected graph, any VC is also a DS.)

**Claim 3.1**.: _Under \(\mathsf{R}\in\{\mathsf{TS},\mathsf{TJ}\}\), \((G,C_{s},C_{t})\) is a yes-instance if and only if \((G^{\prime},D_{s},D_{t})\) is a yes-instance._

Proof.: (\(\Rightarrow\)) Let \(\mathcal{S}\) be an \(\mathsf{R}\)-sequence in \(G\) between \(C_{s}\) and \(C_{t}\). Since any minimum VC of \(G\) is also a minimum DrDS of \(G^{\prime}\), the sequence \(\mathcal{S}^{\prime}\) obtained by replacing each move \(u\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{R}}v\) in \(\mathcal{S}\) by \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{R}}v\) is also an \(\mathsf{R}\)-sequence in \(G^{\prime}\) between \(D_{s}=C_{s}\) and \(D_{t}=C_{t}\).

(\(\Leftarrow\)) Let \(\mathcal{S}^{\prime}\) be an \(\mathsf{R}\)-sequence in \(G^{\prime}\) between \(D_{s}\) and \(D_{t}\). From the construction of \(G^{\prime}\), observe that no token can be moved to a vertex in \(V(G^{\prime})-V(G)\); otherwise some degree-\(1\) endpoint of either a \(P_{ij}\) or a \(Q_{\ell}\) would not be \(r\)-dominated by the resulting token-set. Therefore, any move \(u\stackrel{{ G^{\prime}}}{{\longrightarrow}}_{\mathsf{R}}v\) in \(\mathcal{S}^{\prime}\) satisfies that both \(u\) and \(v\) are in \(V(G)\), and thus can be replaced by the move \(u\stackrel{{ G}}{{\longrightarrow}}_{\mathsf{R}}v\) to construct \(\mathcal{S}\)--our desired \(\mathsf{R}\)-sequence between \(C_{s}=D_{s}\) and \(C_{t}=D_{t}\) in \(G\).

Figure 3: An example of constructing \(G^{\prime}\) from a graph \(G\) in the proof of Theorem 3. Vertices in \(V(G^{\prime})-V(G)\) are marked with the gray color. Vertices in the yellow box form a clique. Each red path is of length exactly \(r-1\).

## Polynomial-Time Algorithms

### Graphs and Their Powers

An extremely useful concept for studying distance-\(r\) dominating sets is _graph power_. For a graph \(G\) and an integer \(s\geq 1\), the _\(s^{\text{th}}\) power of \(G\)_ is the graph \(G^{s}\) whose vertices are \(V(G)\) and two vertices \(u,v\) are adjacent in \(G^{s}\) if \(\mathsf{dist}_{G}(u,v)\leq s\). Observe that \(D\) is a D\(r\)DS of \(G\) if and only if \(D\) is a DS of \(G^{r}\). The following proposition is straightforward.

**Proposition 4**.: _Let \(\mathcal{G}\) and \(\mathcal{H}\) be two graph classes and suppose that for every \(G\in\mathcal{G}\) we have \(G^{r}\in\mathcal{H}\) for some fixed integer \(r\geq 1\).
If DSR under \(\mathsf{TJ}\) on \(\mathcal{H}\) can be solved in polynomial time, then so can D\(r\)DSR under \(\mathsf{TJ}\) on \(\mathcal{G}\)._

Proof.: Since \(D\) is a D\(r\)DS of \(G\) if and only if \(D\) is a DS of \(G^{r}\), any \(\mathsf{TJ}\)-sequence in \(G\) between two D\(r\)DSs can be converted to a \(\mathsf{TJ}\)-sequence in \(G^{r}\) between two corresponding DSs and vice versa.

Recall that the power of any interval graph is also an interval graph [20, 21] and DSR under \(\mathsf{TJ}\) on interval graphs is in \(\mathtt{P}\) [14]. Along with Proposition 4, we immediately obtain the following corollary.

**Corollary 5**.: D\(r\)DSR _under \(\mathsf{TJ}\) on interval graphs is in \(\mathtt{P}\) for any \(r\geq 1\)._

### Graphs With Bounded Diameter Components

**Proposition 6**.: _Let \(G\) be any graph such that there is some constant \(c>0\) satisfying \(\mathsf{diam}(C_{G})\leq c\) for any component \(C_{G}\) of \(G\). Then, D\(r\)DSR on \(G\) under \(\mathsf{R}\in\{\mathsf{TS},\mathsf{TJ}\}\) is in \(\mathtt{P}\) for every \(r\geq c\)._

Proof.: When \(r\geq c\), any size-\(1\) vertex subset of \(G\) is a D\(r\)DS. In this case, observe that any token-jump (and therefore any token-slide) from one vertex to any unoccupied vertex always results in a new D\(r\)DS. The problem becomes trivial: under \(\mathsf{TJ}\), the answer is always "yes"; under \(\mathsf{TS}\), the answer depends on the number of tokens in each component.

Since any connected cograph has diameter at most \(2\), the following corollary is straightforward.

**Corollary 7**.: D\(r\)DSR _under \(\mathsf{R}\in\{\mathsf{TS},\mathsf{TJ}\}\) on cographs is in \(\mathtt{P}\) for any \(r\geq 2\)._

### Split Graphs

In this section, we assume that for any split graph \(G\), the set \(V(G)\) is partitioned into two subsets \(K=\{v_{1},\ldots,v_{p}\}\) and \(S=\{v_{p+1},v_{p+2},\ldots,v_{p+q}\}\) which respectively induce a clique and an independent set of \(G\). For convenience, we write \(G=(K\uplus S,E)\).

**Lemma 8**.: _Let \(D\) be a D\(2\)DS of a split graph \(G=(K\uplus S,E)\)._

(a) _For every pair_ \(u\in D\cap K\) _and_ \(v\in K-D\)_, the set_ \(D-u+v\) _is a D_\(2\)_DS of_ \(G\)_._
(b) _For every pair_ \(u\in D\cap S\) _and_ \(v\in K-D\)_, the set_ \(D-u+v\) _is a D_\(2\)_DS of_ \(G\)_._
(c) _For every pair_ \(u\in D\cap K\) _and_ \(v\in S-D\)_, the set_ \(D-u+v\) _is a D_\(2\)_DS of_ \(G\) _if_ \((D\cap K)-u\neq\emptyset\)_._

Proof.: Since \(v\in K\) is adjacent to every other vertex of \(K\), we have \(\mathsf{dist}_{G}(v,w)\leq 2\) for any \(w\in V(G)\). Consequently, in both (a) and (b), \(\{v\}\) is a D\(2\)DS of \(G\), and therefore so is \(D-u+v\supseteq\{v\}\). In (c), since \((D\cap K)-u\neq\emptyset\), there must be a vertex \(x\in D\cap K\) such that \(x\neq u\). Again, since \(\{x\}\) is a D\(2\)DS of \(G\), so is \(D-u+v\supseteq\{x\}\).

**Theorem 9**.: D\(r\)DSR _under \(\mathsf{TS}\) on split graphs is in \(\mathtt{P}\) for any \(r\geq 2\). In particular, when \(r=2\), for any pair of size-\(k\) D\(2\)DSs \(D_{s},D_{t}\) of a split graph \(G=(K\uplus S,E)\), there is a \(\mathsf{TS}\)-sequence in \(G\) between \(D_{s}\) and \(D_{t}\)._

Proof.: Proposition 6 settles the case \(r\geq 3\). It remains to consider the case \(r=2\). We claim that for any pair of size-\(k\) D\(2\)DSs \(D_{s},D_{t}\) of a split graph \(G=(K\uplus S,E)\), there is a \(\mathsf{TS}\)-sequence in \(G\) between \(D_{s}\) and \(D_{t}\). Suppose that \(p=|K|\geq 1\) and \(q=|S|\geq 1\).
We first show how to construct a size-\(k\) D\(2\)DS \(D^{\star}\) and then claim that for any size-\(k\) D\(2\)DS \(D\) of \(G\), there exists a \(\mathsf{TS}\)-sequence between \(D\) and \(D^{\star}\). Suppose that vertices of \(G\) are arranged as \(v_{1},v_{2},\ldots,v_{p},v_{p+1},v_{p+2},\ldots,v_{p+q}\) where \(K=\{v_{1},\ldots,v_{p}\}\) and \(S=\{v_{p+1},\ldots,v_{p+q}\}\). We take \(D^{\star}=\{v_{1},\ldots,v_{k}\}\).

We describe how to construct a TS-sequence between \(D\) and \(D^{\star}\). Our construction is based on the observations described in Lemma 8. Intuitively, in each iteration \(i\in\{1,\ldots,k\}\) of our algorithm, we will move a token in \(D\) to \(v_{i}\) and consider it as "settled". We note that once a vertex in \(D\) is "settled" in some iteration, it always remains "settled" after each subsequent iteration. For each \(i\in\{1,\ldots,k\}\),

* Assign \(D^{\prime}\gets D-\{v_{1},\ldots,v_{i-1}\}\). Note that if \(i=1\) then \(D^{\prime}=D\). Intuitively, \(D^{\prime}\) contains the "unsettled" vertices in \(D\) at that time.
* Let \(v^{i}\in D^{\prime}\) be such that \(\mathsf{dist}_{G}(v^{i},v_{i})=\min_{x\in D^{\prime}}\mathsf{dist}_{G}(x,v_{i})\).
* If \(v_{i}\in K\) and \(v^{i}\in K\), by Lemma 8, we can directly slide the token on \(v^{i}\) to \(v_{i}\) and update \(D\gets D-v^{i}+v_{i}\).
* If \(v_{i}\in K\) and \(v^{i}\in S\), let \(P_{i}\) be a shortest \(v^{i}v_{i}\)-path in \(G\). Observe that \(V(P_{i})\cap D=\{v^{i}\}\) and \(P_{i}\) is of length at most 2. If \(P_{i}\) is of length 1 (i.e., \(P_{i}=v^{i}v_{i}\)), again by Lemma 8, we can directly slide the token on \(v^{i}\) to \(v_{i}\) and update \(D\gets D-v^{i}+v_{i}\). If \(P_{i}\) is of length 2, \(v^{i}\) and \(v_{i}\) must have a common neighbor \(w\in K\). Since \(V(P_{i})\cap D=\{v^{i}\}\), we have \(w\notin D\). By Lemma 8, we can immediately slide the token on \(v^{i}\) to \(w\) and then from \(w\) to \(v_{i}\), and update \(D\leftarrow(D-v^{i}+w)-w+v_{i}\).
* If \(v_{i}\in S\), we must have \(i>p=|K|\) (i.e., all vertices in \(K\) are already filled with tokens) and therefore \(v^{i}\in S\) (since all vertices in \(K\) are already "settled"). Thus, a shortest \(v^{i}v_{i}\)-path \(P_{i}\) must be of length either 2 or 3 and all of its vertices except \(v_{i}\) contain tokens (i.e., they are in \(D\)). If \(P_{i}\) is of length 2, \(v^{i}\) and \(v_{i}\) must have a common neighbor \(w\in K\) and since \(i>p\) we also have \(w\in D\). Thus, by Lemma 8, we can directly slide the token on \(w\) to \(v_{i}\) and then from \(v^{i}\) to \(w\) and update \(D\leftarrow(D-w+v_{i})-v^{i}+w\). If \(P_{i}\) is of length 3, let \(P_{i}=v^{i}xyv_{i}\); we must have \(\{x,y\}\subseteq K\) and since \(i>p\) we also have \(\{x,y\}\subseteq D\). Again, by Lemma 8, we can directly slide the token on \(y\) to \(v_{i}\), then the token on \(x\) to \(y\), and then the token on \(v^{i}\) to \(x\), and update \(D\leftarrow((D-y+v_{i})-x+y)-v^{i}+x\).

Since any TS-sequence in \(G\) is also a TJ-sequence, a direct consequence of Theorem 9 and Proposition 6 is as follows.

**Corollary 10**.: D\(r\)DSR _under_ TJ _on split graphs is in_ P _for any \(r\geq 2\)._

We now consider _shortest_ reconfiguration sequences in split graphs when \(r=2\). Observe that each R-sequence (\(\mathsf{R}\in\{\mathsf{TS},\mathsf{TJ}\}\)) between two DrDSs \(D_{s},D_{t}\) induces a bijection \(f\) between them: the token on \(u\in D_{s}\) must finally be placed on \(f(u)\in D_{t}\) and vice versa.
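To make the bijection concrete, the minimum total slide distance over all such bijections (the quantity \(M^{\star}_{\mathsf{TS}}\) in the theorem below) can be computed as a minimum-cost assignment on the pairwise distance matrix. The following is a minimal sketch of ours, not part of the original text; it assumes networkx and scipy are available, and the function name min_total_slide_distance is our own.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_total_slide_distance(G, D_s, D_t):
    """Minimum of sum_i dist_G(s_i, f(s_i)) over all bijections f
    from D_s to D_t, solved as a minimum-cost assignment problem."""
    D_s, D_t = list(D_s), list(D_t)
    # cost[i][j] = shortest-path distance from the i-th source token
    # position to the j-th target token position
    cost = np.array([[nx.shortest_path_length(G, s, t) for t in D_t]
                     for s in D_s])
    rows, cols = linear_sum_assignment(cost)
    return int(cost[rows, cols].sum())
```

By part (a) of the theorem below, this value lower-bounds \(\mathsf{OPT}_{\mathsf{TS}}\); part (b) shows the bound can be off by exactly one.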
**Theorem 11**.: _Suppose that \(D_{s}=\{s_{1},\ldots,s_{k}\}\) and \(D_{t}=\{t_{1},\ldots,t_{k}\}\) are two size-\(k\) D2DSs of a split graph \(G=(K\uplus S,E)\). Let \(M^{\star}_{\mathsf{TS}}=\min_{f}\sum_{i=1}^{k}\mathsf{dist}_{G}(s_{i},f(s_{i}))\) where \(f\) is a bijection between vertices of \(D_{s}\) and \(D_{t}\). Then,_

(a) \(\mathsf{OPT}_{\mathsf{TS}}(G,D_{s},D_{t})\geq M^{\star}_{\mathsf{TS}}\)_._
(b) _There exists a graph_ \(G\) _and two size-_\(k\) _D2DSs_ \(D_{s},D_{t}\) _of_ \(G\) _such that_ \(\mathsf{OPT}_{\mathsf{TS}}(G,D_{s},D_{t})=M^{\star}_{\mathsf{TS}}+1\)_._

Proof.:

(a) In order to slide a token from \(s_{i}\in D_{s}\) to \(f(s_{i})\in D_{t}\) for some \(i\in\{1,\ldots,k\}\), one cannot use fewer than \(\mathsf{dist}_{G}(s_{i},f(s_{i}))\) token-slides.

(b) We construct a split graph \(G=(K\uplus S,E)\) as follows. (See Figure 4.)

* \(K\) contains 3 vertices labelled \(s_{1},a,b\). Vertices of \(K\) form a clique in \(G\).
* \(S\) contains \(2k+1\) vertices labelled \(s_{2},\ldots,s_{k},t_{1},\ldots,t_{k},c,d\). Vertices of \(S\) form an independent set in \(G\).
* Join \(s_{1}\) to \(s_{2},\ldots,s_{k},t_{1},\ldots,t_{k}\). Join \(a\) to \(c\) and \(t_{1}\). Join \(b\) to \(t_{2},\ldots,t_{k}\) and \(d\).

Let \(D_{s}=\{s_{1},\ldots,s_{k}\}\) and \(D_{t}=\{t_{1},\ldots,t_{k}\}\). One can readily verify that both \(D_{s}\) and \(D_{t}\) are D2DSs of \(G\). Observe that \(M^{\star}_{\mathsf{TS}}=\sum_{i=1}^{k}\mathsf{dist}_{G}(s_{i},t_{i})=2(k-1)+1=2k-1\). Additionally, any TS-sequence \(\mathcal{S}\) in \(G\) between \(D_{s}\) and \(D_{t}\) must begin with sliding the token on \(s_{1}\) to one of its unoccupied neighbors. A TS-sequence of length exactly \(M^{\star}_{\mathsf{TS}}\) must slide \(s_{1}\) to one of \(t_{1},\ldots,t_{k}\), but that is not possible; otherwise either \(c\) (if sliding to \(t_{2},\ldots,t_{k}\)) or \(d\) (if sliding to \(t_{1}\)) would not be 2-dominated by the resulting token-set. Thus, such a sequence does not exist. On the other hand, a TS-sequence of length exactly \(M^{\star}_{\mathsf{TS}}+1\) can be constructed: first sliding the token on \(s_{1}\) to \(a\), then sliding the token on \(s_{i}\) to \(t_{i}\) along the \(s_{i}t_{i}\)-path for \(2\leq i\leq k\), and finally sliding the token on \(a\) to \(t_{1}\). Since a token is always placed on \(a\in K\) (whose distance to any other vertex is at most 2) after the first token-slide and before the final one, the above sequence of token-slides is indeed a TS-sequence in \(G\).

**Theorem 12**.: _Suppose that \(D_{s}=\{s_{1},\ldots,s_{k}\}\) and \(D_{t}=\{t_{1},\ldots,t_{k}\}\) are two size-\(k\) D2DSs of a split graph \(G=(K\uplus S,E)\). Let \(M^{\star}_{\mathsf{TJ}}=\frac{|D_{s}\Delta D_{t}|}{2}\). Then,_

(a) \(\mathsf{OPT}_{\mathsf{TJ}}(G,D_{s},D_{t})\geq M^{\star}_{\mathsf{TJ}}\)_._
(b) _There exists a graph_ \(G\) _and two size-_\(k\) _D2DSs_ \(D_{s},D_{t}\) _of_ \(G\) _such that_ \(\mathsf{OPT}_{\mathsf{TJ}}(G,D_{s},D_{t})=M^{\star}_{\mathsf{TJ}}+1\)_._

Proof.:

(a) Trivial.

(b) We construct a split graph \(G=(K\uplus S,E)\) as follows.

* \(K\) contains \(2k\) vertices labelled \(v_{1},\ldots,v_{k},w_{1},\ldots,w_{k}\). Vertices in \(K\) form a clique in \(G\).
* \(S\) contains \(k^{2}+k\) vertices labelled \(v_{11},\ldots,v_{1k},v_{21},\ldots,v_{2k},\ldots,v_{k1},\ldots,v_{kk},w_{11},\ldots,w_{k1}\). Vertices in \(S\) form an independent set in \(G\).
* For each fixed \(i\in\{1,\ldots,k\}\): for each \(j\in\{1,\ldots,k\}\), join \(v_{i}\) to \(v_{ij}\) and join \(w_{i}\) to \(v_{ji}\); additionally, join \(w_{i}\) to \(w_{i1}\).

Let \(D_{s}=\bigcup_{i=1}^{k}\{v_{ii}\}\) and \(D_{t}=\bigcup_{i=1}^{k}\{w_{i1}\}\). To see that \(D_{s}\) is a D2DS of \(G\), note that each \(v_{ii}\) 2-dominates every vertex in \(K\cup\bigcup_{j=1}^{k}\{v_{ij}\}\cup\{w_{i1}\}\). To see that \(D_{t}\) is a D2DS of \(G\), note that each \(w_{i1}\) 2-dominates every vertex in \(K\cup\bigcup_{j=1}^{k}\{v_{ji}\}\). Moreover, any TJ-sequence of length exactly \(M^{\star}_{\mathsf{TJ}}\) must begin with a direct token-jump from some \(v_{ii}\) to some \(w_{j1}\) for some \(1\leq i,j\leq k\), but that is not possible; otherwise no vertices in \(\bigcup_{\ell=1}^{k}\{v_{i\ell}\}-v_{ij}\) would be 2-dominated by the resulting token-set. On the other hand, a TJ-sequence of length exactly \(M^{\star}_{\mathsf{TJ}}+1\) can be constructed: first jumping the token on \(v_{11}\) to \(v_{1}\), then for \(2\leq i\leq k\), directly jumping the token on \(v_{ii}\) to \(w_{i1}\), and finally jumping the token on \(v_{1}\) to \(w_{11}\). As before, since a token is always placed at \(v_{1}\in K\) after the first token-jump and before the final one, the above sequence of token-jumps is indeed a TJ-sequence in \(G\).

Figure 4: Construction of a split graph \(G=(K\uplus S,E)\) satisfying Theorem 11(b).

### Trees

**Theorem 13**.: D\(r\)DSR _under \(\mathsf{TJ}\) on trees is in \(\mathsf{P}\) for any \(r\geq 2\)._

To prove this theorem, we extend the idea of Haddadan et al. [14] for \(r=1\) under \(\mathsf{TAR}\) and the linear-time algorithm of Kundu and Majumder [15] for finding a minimum D\(r\)DS on trees. In particular, we employ a simpler implementation of Kundu and Majumder's algorithm presented by Abu-Affash, Carmi, and Krasin [2]. More precisely, based on the minimum D\(r\)DS \(D^{\star}\) obtained from the implementation of Abu-Affash, Carmi, and Krasin, we construct a partition \(\mathbb{P}(T)\) of \(T\) consisting of \(\gamma_{r}(T)\) vertex-disjoint subtrees, each of which contains exactly one vertex of \(D^{\star}\). (Haddadan et al. called such a set \(D^{\star}\) a _canonical_ dominating set.) For convenience, we denote by \(C_{x}\) the member of \(\mathbb{P}(T)\) whose intersection with \(D^{\star}\) is the vertex \(x\). We claim that \(\mathbb{P}(T)\) satisfies the following property: for any D\(r\)DS \(D\) of \(G\), each member of \(\mathbb{P}(T)\) contains at least one vertex in \(D\). Using this property, it is not hard to design a linear-time algorithm for constructing a \(\mathsf{TJ}\)-sequence between any pair of size-\(k\) D\(r\)DSs \(D_{s},D_{t}\) of \(G\). The key idea is that one can transform both \(D_{s}\) and \(D_{t}\) into some D\(r\)DS \(D\) that contains \(D^{\star}\). For instance, to transform \(D_{s}\) into \(D\), for each subtree \(C_{x}\in\mathbb{P}(T)\) for \(x\in D^{\star}\), we move any token in \(D_{s}\cap V(C_{x})\) to \(x\). If we handle each subtree \(C_{x}\) based on the order of subtrees added to \(\mathbb{P}(T)\) in our modified implementation, such a transformation will form a \(\mathsf{TJ}\)-sequence in \(T\).
After this procedure, we obtain a set of tokens \(D^{\prime}\) that contains \(D^{\star}\) and since \(D^{\star}\) is a minimum D\(r\)DS of \(G\), transforming \(D^{\prime}\) into \(D\) under \(\mathsf{TJ}\) can now be done easily: until there are no tokens to move, repeatedly take a token in \(D^{\prime}-D\), move it to some vertex in \(D-D^{\prime}\), and update \(D^{\prime}\).

We now define some notations and, for the sake of completeness, describe the algorithm of Abu-Affash, Carmi, and Krasin [2]. In a graph \(G\), for a vertex subset \(D\subseteq V(G)\) and a vertex \(u\in V(G)\), we define \(\delta_{D}(u)=\min_{v\in D}\mathsf{dist}_{G}(u,v)\) and call it the _distance_ between \(u\) and \(D\). Observe that a vertex \(u\) is \(r\)-dominated by \(D\) if \(\delta_{D}(u)\leq r\) and therefore \(D\) is a D\(r\)DS of \(G\) if for every \(u\in V(G)\) we have \(\delta_{D}(u)\leq r\). For an \(n\)-vertex tree \(T\), let \(T_{u}\) be the _rooted form_ of \(T\) when regarding the vertex \(u\in V(T)\) as the root. For each \(v\in V(T_{u})\), we denote by \(T_{v}\) the subtree of \(T_{u}\) rooted at \(v\). In other words, \(T_{v}\) is the subtree of \(T_{u}\) induced by \(v\) and its descendants. We also define \(h(T_{v})=\max_{w\in V(T_{v})}\mathsf{dist}_{T_{u}}(v,w)\) and call it the _height_ of \(T_{v}\). In other words, \(h(T_{v})\) is the largest distance from \(v\) to a vertex in \(T_{v}\). The set of children of \(v\) in \(T_{u}\) is denoted by \(\mathit{child}(v)\).

The algorithm is described in Algorithm 1. In short, in each iteration, it finds a subtree \(T_{v}\) of height exactly \(r\), adds \(v\) to \(D^{\star}\), and removes all the leaves of \(T_{u}\) that are in \(N_{T_{u}}^{r}[v]\). To implement the algorithm in \(O(n)\) time, a modified version of the depth-first search (DFS) algorithm was used in [2] (Function ModifiedDFS in Algorithm 1). The procedure ModifiedDFS visits the vertices of \(T_{u}\) starting from the root \(u\) and recursively visits each of its children, which means vertices in \(D^{\star}\) would be added in a "bottom-up" fashion. In each recursive call ModifiedDFS\((v)\), if \(h(T_{v})=r\) then \(v\) is added to \(D^{\star}\) and since all vertices in \(T_{v}\) are \(r\)-dominated by \(v\), we remove them from \(T_{u}\) and return \(\delta_{D^{\star}}(v)=0\). Otherwise (\(h(T_{v})\neq r\)), we call ModifiedDFS\((w)\) for each child \(w\) of \(v\), and we update \(\delta_{D^{\star}}(v)\) and \(h(T_{v})\) according to these calls. When these calls return, we have \(h(T_{v})\leq r\). Then we check whether \(\delta_{D^{\star}}(v)+h(T_{v})\leq r\). If so (which means the current \(D^{\star}\) \(r\)-dominates \(v\) and all its descendants in the original rooted tree \(T_{u}\)), we remove all the vertices of \(T_{v}\) from \(T_{u}\) and return \(\delta_{D^{\star}}(v)\). Otherwise \((\delta_{D^{\star}}(v)+h(T_{v})>r)\), we check again whether \(h(T_{v})=r\) (in case the descendants reduced the height of \(T_{v}\) to \(r\)). If so, we add \(v\) to \(D^{\star}\), remove all the vertices of \(T_{v}\) from \(T_{u}\), and return \(\delta_{D^{\star}}(v)=0\). Otherwise \((h(T_{v})<r)\), we return \(\infty\). Finally, when \(\delta_{D^{\star}}(u)=\infty\), we add \(u\) to \(D^{\star}\).

Figure 5: Construction of a split graph \(G=(K\uplus S,E)\) satisfying Theorem 12(b). Vertices in the yellow box are in \(K\).

```
Input: A tree T_u rooted at u.
Output: A minimum distance-r dominating set D* of T_u.
D* <- empty set
for each v in V(T_u) do
    compute h(T_v)
    delta_{D*}(v) <- infinity
end for
delta_{D*}(u) <- ModifiedDFS(u)
if delta_{D*}(u) = infinity then
    D* <- D* + u
end if
return D*

Function ModifiedDFS(v):
    if h(T_v) = r then
        D* <- D* + v
        T_u <- T_u - T_v
        h(T_v) <- -1
        delta_{D*}(v) <- 0
    else                                             ; here h(T_v) > r
        for each w in child(v) do
            if h(T_w) >= r then
                delta_{D*}(v) <- min{delta_{D*}(v), ModifiedDFS(w) + 1}
        end for
        h(T_v) <- max{h(T_w) + 1 : w in child(v)}    ; updating h(T_v)
        if h(T_v) + delta_{D*}(v) <= r then
            T_u <- T_u - T_v
            h(T_v) <- -1
        else if h(T_v) = r then
            D* <- D* + v
            T_u <- T_u - T_v
            h(T_v) <- -1
            delta_{D*}(v) <- 0
        else
            delta_{D*}(v) <- infinity
    return delta_{D*}(v)
```
**Algorithm 1** MinDrDSTree(\(T_{u}\))

To illustrate Algorithm 1, we consider the example from [2] for \(r=2\) with the tree \(T_{u}\) rooted at \(u=1\) as described in Figure 6. The first vertex added to \(D^{\star}\) is 7 in ModifiedDFS(7), since \(h(T_{7})=2\). In this call, we remove \(T_{7}\) from \(T_{u}\), update \(h(T_{7})=-1\) and return \(\delta_{D^{\star}}(7)=0\) to ModifiedDFS(4). In ModifiedDFS(4), we update \(\delta_{D^{\star}}(4)=1\) and, after traversing vertices 6 and 8, \(h(T_{4})=1\), and since \(h(T_{4})+\delta_{D^{\star}}(4)=2=r\) and 7 is the latest vertex added to \(D^{\star}\), we remove \(T_{4}\) from \(T_{u}\) and return \(\delta_{D^{\star}}(4)=1\) to ModifiedDFS(2). In ModifiedDFS(2), since \(h(T_{2})=3>r\), we call ModifiedDFS(5) which adds 5 to \(D^{\star}\), removes \(T_{5}\) from \(T_{u}\), and returns \(\delta_{D^{\star}}(5)=0\). Then, we update \(\delta_{D^{\star}}(2)=1\) and \(h(T_{2})=0\), and since \(h(T_{2})+\delta_{D^{\star}}(2)=1<r\) and 5 is the latest vertex added to \(D^{\star}\), we remove \(T_{2}\) from \(T_{u}\), and return \(\delta_{D^{\star}}(2)=1\) to ModifiedDFS(1). In ModifiedDFS(1), since \(\delta_{D^{\star}}(1)=2\) and, after traversing 3, \(h(T_{1})=1\), we return \(\delta_{D^{\star}}(1)=\infty\) to Algorithm 1, and therefore, we add 1 to \(D^{\star}\).

We now describe how to construct our desired partition \(\mathbb{P}(T_{u})\). Recall that \(\mathbb{P}(T_{u})\) is nothing but a collection of vertex-disjoint subtrees whose union is the original tree \(T_{u}\). Suppose that \(D^{\star}\) is the minimum D\(r\)DS of \(T_{u}\) obtained from Algorithm 1 and furthermore assume that vertices of \(D^{\star}\) are ordered by the time they were added to \(D^{\star}\). For each \(v\in D^{\star}\), we define \(C_{v}\) (the unique member of \(\mathbb{P}(T_{u})\) containing \(v\)) as \(T_{v}\) (the subtree of \(T_{u}\) rooted at \(v\)) and then delete \(T_{v}\) from \(T_{u}\). Figure 6 illustrates how to construct \(\mathbb{P}(T_{u})\) in the above example. From the construction, it is clear that each member of \(\mathbb{P}(T_{u})\) contains exactly one vertex from \(D^{\star}\). We say that two subtrees \(C_{x},C_{y}\) in \(\mathbb{P}(T_{u})\) are _adjacent_ if there exist \(v\in V(C_{x})\) and \(w\in V(C_{y})\) such that \(vw\in E(T_{u})\). If a subtree contains the root \(u\) then we call it the _root subtree_.
Otherwise, if a subtree has exactly one adjacent subtree then we call it a _leaf subtree_ and otherwise an _internal subtree_. We now claim that the constructed partition \(\mathbb{P}(T_{u})\) satisfies the following property.

**Lemma 14**.: _Let \(D\) be any DrDS of \(T_{u}\). Then, \(D\cap V(C_{v})\neq\emptyset\) holds for every \(v\in D^{\star}\)._

Proof.: We claim that for each \(v\in D^{\star}\), one can find a vertex \(v^{\prime}\in V(C_{v})\) such that \(N_{T}^{r}[v^{\prime}]\subseteq V(C_{v})\). For each \(v\in D^{\star}\), let \(D^{\star}_{v}\) be the set of all vertices added to \(D^{\star}\) before \(v\).

If \(C_{v}\) is a leaf subtree, we take any leaf in \(C_{v}\) of distance exactly \(r\) from \(v\) and regard it as \(v^{\prime}\). Clearly, \(v^{\prime}\) is also a leaf of \(T_{u}\) and is not \(r\)-dominated by any vertex outside \(C_{v}\), i.e., \(N_{T}^{r}[v^{\prime}]\subseteq V(C_{v})\).

If \(C_{v}\) is an internal subtree, we describe how to find our desired \(v^{\prime}\). From Algorithm 1, since \(v\) is the next vertex added to \(D^{\star}\) after those in \(D^{\star}_{v}\), it follows that there must be some vertex in \(V(C_{v})\) not \(r\)-dominated by any member of \(D^{\star}_{v}\); we take \(v^{\prime}\) to be the one having maximum distance from \(v\) among all those vertices. By definition, \(v^{\prime}\) is clearly not \(r\)-dominated by any vertex in a \(C_{w}\) where \(w\in D^{\star}_{v}\). Since \(C_{v}\) is an internal subtree, by Algorithm 1, the distance between \(v\) and \(v^{\prime}\) must be exactly \(r\) and therefore no vertex in a \(C_{w}\), where \(w\in D^{\star}-D^{\star}_{v}-v\), \(r\)-dominates \(v^{\prime}\). (Recall that by Algorithm 1, since \(v\) is added to \(D^{\star}\), the current subtree \(T_{v}\) must have height exactly \(r\).) Thus, \(N_{T}^{r}[v^{\prime}]\subseteq V(C_{v})\).

If \(C_{v}\) is the root subtree, again we can choose \(v^{\prime}\) using exactly the same strategy as in the case for internal subtrees. The main difference here is that, by Algorithm 1, the distance between \(v^{\prime}\) and \(v\) may not be exactly \(r\). However, since \(C_{v}\) contains the root \(u\), \(v\) is the last vertex added to \(D^{\star}\). (Intuitively, this means \(C_{v}\) has no "parent subtree" above it.) Therefore, in order to show that \(N_{T}^{r}[v^{\prime}]\subseteq V(C_{v})\), it suffices to show that no vertex in a \(C_{w}\), where \(w\in D^{\star}_{v}\), \(r\)-dominates \(v^{\prime}\). Indeed, this clearly holds by definition of \(v^{\prime}\).

We are now ready to prove the lemma. Suppose to the contrary that there exists \(v\in D^{\star}\) such that \(D\cap V(C_{v})=\emptyset\). Then, \(D\) does not \(r\)-dominate \(v^{\prime}\)--a vertex in \(C_{v}\) with \(N_{T}^{r}[v^{\prime}]\subseteq V(C_{v})\). This contradicts the assumption that \(D\) is a D\(r\)DS. Our proof is complete.

The following lemma is crucial in proving Theorem 13.

**Lemma 15**.: _Let \(D\) be an arbitrary DrDS of \(T_{u}\). Let \(D^{\prime}\) be any DrDS of \(T_{u}\) that contains \(D^{\star}\), i.e., \(D^{\star}\subseteq D^{\prime}\). Then, in \(O(n)\) time, one can construct a \(\mathsf{TJ}\)-sequence \(\mathcal{S}\) in \(T_{u}\) between \(D\) and \(D^{\prime}\)._

Figure 6: A tree \(T_{u}\) rooted at \(u=1\). For \(r=2\), Algorithm 1 returns \(D^{\star}=\{7,5,1\}\). A partition \(\mathbb{P}(T_{u})=\{C_{7},C_{5},C_{1}\}\) of \(T_{u}\) is also constructed.

Proof.: We construct \(\mathcal{S}\) as follows. Initially, \(\mathcal{S}=\emptyset\).
**Step 1:**: For each \(v\in D^{\star}\), let \(x\) be any vertex in \(D\cap V(C_{v})\). From Lemma 14, such a vertex \(x\) exists. We append \(x\stackrel{{ T_{u}}}{{\longrightarrow}}_{\mathsf{TJ}}v\) to \(\mathcal{S}\) and assign \(D\gets D-x+v\). (After this step, clearly \(D^{\star}\subseteq D\cap D^{\prime}\).)

**Step 2:**: Let \(x\in D-D^{\prime}\) and \(y\in D^{\prime}-D\). We append \(x\stackrel{{ T_{u}}}{{\longrightarrow}}_{\mathsf{TJ}}y\) to \(\mathcal{S}\) and assign \(D\gets D-x+y\). Repeat this step until \(D=D^{\prime}\).

For each \(v\in D^{\star}\), let \(D^{\star}_{v}\) be the set of all vertices added to \(D^{\star}\) before \(v\). Since any vertex \(r\)-dominated by \(x\) and not in \(C_{v}\) is \(r\)-dominated by either \(v\) or a member of \(D^{\star}_{v}\), any move performed in **Step 1** results in a new DrDS of \(T_{u}\). Note that after **Step 1**, \(D^{\star}\subseteq D\cap D^{\prime}\). Thus, any move performed in **Step 2** results in a new DrDS of \(T_{u}\). In short, \(\mathcal{S}\) is indeed a \(\mathsf{TJ}\)-sequence in \(T_{u}\). In the above construction, as we "touch" each vertex in \(D\) at most once, the running time is indeed \(O(n)\).

Using Lemma 15, it is not hard to prove Theorem 13. More precisely, let \((T,D_{s},D_{t})\) be an instance of D\(r\)DSR under \(\mathsf{TJ}\) where \(D_{s}\) and \(D_{t}\) are two D\(r\)DSs of a tree \(T\). By Lemma 15, one can immediately decide if \((T,D_{s},D_{t})\) is a yes-instance by comparing the sizes of \(D_{s}\) and \(D_{t}\): if they are of the same size then the answer is "yes" and otherwise it is "no". Moreover, in a yes-instance, Lemma 15 allows us to construct in linear time a \(\mathsf{TJ}\)-sequence (which is not necessarily a shortest one) between \(D_{s}\) and \(D_{t}\).

## Concluding Remarks

In this paper, we provide an initial picture of the computational complexity of D\(r\)DSR (\(r\geq 2\)) under \(\mathsf{TS}\) and \(\mathsf{TJ}\) on different graph classes. We extended several known results for \(r=1\) and provided a complexity dichotomy of D\(r\)DSR on split graphs: the problem is \(\mathsf{PSPACE}\)-complete for \(r=1\) but can be solved in polynomial time for \(r\geq 2\). The following questions remain open:

**Question 1:**: What is the complexity of D\(r\)DSR (\(r\geq 2\)) under \(\mathsf{TS}\) on trees?

**Question 2:**: What is the complexity of D\(r\)DSR (\(r\geq 2\)) under \(\mathsf{TS}\) on interval graphs?

## Acknowledgment

Niranka Banerjee is funded by JSPS KAKENHI Grant Number JP20H05967 and Duc A. Hoang is funded by University of Science, Vietnam National University, Hanoi under project number TN.23.04.
2310.20191
Quantum Subspace Correction for Constraints
We demonstrate that it is possible to construct operators that stabilize the constraint-satisfying subspaces of computational problems in their Ising representations. We provide an explicit recipe to construct unitaries and associated measurements given a set of constraints. The stabilizer measurements allow the detection of constraint violations, and provide a route to recovery back into the constrained subspace. We call this technique ''quantum subspace correction". As an example, we explicitly investigate the stabilizers using the simplest local constraint subspace: Independent Set. We find an algorithm that is guaranteed to produce a perfect uniform or weighted distribution over all constraint-satisfying states when paired with a stopping condition: a quantum analogue of partial rejection sampling. The stopping condition can be modified for sub-graph approximations. We show that it can prepare exact Gibbs distributions on $d-$regular graphs below a critical hardness $\lambda_d^*$ in sub-linear time. Finally, we look at a potential use of quantum subspace correction for fault-tolerant depth-reduction. In particular we investigate how the technique detects and recovers errors induced by Trotterization in preparing maximum independent set using an adiabatic state preparation algorithm.
Kelly Ann Pawlak, Jeffrey M. Epstein, Daniel Crow, Srilekha Gandhari, Ming Li, Thomas C. Bohdanowicz, Jonathan King
2023-10-31T05:23:50Z
http://arxiv.org/abs/2310.20191v2
# Subspace Correction for Constraints

###### Abstract

We demonstrate that it is possible to construct operators that stabilize the constraint-satisfying subspaces of computational problems in their Ising representations. We provide an explicit recipe to construct unitaries and associated measurements for some such constraints. The stabilizer measurements allow the detection of constraint violations, and provide a route to recovery back into the constrained subspace. We call this technique "subspace correction". As an example, we explicitly investigate the stabilizers using the simplest local constraint subspace: Independent Set. We find an algorithm that is guaranteed to produce a perfect uniform or weighted distribution over all constraint-satisfying states when paired with a stopping condition: a quantum analogue of partial rejection sampling. The stopping condition can be modified for sub-graph approximations. We show that it can prepare exact Gibbs distributions on \(d\)-regular graphs below a critical hardness \(\lambda_{d}^{\ast}\) in sub-linear time. Finally, we look at a potential use of subspace correction for fault-tolerant depth-reduction. In particular we investigate how the technique detects and recovers errors induced by Trotterization in preparing maximum independent set using an adiabatic state preparation algorithm.

## I Introduction

Neutral atoms are an emerging platform for quantum computation. This architecture can feature exceptionally long coherence times on the order of 40 seconds [1], high connectivity, mid-circuit qubit rearrangement, and native multi-qubit gates [2; 3; 4; 5], and has recently yielded demonstrations of mid-circuit measurement [6; 7; 8; 9], promising near-term feed-forward and error-correction capabilities. Given the abilities of these new systems, we are motivated to study algorithms suited to their strengths in both the fault-tolerant (FT) and pre-FT regimes.

To date, development of pre-FT algorithms has largely focused on variational algorithms, which aim to maximize the utility of noisy qubits by classically optimizing parameterized circuits [10]. Variational algorithms typically require extensive sampling, which presents a particular challenge for neutral atom QPUs with relatively long readout times [6]. Recent work attempts to address this challenge in terms of reducing readout times [11; 9] or optimization with limited sampling [12], but here we focus on an alternative approach that takes particular advantage of the already-demonstrated long coherence times and mid-circuit measurement available in neutral atom quantum computers.

More recently, a small wave of algorithm development has targeted ion-trap computers, which can feature design elements such as all-to-all connectivity or feed-forward (also called adaptive) circuit capability. In particular, the work presented in this paper has been inspired by the use of adaptive circuits to prepare long-range entangled ground states in shorter depth than is possible with unitaries alone [13; 14]. Especially in the case of the toric code ground state of [13], the perspective can be understood as a retooling of error correction techniques to solve computational, rather than quantum control, problems.
In this publication, we detail a new class of hybrid algorithms intended for neutral atom quantum computers (NAQC) centered on a technique we call "subspace correction" (SSC), which explicitly checks the subspace of a quantum register via strong projective measurements and performs a recovery on violating portions of the state. SSC is built from the ideas of Quantum Error Correction (QEC). For a given subspace, _e.g._ a constrained subspace, one constructs a set of generalized stabilizers whose syndromes determine if the qubit wavefunction resides in the desired subspace or not.

Figure 1: On atomic quantum computing hardware, coherence time (QEC cycle times), qubit count, rearrangement, and native multiqubit gates can deeply expand the non-trivial classical computations possible within the coherent execution of a circuit. These abilities have been, thus far, underutilized in many popular algorithm development approaches of the past decade, where such capabilities were not practical.

The information obtained from the readout of all stabilizer measurements can be aggregated and collectively used to prepare a recovery back into the intended subspace. We believe that there are a number of promising uses of this technique. Most importantly, from a practical standpoint, SSC specifically utilizes the theoretical framework and practical workflow of full error correction: in this way, the search for early use-cases and new algorithm development becomes intrinsically aligned with long term FTQC engineering goals in NAQCs, rather than a detour. Moreover, this technique, as we show, leads to new and potentially interesting early-FT and possibly enduring FT algorithms.

As a first application of SSC, we specifically consider the topic of constraint satisfaction, either in an effort to construct satisfying solutions to a set of constraints or as part of a larger optimization problem or dynamical simulation. In Section II, we focus on the problem of Independent Set (IS), as it has the simplest constraint structure, providing explicit stabilizers for this problem in the main text. Some additional examples of stabilizers for other common constraints are provided in Appendix A. We outline two use-cases. In Section III, we explore exact distribution preparation using SSC as a tool to perform Quantum Partial Rejection Sampling. We investigate the features of this algorithm in depth-bounded and pre-fault tolerant use-cases. In Section IV we briefly look at some potential applications to constrained optimization by using SSC in tandem with a clever adiabatic algorithm for maximum independent set first developed by Ref. [15]. Finally, in Section III.4 we review the merits and limitations of some of these techniques, and chart out a path for future work.

## II Subspace Correction for the Independent Set Constraint

Due to its simplicity and ubiquity, in this section we will construct the tools for subspace correction using the independent set (IS) constraint:

**Definition (Independent Set):**_Given an undirected graph \(\mathcal{G}=(V,E)\), an independent set comprises a subset of vertices \(I\subseteq V\) where no two vertices within \(I\) share an edge. Formally, for every \(u,v\in I\), \((u,v)\notin E\)._

The widely-accepted Ising form of the independent set problem is detailed in [16], along with other common constraints that we consider in Appendix A. Here, each vertex in the graph corresponds to a qubit.
When measured in the computational basis, the state of this qubit, which belongs to the set \(\{0,1\}\), signifies either exclusion (\(0\)) or inclusion (\(1\)) of the vertex in the set. For any two qubits that represent vertices connected by an edge, a measurement of the \(|11\rangle\) state indicates a violation of the independence constraint. In this way, we define an edge subspace that satisfies the independent set constraint, \(\mathcal{V}_{e}\), as any superposition over the basis states \(\{|00\rangle,|01\rangle,|10\rangle\}\). The orthogonal complement of \(\mathcal{V}_{e}\) on each edge is \(\tilde{\mathcal{V}}_{e}\), supported by the \(|11\rangle\) basis vector.

For each edge, one can construct an operator \(\hat{S}_{e}\) such that all \(|\psi\rangle\in\mathcal{V}_{e}\) are in the \(+1\) eigenspace of \(\hat{S}_{e}\), and all \(|\psi\rangle\in\tilde{\mathcal{V}}_{e}\) are in the \(-1\) eigenspace. This operator has the explicit form \(\hat{S}_{(i,j)}=\frac{1}{2}(I+Z_{i}+Z_{j}-Z_{i}Z_{j})\). These operators have the following properties:

1. The operators \(\{\hat{S}_{e}\}\) form a group: \[\text{if }\hat{S}_{e}|\psi\rangle=|\psi\rangle\text{ and }\hat{S}_{e^{\prime}}|\psi\rangle=|\psi\rangle\] \[\text{then }\hat{S}_{e}\hat{S}_{e^{\prime}}|\psi\rangle=|\psi\rangle\]
2. The operators \(\{\hat{S}_{e}\}\) are Abelian: \[\text{if }\hat{S}_{e}|\psi\rangle=|\psi\rangle\text{ and }\hat{S}_{e^{\prime}}|\psi\rangle=|\psi\rangle\] \[\text{then }[\hat{S}_{e},\hat{S}_{e^{\prime}}]|\psi\rangle=0\]

The joint \(+1\) eigenspace under all \(\hat{S}_{e}\) for \(e\in E\) is:

\[\prod_{e\in E}\hat{S}_{e}|\psi\rangle=+1|\psi\rangle\qquad\text{for }\;|\psi\rangle\in\mathcal{V} \tag{1}\]

Given the properties of \(\{\hat{S}_{e}\}\), we can identify this set of operators as the stabilizer of \(\mathcal{V}\) [17; 18], which represents the global subspace of all valid independent sets on a graph \(\mathcal{G}=(V,E)\). For a given graph \(\mathcal{G}\) for which we are interested in states obeying IS, SSC should be applied to the relevant qubits. The result of measuring these stabilizers creates a violation graph, which labels all edges where a violation of IS exists. Depending on the goal of the algorithm, a recovery can then be classically computed and implemented as unitary gates.

Measuring this stabilizer in practice is a straightforward problem of finding the correct controlled unitary to implement onto an ancilla for a given set of constraints. Remarkably, the stabilizer for the independent set problem corresponds to _a single Toffoli gate controlled on the edge qubits, acting on an ancilla_. A primitive of this gate (CCZ) is available as a native gate on neutral atom quantum computers due to the Rydberg interaction [2; 19; 20; 21]. Furthermore, in a restricted Clifford\(+T\) FTQC, Toffoli gates are a low level building block that require only \(7\) \(T\)-gates and \(8\) Clifford gates.

For a fully parallel implementation, the number of ancillae should be equal to \(|E|\), potentially making use of the large numbers of qubits available on NAQC hardware. However, it is possible to perform syndrome extraction with this stabilizing code using as few as one ancilla with repeated measurements and resets. In a FTQC, the stabilizing measurement rounds for SSC can be mixed in with the regular QEC cycles, given that the error correction cycle is long enough to allow for additional classical computation. In the next sections, we give some examples of what to do with these stabilizers.
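As a quick numerical sanity check (our own sketch, not part of the original text, assuming only numpy), one can verify the eigenspaces of \(\hat{S}_{(i,j)}\) directly on the two-qubit computational basis:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

# S_(i,j) = (I + Z_i + Z_j - Z_i Z_j) / 2 for a single edge (i, j)
S = 0.5 * (np.kron(I2, I2) + np.kron(Z, I2) + np.kron(I2, Z) - np.kron(Z, Z))

# The computational basis states are eigenvectors: +1 on the
# independent-set subspace {|00>, |01>, |10>}, and -1 on |11>
for k, label in enumerate(["00", "01", "10", "11"]):
    v = np.zeros(4)
    v[k] = 1.0
    print(f"|{label}> -> eigenvalue {v @ S @ v:+.0f}")
```

Running this prints +1 for |00>, |01>, |10> and -1 for |11>, matching the stated eigenspace structure.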
For a general objective, such as optimization, since the stabilizer only checks for constraint violations, and it is complicated to generate a list of all operations that could lead to a violation, a recovery operation is typically not uniquely specified, unlike in the case of QEC codes. Put differently, due to the structure of the constraint space, a code on it is usually highly degenerate. For example, in the construction of maximum independent set (MIS) using an adiabatic algorithm, one might apply SSC to identify parts of the graph that have left the satisfying subspace. One then needs to recover into \(\mathcal{V}\) with a particular ansatz - whether exactly known or computed, else determined by other methods - that maximizes the probability of algorithm success to the largest degree possible. This recovery is always specifically dependent on the intent of the algorithm.

## III Exact distribution preparation

Preparing, manipulating, and extracting information from distributions is computationally expensive for many graph-based problems, making these tasks an important focus in QC applications research. In this section we demonstrate that the existence of a stabilizer on the Independent Set subspace immediately provides a simple subspace correcting algorithm that allows one to exactly prepare probability distributions (or, partition functions) over independent sets of a graph \(\mathcal{G}\).

First we will look at the case of preparing a state \(\ket{+^{\mathcal{V}}}_{\mathcal{G}}\) that encodes a perfectly uniform distribution over all IS of a graph \(\mathcal{G}=(V,E)\). The algorithm immediately extends to the case of preparing Gibbs distributions, denoted by \(\ket{\lambda_{+^{\mathcal{V}}}}_{\mathcal{G}}\). Furthermore, we will identify that this algorithm is the exact quantum analogue of perfect partial rejection sampling. A relatively new algorithm framework first developed in 2016, partial rejection sampling provably produces samples from exact distributions using a stopping condition [22; 23]. The quantum-classical correspondence of this algorithm allows us to make strong statements about run times, as well as classically simulate run times on any class of graphs up to large sizes, a luxury in quantum algorithm development, and a useful tool for application planning.

### Algorithm Description (Uniform Distribution)

In this section we outline a novel quantum approach to constructing the uniform distribution over all independent sets of a graph using only SSC and Hadamard gates. The algorithm is visually demonstrated in Fig. 2.

For a given graph \(\mathcal{G}=(V,E)\) we assume classical access to an immutable list of edges (E) and vertices (V). We assume a primary quantum register containing \(N=|V|\) qubits \(\{q_{i}\}\), whose state is represented by \(|\psi\rangle\). WLOG we also assume an ancilla register with \(N_{A}=|E|\) qubits \(\{a_{e}\}\), represented by state \(|\phi\rangle\). We require a mutable list of tuples A to track the edges that must be checked and a list of integers B to track the vertices to be corrected.
The algorithm is then given here by Algorithm 1:

```
|psi> <- |0>                                 ▷ initialize quantum register
A <- E                                       ▷ initialize list with all edges
B <- V                                       ▷ initialize list with all vertices
while len(A) > 0 do
    |phi> <- |0>                             ▷ reset ancillae
    for q_v in B do
        |q_v> <- |+>                         ▷ (re)prepare unknown vertices
    end for
    for e in A do
        Toff(q_{e[0]}, q_{e[1]}, a_e)        ▷ extract syndrome
    end for
    A <- List[e for e in A if Measure(a_e) is 1]        ▷ violating edges
    B <- List[q_v for v in A U Neighbor(A)]
    A <- List[e in E with an endpoint in B]  ▷ edges with unknown state
end while
Returns: |psi> in the state |+^V>_G
```
**Algorithm 1** Uniform Distribution Preparation for Independent Set

In each round, the qubits in B are prepared in \(|+\rangle\) before syndrome extraction, and the following round re-checks every edge whose state is unknown after recovery, i.e., every edge incident to a vertex of B (cf. Figure 2).

Figure 2: Syndrome extraction depiction with a violation and the recovery sequence. A layer of Toffoli gates is applied to the edges of a graph prepared in the \(\ket{+}^{\otimes N}\) state. On read-out of the ancilla, the state is projected into \(\tilde{\mathcal{V}}_{e}\), while the vertices not dominated by the violation locations (identified as blue with \(+^{\mathcal{V}}\) label) are in the \(\ket{+^{\mathcal{V}}}_{g}\) state for the violation-free subgraph \(g\). The syndrome graph, i.e. the location of violations, is constructed. Wherever a violation is found, both the vertices directly adjacent to the edge violation and their neighbors must be recovered. If two violations are within distance-1, they can be merged into a larger cluster. The algorithm is then repeated on all edges with an unknown state (black and dashed edges).

### Final State Guarantees

Algorithm 1 always results in a perfect uniform distribution over independent sets, a fact we provide a formal proof of (and its extension to Gibbs distributions) in Appendix B. In fact, the intermediate state is always a uniform distribution at the completion of each step. We imagine starting our quantum register in the Hadamard basis \(|\!+\rangle=|+\rangle^{\otimes N}\) and the ancillas in the \(|\mathbf{0}\rangle=|0\rangle^{\otimes N_{a}}\) state. In the bitstring basis, this is represented as:

\[|\!+\rangle|\mathbf{0}\rangle_{a}=\frac{1}{2^{N/2}}\sum_{\mathbf{k}=0}^{2^{N}-1}|\mathbf{k}\rangle|\mathbf{0}\rangle_{a} \tag{2}\]

The stabilizing operators are implemented by controlling a set of ancilla-targeting Toffolis on unknown edge states. First consider a graph with \(N\) vertices and a single edge; the stabilizer measurement results in a state entangled across the registers:

\[\text{Toff}_{S_{e}}|\!+\rangle|\mathbf{0}\rangle_{a}=\frac{1}{2^{N/2}}\left(\sum_{\mathbf{k}\in\mathcal{V}_{1}}|\mathbf{k}\rangle|0\rangle_{a}+\sum_{\mathbf{k}\in\mathcal{\tilde{V}}_{1}}|\mathbf{k}\rangle|1\rangle_{a}\right) \tag{3}\]

Both the left and right sums are uniform superpositions over a set of bitstrings, with the left (right) being a uniform distribution of all states satisfying (violating) independence on the edge. Measuring the ancilla qubit applies one of the projective measurements \(\Pi_{0}=|0\rangle\langle 0|\) and \(\Pi_{1}=|1\rangle\langle 1|\). Hence, the state is always collapsed into a uniform distribution.
In the case that a violation is found on the edge, we have collapsed the state into:

\[\sum_{\mathbf{k}\in\mathcal{\tilde{V}}_{1}}|\mathbf{k}\rangle|1\rangle_{a}=|+^{\mathcal{V}}\rangle_{g}|1\rangle_{e}|1\rangle_{a} \tag{4}\]

where \(|+^{\mathcal{V}}\rangle_{g}\) is a uniform superposition of IS on the subgraph \(g=G/(A\cup\partial_{A})\), which is \(G\) with the vertices of the violating edge and the neighboring vertices (\(\partial_{A}\)) removed, and \(|1\rangle_{e}=|11\rangle_{ij}\) for the vertices \((i,j)=e\). The ancilla and violating qubits are returned to the single qubit \(|0\rangle\) states and a Hadamard is applied to the violating qubits, preparing the state \(|+^{\mathcal{V}}\rangle_{g}|+\rangle_{e}|0\rangle_{a}\); the procedure is then repeated on the edge until it succeeds, preparing the state \(|+^{\mathcal{V}}\rangle_{\mathcal{G}}\) on the primary register.

For the more general case of \(|E|\) edges, a similar analysis shows that the distribution is always uniform. In this case the distributions conditioned on the state of the ancilla register measurement still remain uniform. In the event syndrome extraction projects into a state with violations on a set of edges provided by list \(A\), one obtains in the primary register the state \(|\psi\rangle=|+^{\mathcal{V}}\rangle_{g}|1\rangle_{A}|\mathbf{0}\rangle_{\partial_{A}}\), where \(|\mathbf{0}\rangle_{\partial_{A}}\) denotes all vertices neighboring the vertices in \(A\). The form of this result is important for later discussions.

#### ii.2.1 Extension to Gibbs Distributions

The classical version of this algorithm has a natural extension to Gibbs distributions. This is obtained by replacing the even sampling of each vertex from \(\{0,1\}\) with drawing from a Bernoulli\((\frac{\lambda}{\lambda+1})\) distribution, where \(\lambda\) is the so-called hardness parameter1. As is known in the literature, it can be shown that the final probability distribution from this process assigns a weight, up to an overall normalization, \(p(s)\propto\lambda^{|s|}\) to a choice of bitstring from the set of independent sets, \(s\in IS\), where \(|s|\) is the cardinality of the set.

Footnote 1: In physics inspired descriptions, this parameter is related to chemical potential \(\mu\) and temperature \(T\) as \(\ln(\lambda)=\mu/T\)

The extension of Algorithm 1 to prepare Gibbs distributions over IS similarly requires replacing the Hadamard gates, \(H\), which prepare the single qubit states \(|+\rangle\), with rotated gates, \(H_{\lambda}\), that prepare the single qubit states given by:

\[|\lambda_{+}\rangle=\sqrt{\frac{\lambda}{1+\lambda}}|0\rangle+\sqrt{\frac{1}{\lambda+1}}|1\rangle \tag{5}\]

This state is equivalent to \(|+\rangle\) when \(\lambda=1\). The resulting state prepared is given by:

\[|\lambda_{+^{\mathcal{V}}}\rangle\langle\lambda_{+^{\mathcal{V}}}|_{\mathcal{G}}=\frac{1}{Z_{\mathcal{G},0}(\lambda)}\sum_{s\in IS}\lambda^{|s|}|s\rangle\langle s| \tag{6}\]

where the normalization factor is:

\[Z_{\mathcal{G},0}(\lambda)=\sum_{s\in IS}\lambda^{|s|} \tag{7}\]

A proof of this claim is also provided in Appendix B.
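To make Eqs. (6)-(7) concrete, the target weights can be tabulated by brute force on small graphs. The sketch below is our own illustration (assuming networkx; the helper name gibbs_weights is ours), enumerating the independent sets of a graph and normalizing \(\lambda^{|s|}\) by \(Z_{\mathcal{G},0}(\lambda)\):

```python
import itertools
import networkx as nx

def gibbs_weights(G, lam):
    """Target distribution of Eq. (6): p(s) = lam^|s| / Z over all
    independent sets s of G, with Z = sum_s lam^|s| as in Eq. (7)."""
    nodes = list(G.nodes)
    weights = {}
    for r in range(len(nodes) + 1):
        for s in itertools.combinations(nodes, r):
            # s is an independent set iff no edge lies inside it
            if not any(G.has_edge(u, v)
                       for u, v in itertools.combinations(s, 2)):
                weights[s] = lam ** len(s)
    Z = sum(weights.values())
    return {s: w / Z for s, w in weights.items()}

# Example: the path 0-1-2 at lam = 1 has 5 independent sets,
# each with weight 1/5 (the uniform case of Algorithm 1).
print(gibbs_weights(nx.path_graph(3), 1.0))
```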
Algorithm 1 then generalizes to the following Algorithm 2:

```
\(|\psi\rangle\leftarrow|\mathbf{0}\rangle\) \(\triangleright\) initialize quantum register
\(A\leftarrow E\) \(\triangleright\) initialize list with all edges
\(B\leftarrow V\) \(\triangleright\) initialize list with all vertices
while \(\text{len}(A)>0\) do
  \(|\phi\rangle\leftarrow|\mathbf{0}\rangle\) \(\triangleright\) reset ancillae
  for \(e\in A\) do
    \(\text{Toff}(q_{e[0]},q_{e[1]},a_{e})\) \(\triangleright\) extract syndrome
  end for
  A \(\leftarrow\) List[\(e\) for \(e\in A\) if Measure\((a_{e})\) is 1]
  B \(\leftarrow\) List[\(q_{v}\) for \(v\) in \(A\cup\) Neighbor\((A)\)]
  for \(q_{v}\in B\) do
    \(|q_{v}\rangle\leftarrow|\lambda_{+}\rangle\)
  end for
end while
Returns: \(|\psi\rangle\) in the state \(|\lambda_{+^{\mathcal{V}}}\rangle_{\mathcal{G}}\)
```
**Algorithm 2** Gibbs \(\lambda\) Distribution Preparation for Independent Set

### Runtime analyses

The runtime of Algorithm 1 can be carefully bounded analytically, or simulated classically without the need for quantum simulation techniques, using the exact classical mapping to partial rejection sampling of the previous section. To consider the runtime, we generalize to the case of the Gibbs distribution with hardness parameter \(\lambda\), where \(\lambda=1\) corresponds to the uniform distribution case. We can then directly quote the results of runtime analyses for the classical process.

For graphs of bounded degree \(d\), [22] showed, analytically, that hardness parameters bounded by the inequality

\[\lambda<\frac{1}{2\sqrt{e}\,d-1}\]

yield an expected runtime of \(O(n)\) in the worst case, and \(O(n\log n)\) with high probability. This is a lower bound on \(\lambda_{d}^{*}\), which represents the largest \(\lambda\) for which all graphs of bounded degree \(d\) converge with this expected runtime. It asymptotically coalesces with the bound \(\bar{\lambda}_{d}^{*}\sim O(d^{-1})\) beyond which it becomes provably NP-hard to sample from this class of graphs [24; 25; 26]. The runtime estimate for the classical algorithm was tightened for \(d=3\) in Ref. [23] to \(\lambda_{3}^{*}\geq 0.150\), where slack still exists in the analysis. The strict upper bound on this quantity is given by \(\lambda_{3}^{*}<0.5-\epsilon\). For this reason, the preparation of uniform states (\(\lambda=1\)) in such classes of graphs with \(d=3\) will always be exponential in the average case. When looking at the preparation of Gibbs states on 3-regular graphs, we find that for the average case one obtains sublinear preparations for \(\lambda\lesssim 0.7\).

With the knowledge of these analytical bounds, in the next section we explore numerically some graph classes up to size \(n=80\). We do this by directly applying the classical algorithm for partial rejection sampling to graphs chosen at random, using either an appropriate random sampling function from the networkX package when available, or through careful construction of an unbiased graph sampling algorithm. Mean and median data are calculated from at least 100 samples for each graph size point, with all collected data plotted in a semi-transparent color. In particular, we look at scaling for planar graphs, sub-graph approximations, and then finally look at empirical results for Gibbs state preparation, where we find sub-linear scaling on average for larger \(\lambda\) values than the provided bounds on average-case instances.
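The experiments described below amount to instrumenting this classical process. A minimal harness of our own (assuming the `classical_prs_is` sketch above; `networkx.random_regular_graph` requires \(n\,d\) even and \(n\geq d+1\)) might look like:

```python
import random
import statistics
import networkx as nx

def runtime_experiment(n_values, d=3, lam=1.0, samples=100, seed=0):
    """Record rounds-to-convergence of classical PRS on random
    d-regular graphs, returning {n: (mean_rounds, median_rounds)}."""
    rng = random.Random(seed)
    results = {}
    for n in n_values:
        rounds = []
        for _ in range(samples):
            g = nx.random_regular_graph(d, n, seed=rng.randrange(2**31))
            adj = {v: set(g.neighbors(v)) for v in g.nodes}
            _, r = classical_prs_is(adj, lam=lam, rng=rng)
            rounds.append(r)
        results[n] = (statistics.mean(rounds), statistics.median(rounds))
    return results

# e.g. runtime_experiment([20, 40, 60, 80], d=3, lam=0.5)
```

Plotting the means against \(n\) on the appropriate axes reproduces the qualitative scaling behavior discussed next.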
#### iii.3.1 Empirical runtime for uniform preparation on regular planar graphs

The cited papers on partial rejection sampling in independent sets provided extensive analytical and simulation-based evidence for runtime bounds in graphs of bounded degree \(d\). Because planar graphs carry additional structure, we considered that runtimes might be improved in this class of bounded-degree graphs. As shown in the \(\log-\log\) plot in Fig. 3, we found that the average runtime for planar graphs of bounded degree \(d=3\) asymptotically limits to weakly exponential scaling, to the best of our fits. The plots were made by running the algorithm for 200 random graphs per data point and recording the number of rounds required to reach the algorithm termination condition. The random graphs were drawn from a custom script that uniformly samples planar graphs of a given bounded degree. The histogram of runtimes is heavily skewed toward fewer rounds, as indicated visually2 and by the median (dashed points), with a long tail of low-probability, high-rounds events. Planar graphs of bounded degree \(d=4\) are clearly exponential beyond a graph size of \(n=30\), and \(d=2\) has clear logarithmic scaling in graph size.

Footnote 2: Note that this is a lin-log plot, so the approximately normal sample appearance about the line of fit corresponds to a skewed log-normal distribution in the raw data

Despite weakly exponential behavior, the number of SSC rounds for graph sizes required for many real-life applications (as few as \(O(100)\)) would not be prohibitive to using the algorithm to prepare states for further processing if a uniform distribution were required for the application. It is also important to remember that the number of rounds required is not the number of repeated circuit executions, but is instead proportional to the total (logical) circuit depth.

Figure 3: Plots of uniform preparation runtime vs. graph size in planar graphs of bounded degree \(d\), obtained by running the classical dual algorithm up to graph size \(n=80\). Triangular data points represent mean runtimes for 200 random graphs for each \(n\). The dash marker represents the median. Each semi-transparent point on the plot represents a single run. Red ‘x’ marks indicate that the runtime exceeded the maximum number of tries (\(5\times 10^{5}\) for \(d=4\) and \(5\times 10^{4}\) for \(d=3\)). For planar graphs of degree \(d=3\) we observe weakly exponential runtimes asymptotically for the average case. The dashed lines represent the best and worst possible runtimes for any graph of size \(n\).

#### iii.3.2 Halting states on \(\alpha-\)subgraphs

Rather than run the algorithm to completion, one may alternatively choose an early halting condition based on the fraction of edges, \(\alpha=|A|/|E|\), that are permitted to contain violations. This results in a termination into the state:

\[|\psi_{\alpha}\rangle=|+^{\mathcal{V}}\rangle_{g_{\alpha}}|1\rangle_{A}|\mathbf{0}\rangle_{\partial_{A}} \tag{8}\]

where \(g_{\alpha}=G\backslash(A\cup\partial_{A})\) is the subgraph less the vertices associated with violating edges. Similar to before, each data point represents runs from 100 random 3-regular graphs generated using the networkX.random_regular_graph() function for all allowable graph sizes up to \(n=80\) for \(d=3\). From our numerical evidence in Fig. 4, it seems that for graph sizes up to \(n\approx 80\), approximations with \(\alpha\approx 0.1\) can be prepared sub-linearly in the average case.
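In the classical dual, this early-halting condition is a one-line change to the convergence test. A sketch under the same assumptions as the earlier code (names are ours):

```python
import random

def classical_prs_halting(adj, alpha=0.1, lam=1.0, rng=random):
    """PRS variant that halts once the fraction of violating edges
    drops to alpha = |A|/|E|; alpha=0 recovers full convergence.
    Returns (state, violating_edges, rounds)."""
    p_in = lam / (1.0 + lam)
    state = {v: int(rng.random() < p_in) for v in adj}
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    rounds = 0
    while True:
        violated = [(u, v) for (u, v) in edges if state[u] and state[v]]
        if len(violated) <= alpha * len(edges):
            return state, violated, rounds
        rounds += 1
        bad = set()
        for (u, v) in violated:
            bad |= {u, v} | adj[u] | adj[v]
        for v in bad:
            state[v] = int(rng.random() < p_in)
```

For simplicity this version rechecks every edge each round; it has the same fixed points as the earlier sketch but trades a little per-round work for clarity.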
These results should be continued out to larger graph sizes using a more streamlined simulation technique, or verified analytically. From such a state, one can apply a unitary operation on \(|1\rangle_{A}|\mathbf{0}\rangle_{\partial}\) to prepare a target uniform state which spans a larger fraction of the states. For example, one might prepare \(|+^{\mathcal{V}}\rangle_{e}\) sequentially on each violating edge \(e\) to obtain a uniform distribution on the subgraph induced by \(G\backslash B\), where the boundary \(|0\rangle_{\partial}\) vertex states are collapsed. In practice, on graphs small enough to simulate quantum circuits on (\(n\leq 15\)), this results in uniform distributions that span a reasonable subset of the IS. Due to how small the graphs are, and the likelihood of making analytical progress on this question in an upcoming publication, we do not include these small numerical studies here. In principle, one is free to construct a clever oracle involving a nontrivial classical computation of a unitary \(U_{repair}\) which is capable of transforming violation and boundary regions into an acceptable state for the intended application.

#### iii.3.3 Sublinear preparation of Gibbs distributions

As mentioned, the algorithm can be adapted to prepare Gibbs distributions over the independent sets, as states of the form of Eq. (6), rather than uniform distributions. These distributions fully span the independent sets, but have a weight dependent on the chosen hardness parameter \(\lambda\). We numerically evaluated the expected runtime for preparing Gibbs distributions on \(3-\)regular graphs for various \(\lambda\) in Fig. 4. Each data point in the plot corresponds to 100 random graphs generated by the networkX.random_regular_graph() function for all allowable graph sizes up to \(n=80\) for \(d=3\). While the limits worked out analytically refer to average runtimes of worst-case graphs in their respective classes, we find that up to about \(\lambda\sim 0.7\) we are able to prepare distributions for random graphs efficiently, on average. This result is quite remarkable when we consider the value of a perfectly prepared q-sampling state when used as an input to other algorithms such as Grover Search, Quantum Counting, or potential distribution comparison algorithms[27; 28; 29]. We remark on future work towards this goal in the discussion of Section III.4.

### Discussion

The mapping to classical runtimes is a double-edged sword as, for the case of _direct_ sampling from perfect distributions, this algorithm cannot produce a single sample more efficiently than a classical computer. We note that the output of this algorithm, however, is the _entire distribution_ amplitude-encoded into the quantum register, while the output of the classical sampling algorithm is a single bitstring. We note that some algorithms for approximately uniform sampling have been previously devised using adiabatic[30] and Markov methods[31], although these are approximate preparations. We argue that the ability to produce _perfect_ distributions, \(|\psi\rangle=\sum_{s\in IS}\sqrt{p_{s}}|s\rangle\), with known distribution properties can be considered a resource, especially when these distributions can be prepared in sub-linear time like the Gibbs distributions for sub-critical \(\lambda\). Such a resource state, which spans the entire basis of independent sets, may serve as the first step in an algorithm that probes quantitative properties of the graph.
For example, using amplitude amplification[27; 32], one could find a set of independence number \(k\) within \(O(N\sqrt{Z_{\mathcal{G},0}(\lambda)/m_{k}\lambda^{k}})\) time, where \(m_{k}\) is the multiplicity of sets of independence \(k\), for \(\lambda<\lambda^{*}\). One may also use such states as the inputs to distribution tests, for example using orthogonality tests as in Ref [29]. Another interesting property of the distribution algorithms is that if the phases of the states in the distribution do not matter for the follow-up to the algorithm, then one only needs to correct for bit-flip errors during the state preparation. The result from a bit-flip corrected implementation of this code will still have a classical probability distribution encoded into the amplitudes of states, but the phases of those states will be randomly distributed according to the mechanism of phase errors. This code is also robust to leakage errors due to qubit loss, since detection of leakage and atom replacement can be built into the SSC protocol.

## IV Applications to Constrained Optimization: Adiabatic Depth Reduction

In addition to the distribution preparation protocol, one can use SSC as a way to stabilize constraint subspaces during the execution of an optimal state preparation algorithm. The typical way constrained optimization problems are distilled for quantum computation is to map them to an Ising-like Hamiltonian, most famously demonstrated in Lucas' _Ising forms of many NP Problems_[16]. The Hamiltonians of these problems have a common structure, namely, they are typically the sum of two component Hamiltonians:

\[H=\lambda H_{A}+H_{B}, \tag{9}\]

where \(H_{A}\) encodes a set of _constraints_ on the qubits as its lowest energy state, \(H_{B}\) encodes the _objective_ as its lowest energy states, and \(\lambda\) is some Lagrange multiplier that roughly signifies the importance of the constraints in the energy landscape. An adiabatic, or adiabatic-inspired, algorithm is then run to find the low-energy states of this problem.

An obstacle for such approaches is that, often, such algorithms return invalid (i.e. constraint-violating) states with high probability. The reason for this is either the limited expressibility of a parameterized variational ansatz, such as in QAOA, or errors from non-adiabatic terms in adiabatic evolution or its Trotterization. Given this, it is reasonable to ask if SSC can improve adiabatic algorithms by explicitly constraining them to run in the intended subspace. A related study in Ref [33] used the quantum Zeno effect to monitor constraints without recovery, in tandem with variational algorithms such as QAOA, and found evidence for solution quality improvement at very small problem sizes. As we are interested not in variational algorithms but rather in coherent algorithms of the form shown in Fig. 1, we instead look at how SSC could help reduce the depth needed for optimization problems using adiabatic preparation.

In this section, we demonstrate an example of this idea using SSC to reduce the circuit depth needed to approximate solutions to Maximum Independent Set (MIS) for a given graph \(\mathcal{G}=(V,E)\). We provide details for the full algorithm, including the evolution unitary and a recovery strategy. We are only able to simulate this method on small graph sizes due to the computational intensity of adaptive circuits.

### Non-Abelian Adiabatic Mixing

First we describe the Non-Abelian adiabatic evolution algorithm for preparing MIS.
In the notation of (9), the problem of MIS is given by the constraint Hamiltonian:

\[H_{A}=-\Delta\sum_{\langle ij\rangle}\left(Z_{i}+Z_{j}-Z_{i}Z_{j}\right) \tag{10}\]

where the sum runs over all edges \(\langle ij\rangle\) in \(\mathcal{G}\), \(Z_{i}\) is the Pauli-Z matrix acting on the \(i\)th qubit, and \(\Delta\) is an energy scale. The objective Hamiltonian is given by:

\[H_{B}=-\sum_{i}Z_{i} \tag{11}\]

such that maximizing the number of qubits in the single-qubit \(|1\rangle\) state minimizes the value of this term.

Figure 4: Triangular data points represent mean runtimes for 100 random graphs for each \(n\). The dash marker represents the median. Each semi-transparent point on the plot represents a single run. (Left) Scaling for preparation of \(g_{\alpha}\)-subgraphs. Our fits reveal that, at least for graph sizes up to \(n\sim 80\), halting states for \(\alpha\geq 0.10\) have runtimes that scale efficiently with graph size. (Right) Gibbs preparation of various \(\lambda\) states on \(3-\)regular graphs. While the loose analytical bound for efficient scaling for worst-case graphs is around \(\lambda_{3}^{*}\sim 0.15\), we find that, empirically, average case graphs scale sublinearly in rounds up to \(\lambda\approx 0.7\). The inset provides the linear-linear view of the plot to show the phase transition in hardness more obviously.

We utilize a non-abelian adiabatic state preparation algorithm, \(U_{A}(t)\), first detailed by Yu, Wu, and Wilczek [15; 30]. We chose this algorithm due to the fact that there is a large gap protecting the independent set subspace at all points in the evolution. This method of state preparation differs from standard annealing, which relies on the ramping-out of a transverse field. Rather, the constraint Hamiltonian \(H_{A}\) is taken to be the problem Hamiltonian, and the optimization is carried out by a slow global rotation applied to the quantum register. Prior to the start of the algorithm, the quantum register is prepared in a _least optimal_ state within \(\mathcal{V}\), i.e. a state that maximizes \(H_{B}\). In the case of maximum independent set, this is simply \(|\mathbf{0}\rangle\). The combined action of \(H_{A}\) and the slow global rotation induces mixing in the ground-state subspace only, evolving the register toward the most optimal state while staying within \(\mathcal{V}\) for a sufficiently slow rotation.

One should note that the form of Eq. (10) is, up to a rescaling and shift, identical to the stabilizer for independent set. The sum of this Hamiltonian runs over edges, and on each edge, the Hamiltonian has the spectrum:

\[E_{\langle ij\rangle}=\begin{cases}-\Delta&|00\rangle_{ij},\,|01\rangle_{ij},\,|10\rangle_{ij}\\ 3\Delta&|11\rangle_{ij}\end{cases}, \tag{12}\]

The Hamiltonian contains a \(\Delta E=4\Delta\) gap between the ground state and the first excited state. The ground-state subspace of this Hamiltonian, and hence the \(\mathcal{V}\) of the full optimization problem, is the set of independent sets of a graph \(\mathcal{G}\). We use the same complex representation of the global SO(3) rotation3 over \(N\) qubits presented in the original paper, namely:

Footnote 3: The sign differences in our rotation matrix are due to identifying the \(|1\rangle\) state as set inclusion, while the previous authors considered \(|0\rangle\) to indicate inclusion.
\[U_{B}(\theta,\varphi)=\begin{pmatrix}-\cos(\theta/2)&e^{i\varphi}\sin(\theta/2)\\ e^{-i\varphi}\sin(\theta/2)&\cos(\theta/2)\end{pmatrix}^{\otimes N}, \tag{13}\]

where \(\theta=\theta(t)\) and \(\varphi=\varphi(t)\) are parameterized functions of time. The ideal adiabatic evolution then takes the form

\[U_{A}(t)|\psi(0)\rangle=e^{-i\int_{0}^{t}H(t^{\prime})dt^{\prime}}|\psi(0)\rangle=|\psi(t)\rangle \tag{14}\]

for \(H(t)=U_{B}(t)HU_{B}^{-1}(t)\). At the end of the evolution, a bit flip operation is applied to all qubits in the register, and the register is read out in the computational basis. If the evolution is sufficiently slow, _e.g._ slow enough not to close the exponentially small gap in the effective gauge Hamiltonian4, then the measured state will, with a very high probability, be a maximum independent set. The authors of Ref [15] showed that for parameters \(T=N^{2}\), \(\theta=\pi t/T\), \(\dot{\varphi}=t\), where \(T\) is the total runtime, the algorithm generally produces MIS with a high probability.

Footnote 4: This is different than the gap in the constraint Hamiltonian, and is instead the gap separating states representing sets of maximum independence

Figure 5: Exact instantaneous spectrum of the Non-Abelian Adiabatic algorithm for MIS, calculated for a single edge. The \(|-\rangle\) Bell State always has constant energy in the moving frame. The ground state of the system is a time-dependent function of the remaining two constraint-satisfying basis states. The frequency of oscillation between the two states is dependent on \(\dot{\varphi}\).

#### iv.1.1 Gate-based Algorithm

To target a universal gate-based platform, we must Trotterize this unitary evolution and convert it to gates, which introduces additional errors that lead the quantum register to return invalid states when measured in the computational basis. We use a first-order Trotterization to convert \(U_{A}(t)\) into a form appropriate to transpile:

\[U_{A}(t) \approx\prod_{n=0}^{N_{T}}U_{B}(n\,dt)e^{-idtH_{A}}U_{B}^{-1}(n\,dt) \tag{15}\]
\[\approx U_{B}(N_{T}\,dt)\prod_{n=1}^{N_{T}}e^{-idtH_{A}}U_{B}^{-1}(n\,dt)U_{B}\left((n-1)\,dt\right)\]
\[=U_{B}(N_{T}\,dt)\prod_{n=1}^{N_{T}}\delta U_{A}(n)\]

where in the second line we have shuffled the terms so that after every application of \(\delta U_{A}(n)\), the quantum register is in the basis in which the constraint \(H_{A}\) is applied. The leftover \(U_{B}(N_{T}\,dt)\) is a full \(\theta=\pi\) rotation of the register and can be conveniently omitted to prevent having to classically apply bit-flips to each element of the bitstring as is done in [15]. Using parameter selections guided by Ref. [15] (hence restricting to a good approximation of adiabatic evolution in continuous time), the Trotter step size becomes the free parameter in this algorithm that controls depth. Trotter errors should play a dominant role in the creation of excitations out of the ground state manifold, unless the step size chosen is below the Trotter transition. This transition has been identified by multiple authors [34; 35] and is reproduced empirically here.

### SSC for Non-Abelian MIS

SSC is applied in the Trotterized algorithm of (15) by interleaving the stabilizer extraction and recovery gates between successive \(\delta U_{A}(n)\), i.e. while the quantum register is in the correct basis to respect constraints. SSC need not be applied every step. An analysis to understand the circuit-depth trade space, similar to when choosing the correct product formula order, should be carried out.
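To make the single-edge building blocks concrete, the following is a small numerical sketch of our own (not the authors' code): it builds the single-edge constraint Hamiltonian of Eq. (10) with \(\Delta=1\), checks the spectrum of Eq. (12), and forms one Trotter step \(\delta U_{A}(n)\) of Eq. (15) under the illustrative schedule \(\theta=\pi t/T\), \(\varphi=t^{2}/2\) (i.e. \(\dot{\varphi}=t\)):

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])             # Z|0> = +|0>, Z|1> = -|1>

def kron_all(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Single-edge constraint Hamiltonian, Eq. (10) with Delta = 1
HA = -(kron_all(Z, I2) + kron_all(I2, Z) - kron_all(Z, Z))
print(np.diag(HA))                    # [-1. -1. -1.  3.]: matches Eq. (12)

def U_B(theta, phi, N=2):
    """Global rotation of Eq. (13) on N qubits."""
    u = np.array([[-np.cos(theta/2), np.exp(1j*phi)*np.sin(theta/2)],
                  [np.exp(-1j*phi)*np.sin(theta/2), np.cos(theta/2)]])
    return kron_all(*([u]*N))

def delta_UA(n, dt, T):
    """One first-order Trotter step of Eq. (15) on a single edge."""
    theta = lambda t: np.pi*t/T       # assumed schedule
    phi = lambda t: t**2/2            # consistent with phi-dot = t
    return (expm(-1j*dt*HA)
            @ U_B(theta(n*dt), phi(n*dt)).conj().T
            @ U_B(theta((n-1)*dt), phi((n-1)*dt)))
```

Multiplying the `delta_UA(n, dt, T)` factors for `n = 1..N_T` gives the Trotterized single-edge evolution discussed above, with the \(4\Delta\) gap of `HA` protecting the independent set subspace.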
#### iv.2.1 Recovery Operation

Up until now, we have not discussed the use of a tailored recovery operation for SSC. We are now in the context of an optimization problem and must consider an appropriate ansatz based on our evolution unitary. Because our state preparation is adiabatic, we expect our quantum register to, ideally, remain in the instantaneous ground state (or at least the low-energy manifold) of a time-dependent Hamiltonian, which eventually evolves to have MIS as the ground state. In the moving frame of a Hamiltonian dependent on parameters \(\Phi\), a state evolves as:

\[i\partial_{t}|\bar{\psi}\rangle=\left(U^{\dagger}HU-i\dot{\Phi}U^{\dagger}\partial_{\Phi}U\right)|\bar{\psi}\rangle \tag{16}\]

We define:

\[\bar{A}_{\Phi}=iU^{\dagger}\partial_{\Phi}U \tag{17}\]

so that in the moving frame:

\[\bar{H}=U^{\dagger}HU-\dot{\Phi}\bar{A}_{\Phi} \tag{18}\]

\(\bar{H}\), together with its eigenvectors and eigenvalues, can be calculated exactly for a graph with two vertices and a single edge between them, after a first-order expansion in the slow parameters \((\dot{\theta},\dot{\varphi})\). In particular, the eigenstates are \(|11\rangle\) with an energy \(E_{11}=4\Delta\), the Bell state \(|\Phi^{-}\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle)\) with an energy \(E_{-}=0\), and a time-dependent mixture of the remaining two eigenstates, which are combinations of the \(\{|00\rangle,|01\rangle,|10\rangle\}\) basis states. A plot of the instantaneous energies of the subspace for the single edge problem is provided in Fig. 5.

When running the ideal algorithm on a single edge, the exact recovery upon projecting into the violating state \(|11\rangle\) is to apply a unitary that prepares the lowest-energy state at time \(t\). This can be done by numerically computing the form of the lowest-energy eigenstate at time \(t\), and applying a gate sequence to prepare that two-qubit state. The general parameterized sequence for applying this in practice is provided in Appendix C.

In a larger graph, this method of determining the correct recovery faces two issues. The first is that finding the optimal target state of a recovery in the same way would require classically simulating the entire quantum system, which is self-defeating. Moreover, once we have a graph with multiple adjacent edges, syndrome extraction of violations results in projecting boundary vertices into the \(|0\rangle\) state, similar to the sampling algorithm. This results in an overall degradation of the solution, since the objective is to find MIS, or an approximation. Despite these points, in the next section, we will consider using a simple recovery strategy inspired by the isolated edge case.

Empirically, we tested multiple local recovery strategies back into the independent set subspace of general graphs under adiabatic evolution, including preparing violating edges into \(|00\rangle\), \(|\Phi^{-}\rangle\), and \(|+\rangle\). By far, the best results were obtained when we replaced the two-qubit state of any detected violation with the wavefunction of the isolated edge system under the same evolution at the appropriate time \(t\). This approximation appears to work well for small graph sizes with low connectivity, and leaves room for further improvements. Due to the limitations of the software used for simulation, it was not viable to apply recovery strategies that were functionally dependent on the read-out of multiple ancilla qubits.
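The single-edge recovery target can be computed numerically without any symbolic work. A minimal sketch of our own, reusing `HA` and `U_B` from the previous sketch and assuming the same schedule (\(\theta=\pi t/T\), \(\varphi=t^{2}/2\)):

```python
import numpy as np

def recovery_target(t, T, Delta=1.0, eps=1e-6):
    """Instantaneous lowest-energy state of the moving-frame
    single-edge Hamiltonian, Eqs. (16)-(18), at time t."""
    theta, phi = np.pi*t/T, t**2/2
    U = U_B(theta, phi)
    # finite-difference dU/dt using the schedule derivatives
    Up = U_B(theta + (np.pi/T)*eps, phi + t*eps)
    A = 1j * U.conj().T @ (Up - U) / eps      # = Phi-dot * A-bar_Phi
    Hbar = U.conj().T @ (Delta * HA) @ U - A
    Hbar = (Hbar + Hbar.conj().T) / 2         # remove finite-difference noise
    evals, evecs = np.linalg.eigh(Hbar)
    return evecs[:, 0]                         # ground state at time t
```

Upon detecting a violation at Trotter step \(n\), one would reset the two edge qubits and apply a two-qubit preparation circuit for `recovery_target(n*dt, T)`; this is the isolated-edge recovery used in the next section.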
### Numerical Results

Empirically, we find that SSC protects against Trotter errors by exploring how algorithm outcomes behave as a function of Trotter step size with all other algorithm parameters held fixed. To limit the computational resources expended in this preliminary study, we constrained ourselves to the \(S_{n}\) graphs described in Ref [15], which feature some nice properties and an exponentially small gap in the instantaneous frame. We chose to apply SSC once every 10 Trotter steps, or once every \(N_{T}/4\), whichever is smaller. For the recovery, we applied a strategy inspired by the isolated edge wavefunction described above, limited by the functionality of commercially available simulators. We performed syndrome extraction in a serial fashion on each edge. If during extraction we detected a violation of the constraint subspace, we reset the two edge qubits and prepared the state that an isolated edge graph would be in under the same evolution protocol. We then moved to the next edge in the graph. This algorithm is not perfectly identical to reading out all ancillas and then determining a recovery.

Fig. 6 shows our limited numerical results. The behavior of the adiabatic algorithm is complicated even in the exact case. The small region of high probability to find a good approximation at \(O(10)\) steps appears to be an artifact that vanishes with increasing problem size, seems to be dependent on the numerical value of \(\dot{\varphi}\), and merits further investigation. We instead focus on the behavior at increasing step sizes, where, at about half the depth of the transition in Trotter error for the exact case, we see a transition in solution behavior in the SSC version. In particular, we see that the next best approximation to MIS becomes the most probable bitstring. The two solutions asymptotically converge as Trotter error vanishes.

### Discussion

The numerical results of this study should be taken with caution as the graphs investigated are very small. Moreover, we explored only a limited class of graphs for this study. Due to the structure of the \(S_{n}\) graphs, it is possible that the isolated edge ansatz performs unusually well. We did see similar behavior in the handful of very small (up to size 12) 3-regular graphs simulated; however, we did not do exhaustive testing of random graphs due to the computational expense of simulating this algorithm, and have omitted these results at present. In future work, we hope to construct a tailored simulation pipeline to speed up explorations of random graph classes, and also to explore recovery techniques with more complexity. Unfortunately, feed-forward circuits are heavily resource intensive. That being said, it is quite surprising how well the isolated edge ansatz performs in practice. This inspires optimism that more detailed recovery strategies involving harder classical computations and larger recovery clusters could potentially further reduce the depths needed for adiabatic algorithms.

## V Conclusion

Taking inspiration from error correcting codes, we developed the idea of constructing sets of operators to stabilize constraint subspaces that arise in Ising forms of many computational problems.

Figure 6: Metrics for solution quality in graphs \(S_{4}\) (left) and \(S_{5}\) (right) compared between the bare non-Abelian adiabatic algorithm and the same algorithm using SSC with the isolated edge recovery applied at regular intervals. In this simulation, we evolve to time \(T=N^{2}\) with increasing numbers of Trotter steps.
The top plots of both images show the figure of merit, which is the average size of the independent sets over the size of the maximum independent set. The bottom plots compare the probability of finding each independent set size for the exact algorithm (left) and the algorithm with SSC (right). The figure of merit plots show that solution quality dramatically improves in far fewer Trotter steps as compared to the exact solution. An inspection of the probability of finding different IS solutions (bottom plots) shows that the figure of merit is boosted by a high probability of finding a good approximation to MIS. Asymptotically the solutions converge to the same probabilities as the error rate vanishes through the Trotter transition.

We provide a recipe for constructing these stabilizers in Appendix A. We explicitly explored these stabilizers for the problem of independent set, and demonstrated two techniques to utilize them for constrained problems: distribution preparation and depth reduction. The tools implemented in this work make use of new functionality available on NAQC hardware, and we believe that they will serve as useful building blocks for next generation algorithm development.

Using these tools, we were able to construct a distribution preparation algorithm that can exactly prepare Gibbs distributions over independent sets of a graph when coupled with a stopping condition. We found that for sub-critical values of the hardness parameter \(\lambda\), one may prepare some of these distributions in sub-linear time. We were also able to reduce the depth required to adiabatically prepare a good approximation to maximum independent set using SSC in conjunction with a specialized adiabatic algorithm.

In future work, we plan to provide a more rigorous analysis for many of the observations in this paper. We are also investigating deeper connections between classical and quantum partial rejection sampling, and possible algorithms that make use of distribution preparation. For SSC, a promising direction for application includes using it to prepare or maintain constrained subspaces that are not difficult to prepare, but are nevertheless embedded in optimization problems of industrial or technological value.

###### Acknowledgements.

We wish to acknowledge Eliot Kapit for discussion and comments during the early development of this technique. KP would like to acknowledge Tim Hseih for general discussions about adaptive circuits and Eric Jones for prior helpful discourse regarding the non-abelian adiabatic unitary that was ultimately used as an example in this paper.
2309.04150
Recovering Obstacles from their Travelling Times
Noakes and Stoyanov (2021) introduced a method of recovering strictly convex planar obstacles from their set of travelling times. We provide an extension of this construction for obstacles on Riemannian surfaces under some general curvature conditions. It is required that no smooth geodesic intersect more than two obstacles.
Tal Gurfinkel, Lyle Noakes, Luchezar Stoyanov
2023-09-08T06:27:47Z
http://arxiv.org/abs/2309.04150v1
# Recovering Obstacles from their Travelling Times

###### Abstract

Noakes and Stoyanov (2021) introduced a method of recovering strictly convex planar obstacles from their set of travelling times. We provide an extension of this construction for obstacles on Riemannian surfaces under some general curvature conditions. It is required that no smooth geodesic intersect more than two obstacles.

**Consider a strictly convex obstacle \(K\) on a Riemannian surface \(M\). \(K\) may be a disjoint union of finitely many strictly convex submanifolds of \(M\) of dimension 2 with smooth boundary. The set of travelling times of \(K\) is determined via scattering of geodesics in \(M\), reflecting on \(\partial K\), which emanate from an arbitrary strictly convex smooth curve \(\partial S\), which bounds \(K\) in \(M\). These geodesics may approximate light, pressure or other kinds of waves in a uniform medium, which reflect elastically on the body \(K\). The lengths of those geodesics beginning and ending in \(\partial S\) are called their travelling times, and they form the data which we are given about \(K\). In this paper we give a constructive method to recover \(K\) from its set of travelling times. We do so by constructing envelopes of smooth geodesics tangent to \(K\), which can be found by closely inspecting the set of travelling times. The components of \(K\) have to be arranged such that no smooth geodesic intersects more than two of the components. Otherwise ensuring the uniqueness of the recovered obstacles becomes prohibitively difficult, as determining which component of \(K\) the constructed envelope belongs to is not possible in general.**

## I Introduction

For the past 50 years, chaotic mathematical billiards have been studied extensively as an intriguing formulation of physical processes involving hard-ball elastic collisions. These are idealised systems involving a point particle moving at constant speed within a container, which reflects upon contact with the boundary of the container, according to the natural law "angle of reflection is equal to angle of incidence". The most famous example is Sinai's dispersing billiards on the torus, defined in his seminal paper (cf. Sinai (1970)). Since then there have been numerous variations to the shape of the containers of these billiards, which alter the dynamics wildly while still exhibiting similar chaotic properties, as outlined by Chernov and Markarian (2006). Generally these involve strictly convex boundaries, which are the crucial element in inducing the chaotic behaviour of the dynamics. More recently, variations on the equations of motion of the particle have also been considered, via introducing curvature to the space in which the particle moves, or in some cases introducing a magnetic field orthogonal to the space, acting on the particle (Berglund and Kunz (1996); Voros _et al._ (2003)).

In this paper we consider an inverse problem in mathematical billiards. Suppose, for example, we are tasked with extracting an ore body from a uniform medium, while minimising the amount of excess matter taken along with the valuable ore body. We may set off a series of small charges along a boundary around the ore body, recording the times the pressure waves from each explosion take to return to the boundary, as well as the return locations. Using this data we aim to recover the shape of the ore body, including the number of components it is made of, as well as their sizes and the distances separating them from one another.
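Before the formal setup, a toy computation conveys what this data looks like. The sketch below is entirely ours, in the flat Euclidean special case with a circular boundary and a single disc obstacle; it traces one billiard ray and returns its exit point and travelling time:

```python
import numpy as np

def circle_hits(p, d, c, R):
    """Sorted positive parameters t where p + t*d meets the circle (c, R)."""
    q = p - c
    b = np.dot(q, d)
    disc = b*b - (np.dot(q, q) - R*R)
    if disc < 0:
        return []
    r = np.sqrt(disc)
    return sorted(t for t in (-b - r, -b + r) if t > 1e-9)

def travelling_time(x, omega, c_obs, R_obs, R_S, max_refl=50):
    """Exit point and length of the billiard ray from boundary point x
    of S = {|p| <= R_S}, with inward direction omega, reflecting
    elastically on the obstacle circle (c_obs, R_obs)."""
    p = np.asarray(x, float)
    c = np.asarray(c_obs, float)
    d = np.asarray(omega, float) / np.linalg.norm(omega)
    length = 0.0
    for _ in range(max_refl):
        t_obs = circle_hits(p, d, c, R_obs)
        t_out = circle_hits(p, d, np.zeros(2), R_S)
        if t_obs and t_obs[0] < t_out[0]:
            t = t_obs[0]
            p, length = p + t*d, length + t
            n = (p - c) / R_obs            # outward normal at impact
            d = d - 2*np.dot(d, n)*n       # angle in = angle out
        else:
            t = t_out[0]
            return p + t*d, length + t
    return None                            # ray did not exit (trapped)
```

For instance, `travelling_time((0, -3), (0, 1), c_obs=(0, 0.5), R_obs=1, R_S=3)` hits the obstacle at normal incidence, reflects straight back, and yields the data \((x,y,t)=((0,-3),(0,-3),5)\). Sweeping the starting point and direction over the boundary produces the travelling-time set defined below.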
We formulate this problem mathematically, approximating the pressure waves as point particles, and show that under certain conditions one can recover the ore body (or more generally obstacle) exactly. A more precise formulation of the problem is as follows. Let \(M\) be a geodesically complete, 2-dimensional Riemannian manifold with injectivity radius \(\rho>0\). We say that a 2-dimensional submanifold \(W\) of \(M\) is strictly convex in \(M\) if the following two conditions are satisfied:

* Given any two points \(p,q\in W\), the smooth geodesic \(\gamma\) from \(p\) to \(q\) is contained entirely in \(W\).
* The curvature of the boundary \(\partial W\) is positive.

Let \(S\) be a 2-dimensional, strictly convex, compact submanifold of \(M\) with smooth boundary and diameter smaller than \(\rho\). This implies that between any two points in \(S\) there is a unique smooth geodesic in \(S\). Suppose \(K=K_{1}\cup\dots\cup K_{n}\) is a union of \(n\geq 2\) disjoint 2-dimensional, strictly convex submanifolds \(K_{i}\) of \(M\) with smooth boundary, contained within \(S\). Denote \(S_{K}=\overline{S\backslash K}\). Let \(\kappa_{S}\) be the maximal sectional curvature of \(S\) and \(\kappa_{K}\) the minimal curvature of \(\partial K\). Suppose that either \(\kappa_{S}<0\) or

\[\kappa_{S}>0,\quad\rho\sqrt{\kappa_{S}}<\frac{\pi}{2},\text{ and }\sqrt{\kappa_{S}}\tan(\rho\sqrt{\kappa_{S}})<\kappa_{K}.\]

These conditions ensure that convex fronts will remain convex after propagation via the billiard flow, see Vetier (1984); Kramli, Simanyi, and Szasz (1989). We consider (generalised) geodesics to be piecewise-smooth, constant speed curves in \(S_{K}\) which reflect on \(K\) according to the usual reflection law, and such that any smooth arc of the curve is a Riemannian geodesic, that is, a critical point of the energy functional

\[E(\gamma)=\int_{a}^{b}\frac{1}{2}\left|\left|\dot{\gamma}(t)\right|\right|^{2}\,dt,\]

where \(\gamma\) does not intersect \(\operatorname{Int}\,K\). This defines the billiard dynamical system on our curved billiard table \(S\). We denote the set of travelling times for \(K\) by \(\mathcal{T}\). That is, the set of all triples \((x,y,t)\) such that \(x,y\in\partial S\) and \(t\) is the length of some geodesic \(\gamma\) between \(x\) and \(y\).

In our curved billiard setting we consider the inverse problem of recovering \(K\) from its set of travelling times, as given by the billiard flow. The travelling times uniquely determine the obstacle \(K\) when \(M\) is Euclidean space of dimension \(3\) or larger, as shown by Noakes and Stoyanov (2015a). Moreover, one is able to recover the volume of \(K\) using a generalisation of Santalo's formula (cf. Stoyanov (2017)) when \(M\) is a Riemannian manifold of any dimension at least \(2\). Recently an approach by Bunimovich and Katz (2022) using the so-called layered-scattering technique allows one to determine the volume and curvature of obstacles with codimension \(2\) or greater. However, none of these approaches allows us to constructively recover the obstacle \(K\) from the travelling times \(\mathcal{T}\). A novel constructive method of recovering obstacles in Euclidean \(2\)-space was given in Noakes and Stoyanov (2021), provided the connected components \(K_{i}\) are in _general position_ (Figure 1). That is, no smooth geodesic in \(M\) will intersect more than two of the components \(K_{i}\).
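In the Euclidean special case of equal-radius discs, general position is easy to test numerically: a line meets three discs exactly when one of them meets the convex hull of the other two, and for equal radii that hull is a capsule around the segment of centers. A small sketch of our own (names hypothetical):

```python
import numpy as np

def dist_point_segment(p, a, b):
    """Distance from point p to the segment [a, b] in the plane."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def in_general_position(centers, r):
    """True iff no line meets three of the equal-radius discs:
    each disc must stay at distance >= 2r from every center segment."""
    cs = [np.asarray(c, float) for c in centers]
    n = len(cs)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(n):
                if k not in (i, j) and \
                   dist_point_segment(cs[k], cs[i], cs[j]) < 2 * r:
                    return False
    return True

print(in_general_position([(0, 0), (4, 0), (2, 5)], r=1))    # True
print(in_general_position([(0, 0), (2, 0), (4, 0)], r=0.5))  # False
```

The second configuration fails because the middle disc lies on the line through the outer two.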
We note that the general position condition is equivalent to requiring that Ikawa's no-eclipse condition is satisfied (cf. Ikawa (1988)). The clearest and most immediate consequence of this condition is the following fact:

**Remark.** If \(\gamma\) is a geodesic tangent to \(K\) at some point \(\gamma(t^{*})\), then \(\gamma(t^{*})\) is either the first or last reflection point of \(\gamma\).

To show that this holds, consider the case where \(\gamma(t^{*})\) is not the first or last reflection point of \(\gamma\) (Figure 2). Let \(\gamma(t^{*}_{-1})\) and \(\gamma(t^{*}_{+1})\) be the points of reflection before and after \(\gamma(t^{*})\). Then the segment \(\gamma|_{[t^{*}_{-1},t^{*}_{+1}]}\) is a smooth geodesic which intersects three obstacles, a contradiction to the general position condition.

In this paper we extend the results of Noakes and Stoyanov (2021) and outline how to reconstruct an obstacle in general position in the more general setting of a curved billiard table. We do so via constructing envelopes of smooth geodesics tangent to the obstacle. Relying on the aforementioned fact in the remark above, we can distinguish which component the geodesics are tangent to. This allows us to piece together the envelopes, ensuring we are correctly reconstructing the obstacle.

Given a point \(q=(x,\omega)\in T_{1}S_{K}\), let \(\gamma_{q}\) be the geodesic uniquely determined by \(q\). We say that \(\gamma_{q}\) is _non-trapped_ if there exist two distinct times \(t_{0},t^{\prime}_{0}\in\mathbb{R}\) such that \(\gamma_{q}(t_{0}),\gamma_{q}(t^{\prime}_{0})\in\partial S\). Otherwise we say that \(\gamma_{q}\) is _trapped_. We denote the set of all points \(q\in T_{1}S_{K}\) such that \(\gamma_{q}\) is trapped by \(Trap(S_{K})\), called the trapping set of \(K\). We cannot hope to recover an obstacle \(K\) where the trapping set has nonzero measure, since by definition there will be an open set of points on the boundary of \(K\) which cannot be detected in the set of travelling times of \(K\). It should be noted that the convexity condition on the obstacle \(K\) is sufficient to ensure that each connected component has an empty trapping set.

Obstacles with non-empty trapping sets are known to exist in Euclidean billiard tables of dimension \(m\geq 2\). We refer the reader to Noakes and Stoyanov (2016) for explicit constructions of such obstacles. One could reasonably conjecture that such trapping obstacles will also exist in a Riemannian manifold. In Example 1 we construct a trapping obstacle in a \(2\)-dimensional Riemannian manifold. Since the trapping set of an obstacle is known to be stable under \(C^{k}\) (\(k\geq 3\)) perturbations (Stoyanov (2017)), one can smoothly perturb one of the obstacles constructed in Noakes and Stoyanov (2016) to create a trapping obstacle in the Riemannian manifold derived from perturbing the metric in the same manner. There are no known constructions of trapping obstacles for a given Riemannian metric in dimensions higher than \(2\).

**Example 1.** Suppose \(M\) is a Riemannian manifold of dimension \(2\) with injectivity radius \(\rho>0\). Pick two points \(F_{1},F_{2}\in M\) such that \(d_{g}(F_{1},F_{2})<\rho\). Define the following ellipse-like curve:

\[E=\{x\in M:d_{g}(x,F_{1})+d_{g}(x,F_{2})=r\},\]

where \(r\in\mathbb{R}\) is a constant such that \(d_{g}(F_{1},F_{2})<r<\rho\). We will call such curves Riemannian ellipses. Note that such curves are in fact strictly convex.
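As a concrete illustration (not part of the argument), such a Riemannian ellipse can be traced numerically in the Poincaré half-plane, where the geodesic distance has the closed form used below; compare Figure 3. The sketch is ours:

```python
import numpy as np

def d_hyp(p, q):
    """Geodesic distance in the Poincare half-plane {(x, y) : y > 0}."""
    (x1, y1), (x2, y2) = p, q
    return np.arccosh(1 + ((x2 - x1)**2 + (y2 - y1)**2) / (2 * y1 * y2))

# foci and "string length" r, chosen with d(F1, F2) < r
F1, F2, r = (-0.5, 1.0), (0.5, 1.0), 1.5
assert d_hyp(F1, F2) < r

# sample the level set d(x, F1) + d(x, F2) = r on a grid
xs = np.linspace(-2, 2, 400)
ys = np.linspace(0.05, 3, 400)
X, Y = np.meshgrid(xs, ys)
F = d_hyp((X, Y), F1) + d_hyp((X, Y), F2) - r
ellipse = np.argwhere(np.abs(F) < 5e-3)      # crude zero-contour of E
```

A contour plot of `F` at level zero (e.g. `matplotlib.pyplot.contour(X, Y, F, [0])`) draws the closed convex curve \(E\).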
Consider the gradient of \(d_{g}(x,F_{i})\), \(i=1,2\):

\[grad_{x}\ d_{g}(x,F_{i})=-\dot{\gamma}_{(x,F_{i})}(0),\]

where \(\gamma_{(x,F_{i})}\) is the geodesic from \(x\) to the focus \(F_{i}\). Now we vary \(x\) along \(E\): let \(x_{h}\) be a smooth curve in \(E\) with \(x_{h}(0)=x\). Since every point in \(E\) satisfies the equation

\[d_{g}(x_{h},F_{1})+d_{g}(x_{h},F_{2})=r,\]

we may take the derivative to find the following relation:

\[\langle x^{\prime}_{h}(0),\dot{\gamma}_{(x,F_{1})}(0)+\dot{\gamma}_{(x,F_{2})}(0)\rangle=0.\]

Thus, any billiard ray passing through one of the foci must, after reflecting on \(E\), also pass through the other (Figure 3).

To construct a trapping obstacle from \(E\), we adapt Livshits' classical construction using a Euclidean ellipse. Consider the unique smooth geodesic \(\gamma\) from \(F_{1}\) to \(F_{2}\). Extending this geodesic from \(F_{1}\), denote the first (possibly only) intersection of \(\gamma\) with \(E\) by \(A_{1}\). Similarly, extending \(\gamma\) from \(F_{2}\) we denote the first intersection with \(E\) by \(A_{2}\).

Figure 1: Qualitatively, general position ensures that any two obstacles cannot ’obscure’ a third obstacle.

Now consider any ray \(\eta\) which crosses \(\gamma\) between \(F_{1}\) and \(F_{2}\), and let \(x\in E\) be the point of reflection of \(\eta\) on \(E\). Consider the billiard ray \(\beta\) from \(F_{1}\) to \(x\). By our previous argument, it follows that \(\beta\) must cross through \(F_{2}\) after reflecting at \(x\). Taking a normal coordinate chart about \(x\), large enough to contain \(E\), we see that \(\beta\) is composed of two straight lines, from \(F_{1}\) to \(x\) to \(F_{2}\), and \(\eta\) is a straight line from \(x\) intersecting \(\gamma\) between the foci. Since the angle of reflection of \(\eta\) at \(x\) is smaller than the angle of reflection of \(\beta\), it follows that the reflected ray \(\eta^{\prime}\) of \(\eta\) will also fall between the two straight lines of \(\beta\). Now since the exponential map is injective within our chart, \(\eta^{\prime}\) cannot intersect \(\beta\) and hence must intersect \(\gamma\) between the two foci \(F_{1}\) and \(F_{2}\).

Since it is homeomorphic to \(\mathbb{S}^{1}\), the Riemannian ellipse \(E\) can be split into two curves \(l_{1},l_{2}\), from \(A_{1}\) to \(A_{2}\). Extend \(l_{1}\) (Figure 4) from \(A_{1}\) to \(A_{2}\) to create a closed curve homeomorphic to \(\mathbb{S}^{1}\), such that:

* \(l_{1}\) does not intersect \(\gamma\) between \(A_{1}\) and \(A_{2}\) except at \(F_{1}\) and \(F_{2}\), where it is tangent to \(\gamma\)
* \(l_{1}\) does not obstruct rays from reaching \(\gamma\) between \(F_{1}\) and \(F_{2}\).

It follows that \(l_{1}\) has a trapping set of positive measure, since any ray with a reflection point on \(l_{1}\) between \(A_{1}\) and \(F_{1}\) must then reflect between \(A_{2}\) and \(F_{2}\), and vice versa by our argument above.

## II Recovering the number of components in \(K\)

Define the following submanifolds of the unit tangent bundles, \(T_{1}S\) and \(T_{1}S_{K}\), of \(S\) and \(S_{K}\) respectively:

\[T_{\partial}S=\{(x,\omega)\in T_{1}S:x\in\partial S\} \tag{1}\]

\[T_{\partial}S_{K}=\{(x,\omega)\in T_{1}S_{K}:x\in\partial S_{K}\} \tag{2}\]

Let \(\mathcal{F}_{t}:T_{1}S\times\mathbb{R}\to T_{1}S\) be the smooth geodesic flow (see e.g. Paternain (1999) for a definition).

Figure 3: A billiard ray through the foci of a Riemannian \(E\) in the Poincare half-plane.

Figure 2: A geodesic tangent to \(K\) with points of reflections both before and after the point of tangency.
**Lemma 1**.: _Suppose that \(\gamma\) is a smooth geodesic from \(x_{0}\in\partial S_{K}\) to \(y_{0}\in\partial S_{K}\), such that \(\gamma\) is not tangent to \(\partial S_{K}\) at any point and doesn't intersect \(\partial S_{K}\) at any points except \(x_{0}\) and \(y_{0}\). If \(\omega_{0}\) is the initial direction of \(\gamma\) then there are neighbourhoods \(W\) of \((x_{0},\omega_{0})\) in \(T_{1}S_{K}\) and \(V\) of \(y_{0}\) in \(\partial S_{K}\), and unique smooth functions \(y:W\to V\) and \(\tau:W\to\mathbb{R}^{+}\) such that \(y(x,\omega)=\pi_{1}\circ\mathcal{F}_{\tau(x,\omega)}(x,\omega)\) for all \((x,\omega)\in W\)._

Proof.: Let \(\phi\) be a local defining function for \(\partial S_{K}\) defined on a neighbourhood \(U\) of \(y_{0}\). That is, \(\phi^{-1}(0)=U\cap\partial S_{K}\). Let

\[Y(x,\omega,t)=\pi_{1}\circ\mathcal{F}_{t}(x,\omega),\]

restricted to \(T_{\partial}S_{K}\times\mathbb{R}\); then \(Y\) is well defined since \(M\) is geodesically complete, and smooth by definition. Finally, let \(t_{0}>0\) be such that \(Y(x_{0},\omega_{0},t_{0})=y_{0}\) and note that \(\phi(y_{0})=0\). Consider the composition \(\phi\circ Y\) and its derivative with respect to \(t\) at \(t_{0}\). Suppose that we fix \(x^{\prime}\), \(\omega^{\prime}\) and \(t^{\prime}\) such that \(\phi\circ Y(x^{\prime},\omega^{\prime},t^{\prime})=0\) and

\[\left.\frac{\partial}{\partial t}(\phi\circ Y(x^{\prime},\omega^{\prime},t))\right|_{t=t^{\prime}}=0.\]

This implies that

\[d\phi_{Y(x^{\prime},\omega^{\prime},t^{\prime})}(d_{t}Y(x^{\prime},\omega^{\prime},t^{\prime}))=0.\]

Then \(d_{t}Y\) at \((x^{\prime},\omega^{\prime},t^{\prime})\) is in the kernel of \(d\phi_{Y(x^{\prime},\omega^{\prime},t^{\prime})}\). But

\[\ker d\phi_{Y(x^{\prime},\omega^{\prime},t^{\prime})}=T_{Y(x^{\prime},\omega^{\prime},t^{\prime})}\partial S_{K}.\]

So the geodesic \(Y(x^{\prime},\omega^{\prime},t)\) is tangent to \(\partial S_{K}\) at \(Y(x^{\prime},\omega^{\prime},t^{\prime})\). By assumption, \(Y(x_{0},\omega_{0},t)\) is nowhere tangent to \(\partial S_{K}\). So

\[\left.\frac{\partial}{\partial t}(\phi\circ Y(x_{0},\omega_{0},t))\right|_{t=t_{0}}\neq 0.\]

Therefore we can apply the implicit function theorem to the function \(\phi\circ Y\) at the point \((x_{0},\omega_{0},t_{0})\) to find a unique function \(\tau\) defined on a neighbourhood \(W\) of \((x_{0},\omega_{0})\) such that \(\phi\circ Y(x,\omega,\tau(x,\omega))=0\) for all \((x,\omega)\in W\) and \(\tau(x_{0},\omega_{0})=t_{0}\). Now \(\mathcal{F}_{\tau(x,\omega)}\) is a diffeomorphism from \(W\) onto its image \(\widetilde{V}=\mathcal{F}_{\tau(x,\omega)}(W)\). Let \(V=\pi_{1}(\widetilde{V})\); then \(Y\) maps \(W\) onto \(V\) by definition. Finally, let \(y(x,\omega)=Y(x,\omega,\tau(x,\omega))\); then \(y:W\to V\) is the desired map.

**Remark**.: Our assumption that \(\mathrm{diam}(S)<\rho\) allows us to find a normal coordinate neighbourhood \(U\supset S\) centred around any point \(x\in S\). It follows that any two non-reflecting geodesics in \(S\) cannot intersect more than once. We use this fact throughout to prove the results.

**Lemma 2**.: _Suppose that \(\alpha:[0,1]\to S\) is a smooth geodesic such that \(\alpha(0),\alpha(1)\in\partial S\). Then \(S\backslash\alpha([0,1])\) has two connected components._

Proof.: Take a normal coordinate chart \(\psi:U\to V\subseteq\mathbb{R}^{2}\) about \(\alpha(0)\) such that \(S\subseteq U\). Then \(\psi(\alpha([0,1]))\) is a straight line in \(V\).
Now, \(\partial S\backslash\{\alpha(0),\alpha(1)\}\) has two path components, say \(\partial S_{1}\) and \(\partial S_{2}\). Let \(\beta=\partial S_{1}\cup\alpha([0,1])\). Then \(\psi(\beta)\) is a Jordan curve in \(V\). So by the Jordan curve theorem \(V\backslash\psi(\beta)\) has two connected components, i.e. \(\psi(S\backslash\alpha([0,1]))\) has exactly two connected components, one of which is bounded by \(\psi(\beta)\).

**Proposition 3**.: _For any pair \(K_{l}\), \(K_{j}\) of distinct obstacles, there are at most 4 undirected smooth (i.e. non-reflecting) geodesics which are tangent to both \(K_{l}\) and \(K_{j}\)._

Proof.: First we begin with some definitions. Let \(\gamma_{1},\gamma_{2}:[0,1]\to S\) be two distinct, directed smooth geodesics in \(S\), with \(x_{i}=\gamma_{i}(0)\in\partial S\) and \(y_{i}=\gamma_{i}(1)\in\partial S\), for \(i=1,2\). Now suppose that \(\gamma_{1}\) and \(\gamma_{2}\) are both tangent to the same obstacle \(K_{l}\), at two distinct points \(p_{1},p_{2}\in\partial K_{l}\) respectively. Parameterise \(\partial K_{l}\) in the anti-clockwise direction as \(k_{l}:\mathbb{S}^{1}\to S\). Let \(E(s,t)=\mathcal{F}_{t}(k_{l}(s),k_{l}^{\prime}(s))\). Then there is a smooth positive function \(t(s)>0\) such that \(F(s)=\pi_{1}\circ E(s,t(s))\) is a smooth diffeomorphism onto \(\partial S\). For \(\widetilde{E}(s,t)=\mathcal{F}_{-t}(k_{l}(s),k_{l}^{\prime}(s))\), there is a corresponding smooth positive function \(\widetilde{t}(s)>0\) such that \(\widetilde{F}(s)=\pi_{1}\circ\widetilde{E}(s,\widetilde{t}(s))\) is a diffeomorphism onto \(\partial S\). Note that \(F\) and \(\widetilde{F}\) are only well-defined when both \(K_{l}\) and \(S\) are strictly convex.

There is some \(s_{1}\in\mathbb{S}^{1}\) such that \(k_{l}(s_{1})=p_{1}\). Suppose that \(\langle\dot{\gamma}_{1},k_{l}^{\prime}(s_{1})\rangle>0\); otherwise we can replace \(\gamma_{1}(t)\) with \(\gamma_{1}(1-t)\). Then \(F(s_{1})=y_{1}\in\partial S\). Now, there is some \(a_{0}\in\mathbb{S}^{1}\) such that \(F(a_{0})=x_{1}\). Let \(\alpha:[0,1]\to S\) be the smooth geodesic such that \(\alpha(0)=x_{1}\) and \(\alpha\) is tangent to \(\partial K_{l}\) at \(k_{l}(a_{0})\). Consider the path components, \(S_{1}\) and \(S_{2}\), of \(\partial S\backslash\{x_{1},y_{1}\}\). Let \(S_{1}\) be the path component containing \(\alpha(1)\). Take a normal coordinate chart \(\phi:M\supseteq U\to V\) about \(x_{1}\) large enough so that \(S\subseteq U\). Then \(\phi(\gamma_{1}([0,1]))\) is the straight line in \(V\) from \(\phi(x_{1})\) to \(\phi(y_{1})\). By Lemma 2, \(\phi(S)\) is therefore split by \(\phi(\gamma_{1})\) into two path-components \(\phi(\widetilde{S}_{1})\) and \(\phi(\widetilde{S}_{2})\). Their boundaries are precisely \(\partial\widetilde{S}_{i}=S_{i}\cup\gamma_{1}([0,1])\), for \(i=1,2\). It follows that \(\alpha((0,1])\subseteq\widetilde{S}_{1}\), since \(\alpha\) and \(\gamma_{1}\) can intersect at most once. Since \(\gamma_{1}\) is tangent to \(\partial K_{l}\), and \(\partial K_{l}\) is strictly convex, \(\gamma_{1}\) intersects \(\partial K_{l}\) exactly once. Thus we must have \(\partial K_{l}\subseteq\overline{\widetilde{S}_{i}}\) for either \(i=1\) or \(i=2\). But \(\alpha\) is also tangent to \(\partial K_{l}\), and \(\alpha([0,1])\subseteq\overline{\widetilde{S}_{1}}\), therefore \(\partial K_{l}\subseteq\overline{\widetilde{S}_{1}}\) (Figure 5). For each \(s\in\mathbb{S}^{1}\) let \(\widetilde{G}_{s}(t)=\pi_{1}\circ E(s,t)\).
Figure 4: Half of \(E\) (Figure 3, in blue) extended (in red) to form a trapping obstacle.

Let \(G_{s}:[0,1]\to S\) be a re-parameterisation of \(\widetilde{G}_{s}\) such that \(G_{s}(0)=\widetilde{G}_{s}(-\widetilde{t}(s))\), \(G_{s}(1)=\widetilde{G}_{s}(t(s))\), and \(\frac{\partial}{\partial t}\widetilde{G}_{s}(t(s))=\lambda_{s}\frac{\partial}{\partial t}G_{s}(1)\) for some \(\lambda_{s}>0\). For each \(s\in\mathbb{S}^{1}\) there is some \(\omega_{s}\in[0,1]\) such that \(G_{s}(\omega_{s})=\widetilde{G}_{s}(0)\in\partial K_{l}\). Then by construction, \(G_{s}([\omega_{s},1])\subseteq\widetilde{S}_{1}\) for all \(s\in F^{-1}(S_{1})\). On the other hand, \(G_{s}([\omega_{s},1])\cap\widetilde{S}_{i}\neq\emptyset\) for \(i=1\) and \(2\), for all \(s\in F^{-1}(S_{2})\). That is, \(G_{s}\) intersects \(\gamma_{1}\) exactly once, for all \(s\in F^{-1}(S_{2})\). Similarly \(G_{s}([0,\omega_{s}])\cap\widetilde{S}_{2}\neq\emptyset\) for all \(s\in\widetilde{F}^{-1}(S_{2})\). Define the sets

\[G^{+}(\gamma_{1})=\{G_{s}:s\in F^{-1}(S_{2})\}\]
\[G^{-}(\gamma_{1})=\{G_{s}:s\in\widetilde{F}^{-1}(S_{2})\},\]

and their union

\[G(\gamma_{1})=\{G_{s}:s\in F^{-1}(S_{2})\cup\widetilde{F}^{-1}(S_{2})\}.\]

This is precisely the set of all geodesics tangent to \(\partial K_{l}\) with the same orientation as \(\gamma_{1}\), which intersect \(\gamma_{1}\) in the interior of \(S\). We will show later that the geodesics in \(G^{+}(\gamma_{1})\) differ from those in \(G^{-}(\gamma_{1})\) in terms of where they intersect \(\gamma_{1}\).

Now suppose that \(\gamma_{1}\) and \(\gamma_{2}\) are both tangent to \(\partial K_{l}\) and \(\partial K_{j}\), such that both \(\gamma_{1}\) and \(\gamma_{2}\) intersect \(\partial K_{l}\) before intersecting \(\partial K_{j}\). Let \(k_{j}:\mathbb{S}^{1}\to S\) be an anti-clockwise parameterisation of \(\partial K_{j}\). For \(i=1,2\) let \(\gamma_{i}(t_{i})=k_{l}(s_{i})\) and \(\gamma_{i}(t^{\prime}_{i})=k_{j}(s^{\prime}_{i})\) be the tangential intersection points of \(\gamma_{i}\) with \(\partial K_{l}\) and \(\partial K_{j}\) respectively, for some \(t_{i},t^{\prime}_{i}\in\mathbb{R}\) and \(s_{i},s^{\prime}_{i}\in\mathbb{S}^{1}\). Then \(0<t_{i}<t^{\prime}_{i}<1\) by assumption. Let \(r^{(l)}_{i}\) and \(r^{(j)}_{i}\) be the signs of \(\langle\dot{\gamma}_{i}(t_{i}),k^{\prime}_{l}(s_{i})\rangle\) and \(\langle\dot{\gamma}_{i}(t^{\prime}_{i}),k^{\prime}_{j}(s^{\prime}_{i})\rangle\) respectively. We claim that if \(r^{(l)}_{1}=r^{(l)}_{2}\) then \(r^{(j)}_{1}\neq r^{(j)}_{2}\). It suffices to show that this is true for \(r^{(l)}_{1}=1\), since the alternative case follows by replacing \(\gamma_{i}(t)\) with \(\gamma_{i}(1-t)\).

Assume that \(r^{(j)}_{1}=r^{(j)}_{2}\). We begin by defining analogous maps and sets for \(\partial K_{j}\) as follows. Let \(E^{\prime}(s,t)=\mathcal{F}_{t}(k_{j}(s),r^{(j)}_{1}k^{\prime}_{j}(s))\), and \(\widetilde{E}^{\prime}(s,t)=\mathcal{F}_{-t}(k_{j}(s),r^{(j)}_{1}k^{\prime}_{j}(s))\). Then as before, there are positive maps \(\tau(s),\widetilde{\tau}(s)>0\) such that \(F^{\prime}(s)=\pi_{1}\circ E^{\prime}(s,\tau(s))\) and \(\widetilde{F}^{\prime}(s)=\pi_{1}\circ\widetilde{E}^{\prime}(s,\widetilde{\tau}(s))\) are diffeomorphisms onto \(\partial S\). For each \(s\in\mathbb{S}^{1}\) let \(\widetilde{H}_{s}(t)=\pi_{1}\circ E^{\prime}(s,t)\).
Let \(H_{s}:[0,1]\to S\) be a re-parameterisation of \(\widetilde{H}_{s}\) such that \(H_{s}(0)=\widetilde{H}_{s}(-\widetilde{\tau}(s))\), \(H_{s}(1)=\widetilde{H}_{s}(\tau(s))\), and \(\frac{\partial}{\partial t}\widetilde{H}_{s}(\tau(s))=\lambda^{\prime}_{s}\frac{\partial}{\partial t}H_{s}(1)\) for some \(\lambda^{\prime}_{s}>0\). For each \(s\in\mathbb{S}^{1}\) there is some \(\mu_{s}\in[0,1]\) such that \(H_{s}(\mu_{s})=\widetilde{H}_{s}(0)\in\partial K_{j}\). Finally, define the sets

\[H^{+}(\gamma_{1})=\{H_{s}:s\in F^{\prime-1}(S_{2})\}\]
\[H^{-}(\gamma_{1})=\{H_{s}:s\in\widetilde{F}^{\prime-1}(S_{2})\},\]

and their union

\[H(\gamma_{1})=\{H_{s}:s\in F^{\prime-1}(S_{2})\cup\widetilde{F}^{\prime-1}(S_{2})\}.\]

We now have two cases, depending on the sign of \(r_{1}^{(j)}\). First, if \(r_{1}^{(j)}<0\) (Figure 6b), then \(\partial K_{j}\subseteq\overline{S_{2}}\), since both \(\partial K_{l}\) and \(\partial K_{j}\) were given anti-clockwise parameterisations. Now since \(\gamma_{2}\) is tangent to both \(\partial K_{l}\subseteq\overline{S_{1}}\) and \(\partial K_{j}\subseteq\overline{S_{2}}\), it must intersect \(\gamma_{1}\). Therefore \(\gamma_{2}=G_{s_{2}}=H_{s_{2}^{\prime}}\in G(\gamma_{1})\cap H(\gamma_{1})\). Note that this is only true due to the inclusion of the sign term \(r_{1}^{(j)}\) in the definition of \(E^{\prime}\) and \(\widetilde{E}^{\prime}\), which ensures that \(G_{s_{2}}\) and \(H_{s_{2}^{\prime}}\) have the same orientation as \(\gamma_{2}\). Now since \(\gamma_{2}\) intersects \(\partial K_{l}\) prior to \(\partial K_{j}\), it follows that \(G_{s_{2}}([\omega_{s_{2}},1])\cap\widetilde{S}_{2}\neq\emptyset\). Hence \(G_{s_{2}}\in G^{+}(\gamma_{1})\). Reasoning in a similar manner, \(H_{s_{2}^{\prime}}([0,\mu_{s_{2}^{\prime}}])\cap\widetilde{S}_{1}\neq\emptyset\), so \(H_{s_{2}^{\prime}}\in H^{+}(\gamma_{1})\).

Let \(z:G(\gamma_{1})\rightarrow\gamma_{1}([0,1])\) be the map sending each \(G_{s^{*}}\in G(\gamma_{1})\) to its intersection with \(\gamma_{1}\). We claim that \(z(G^{+}(\gamma_{1}))\subseteq\gamma_{1}([0,t_{1}])\). Consider \(\partial G^{+}(\gamma_{1})\). There are two smooth geodesics in \(\partial G^{+}(\gamma_{1})\), one of which is \(\gamma_{1}\). The other is \(G_{\widetilde{s}}\), where \(\widetilde{s}=F^{-1}(x_{1})\). We remark that therefore \(\partial G^{+}(\gamma_{1})\cap G(\gamma_{1})=\emptyset\). Now for any \(G_{s^{*}}\in G^{+}(\gamma_{1})\), the point of tangency \(G_{s^{*}}(\omega_{s^{*}})\in\partial K_{l}\) is contained in the region bounded by \(k_{l}([\widetilde{s},s_{1}])\), \(\gamma_{1}([0,t_{1}])\) and \(G_{\widetilde{s}}([\omega_{\widetilde{s}},1])\). Furthermore, \(G_{s^{*}}(1)\in S_{2}\), so \(G_{s^{*}}\) must intersect either \(\gamma_{1}([0,t_{1}])\) (in which case the claim is true), or \(G_{\widetilde{s}}([\omega_{\widetilde{s}},1])\). Suppose the latter case holds. \(G_{\widetilde{s}}\) splits \(\overline{S_{1}}\backslash G_{\widetilde{s}}([0,1])\) into two path components, \(S_{1}^{A}\) and \(S_{1}^{B}\) (by Lemma 2). Let \(\overline{S_{1}^{A}}\) be the component containing \(K_{l}\). Then for some \((a,b)\subseteq[\omega_{s^{*}},1]\), we have \(G_{s^{*}}((a,b))\subseteq S_{1}^{B}\). But \(\partial S_{1}^{B}\cap\partial\widetilde{S}_{2}=\{x_{1}\}\), so \(G_{s^{*}}\) cannot reach \(\widetilde{S}_{2}\) without intersecting \(S_{1}^{A}\). That is, \(G_{s^{*}}([b,1])\cap S_{1}^{A}\neq\emptyset\), since \(G_{s^{*}}(1)\in S_{2}\).
Therefore \(G_{s^{*}}\) must intersect \(G_{\widetilde{s}}\) twice, a contradiction, which shows that our claim holds. Note that a similar argument shows that \(z(G^{-}(\gamma_{1}))\subseteq\gamma_{1}([t_{1},1])\). Hence \(G_{s_{2}}\) intersects \(\gamma_{1}\) prior to the first tangency \(\gamma_{1}(t_{1})\). Similarly, if \(\tilde{z}:H(\gamma_{1})\rightarrow\gamma_{1}([0,1])\) is the intersection between \(H_{s^{*}}\) and \(\gamma_{1}\) for any \(H_{s^{*}}\in H(\gamma_{1})\), then \[\tilde{z}(H^{+}(\gamma_{1}))\subseteq\gamma_{1}([t_{1}^{\prime},1])\text{ and }\tilde{z}(H^{-}(\gamma_{1}))\subseteq\gamma_{1}([0,t_{1}^{\prime}]).\] Note that if \(r_{1}^{(j)}>0\) then \(H^{+}(\gamma_{1})\) and \(H^{-}(\gamma_{1})\) swap. Recalling that \(\gamma_{2}=G_{s_{2}}=H_{s_{2}^{\prime}}\in G^{+}(\gamma_{1})\cap H^{+}(\gamma_{1})\), we have \(z(\gamma_{2})\in\gamma_{1}([0,t_{1}])\), while \(\widetilde{z}(\gamma_{2})\in\gamma_{1}([t^{\prime}_{1},1])\). That is, \(z(\gamma_{2})\neq\widetilde{z}(\gamma_{2})\), meaning that \(\gamma_{2}\) must intersect \(\gamma_{1}\) twice, a contradiction. We now consider the case where \(r_{1}^{(j)}>0\) (Figure 6a). In this case, both \(\partial K_{l}\) and \(\partial K_{j}\) are in the same connected component \(\overline{S_{1}}\). Suppose that \(\gamma_{2}\) intersects \(\gamma_{1}\); then, as in the previous case, we have \(\gamma_{2}=G_{s_{2}}=H_{s_{2}^{\prime}}\in G(\gamma_{1})\cap H(\gamma_{1})\). Also let \(z\) and \(\tilde{z}\) be defined in the same way as in the previous case. Since \(\partial K_{j}\) is in the same component of \(S\) as \(\partial K_{l}\), and \(\gamma_{2}\) can intersect \(\gamma_{1}\) at most once, it follows that the intersection must occur after \(\gamma_{2}\) is tangent to \(\partial K_{j}\). That is, \(z(G_{s_{2}})=\tilde{z}(H_{s_{2}^{\prime}})\in\gamma_{2}([t^{\prime}_{2},1])\). Note that by construction, \(z(G_{s^{*}})\in G_{s^{*}}([0,\omega_{s^{*}}])\) for any \(G_{s^{*}}\in G^{-}(\gamma_{1})\). Therefore, \(G_{s_{2}}\in G^{+}(\gamma_{1})\), since \(z(G_{s_{2}})\notin G_{s_{2}}([0,\omega_{s_{2}}])\). Now \(G_{s_{2}}\in G^{+}(\gamma_{1})\), as in the previous case, implies that \(z(G_{s_{2}})\in\gamma_{1}([0,t_{1}])\). Therefore it follows that \(z(G_{s_{2}})\in\gamma_{1}([0,t_{1}])\cap\gamma_{2}([t^{\prime}_{2},1])\). Pick \(t^{*}_{1},t^{*}_{2}\in[0,1]\) such that \(\gamma_{1}(t^{*}_{1})=\gamma_{2}(t^{*}_{2})\) is the point of intersection between \(\gamma_{1}\) and \(\gamma_{2}\). Then it follows that \(\partial K_{j}\) must be contained in the region bounded by \(\gamma_{1}([t^{*}_{1},t_{1}])\), \(\gamma_{2}([t^{\prime}_{2},t^{*}_{2}])\) and \(k_{l}([s_{1},s_{2}])\) (Figure 7). But then \(t^{\prime}_{1}\in[t^{*}_{1},t_{1}]\), a contradiction since we assumed that \(t^{\prime}_{1}>t_{1}\). Therefore \(\gamma_{2}\) cannot intersect \(\gamma_{1}\). Finally, suppose that \(\gamma_{2}\) does not intersect \(\gamma_{1}\). Then \(\gamma_{2}\not\in G(\gamma_{1})\cup H(\gamma_{1})\). Note that by assumption \(t^{\prime}_{2}>t_{2}\); hence \(\partial K_{j}\) must be contained within the region bounded by \(\gamma_{2}([t^{\prime}_{2},1])\), \(\gamma_{1}([0,t_{1}])\), \(k_{l}([s_{1},s_{2}])\) and \(\partial S\). But then \(t^{\prime}_{1}\in[0,t_{1}]\), once again a contradiction since we assumed that \(t^{\prime}_{1}>t_{1}\). Thus our claim that \(r_{1}^{(j)}\neq r_{2}^{(j)}\) whenever \(r_{1}^{(l)}=r_{2}^{(l)}\) is proved.
Therefore there can be at most eight directed smooth geodesics which are tangent to \(\partial K_{l}\) and \(\partial K_{j}\), depending on the pair of signs \(r_{1}^{(l)}\), \(r_{1}^{(j)}\) and on which obstacle they intersect first: two choices for each of these three attributes give \(2\times 2\times 2=8\) directed geodesics, and reversing orientation identifies them in pairs. That is, there are at most 4 undirected smooth geodesics which are tangent to both \(\partial K_{l}\) and \(\partial K_{j}\). **Proposition 4**.: _Suppose that \(n\geq 3\). Let \(K_{l}\) and \(K_{j}\) be two distinct obstacles. Then there are at least 4 undirected smooth geodesics which are tangent to both \(K_{l}\) and \(K_{j}\)._ Proof.: First, we show that there exists at least one smooth geodesic tangent to \(\partial K_{l}\) which does not intersect \(\partial K_{j}\). Let \(K_{m}\) be any third obstacle distinct from both \(K_{l}\) and \(K_{j}\). Pick any point \(q\in\partial K_{m}\) and take a normal coordinate chart \(\psi:U\to V\subseteq\mathbb{R}^{2}\) about \(q\) such that \(S\subseteq U\). Then there is some straight line \(L\subset V\) intersecting \(\psi(\partial K_{m})\) at \(\psi(q)\), which is tangent to \(\psi(\partial K_{l})\). This line corresponds to a smooth geodesic \(\gamma=\psi^{-1}(L)\) through \(q\) which is tangent to \(\partial K_{l}\) in \(U\). Note that \(\gamma\) may intersect \(\partial K_{m}\) more than once. Since our obstacles are in general position (by assumption), it follows that \(\gamma\) cannot intersect any other obstacle apart from \(K_{l}\) and \(K_{m}\). Thus \(\gamma\) is a smooth geodesic tangent to \(\partial K_{l}\) which does not intersect \(\partial K_{j}\). Note that by the same argument there is some smooth geodesic \(\gamma^{\prime}\) which intersects \(\partial K_{j}\) and is tangent to \(\partial K_{l}\). Parameterise \(\partial K_{l}\) in an anti-clockwise fashion as \(k_{l}:\mathbb{S}^{1}\to\partial K_{l}\). Define the function \(E:T\partial K_{l}\times\mathbb{R}\to TS\) by \((s,t)\mapsto\mathcal{F}_{t}(k_{l}(s),k^{\prime}_{l}(s))\). By the argument in the previous paragraph, there is some \(q\in\partial K_{j}\) and a smooth geodesic \(\gamma\) through \(q\) which is tangent to \(\partial K_{l}\). By the implicit function theorem there are open sets \(W\subseteq\partial K_{l}\) and \(Z\subseteq\partial K_{j}\) such that \(q\in Z\), along with a unique diffeomorphism \(\phi:W\to Z\) such that the smooth geodesic starting at \(k_{l}(s)=w\in W\) in the direction \(k^{\prime}_{l}(s)\) first intersects \(\partial K_{j}\) at \(\phi(w)\). That is, if \(t(s)>0\) is the distance between \(w\) and \(\phi(w)\) then \(\phi(w)=\pi_{1}\circ E(s,t(s))\). Furthermore, the intersections of the geodesics \(F_{s}(t)=\pi_{1}\circ E(s,t)\) with \(\partial K_{j}\) at \(\phi(k_{l}(s))\) are transversal. Provided that \(F_{s}(t)\) remains transversal to \(\partial K\) for all \(s\in k_{l}^{-1}(W)\), we can extend \(\phi\) by expanding \(Z\) and hence \(W\) (see the proof of Lemma 1 for this construction). Suppose that there are no smooth geodesics which are tangent to both \(\partial K_{l}\) and \(\partial K_{j}\). It follows that \(Z=\partial K_{j}\), and since \(W\) is diffeomorphic to \(Z\), it follows that \(W=\partial K_{l}\). But then for every point \(w\in\partial K_{l}\) the geodesic \(F_{k_{l}^{-1}(w)}\), tangent to \(\partial K_{l}\), will intersect \(\partial K_{j}\) at \(\phi(w)\) (transversally) by construction. This contradicts our argument above, since there must be some smooth geodesic tangent to \(K_{l}\) which does not intersect \(K_{j}\).
Hence there must be at least one bitangent smooth geodesic, tangent to both \(\partial K_{l}\) and \(\partial K_{j}\). Suppose that there is only one bitangent smooth geodesic, \(\alpha^{*}:[0,1]\to S\), tangent to both \(\partial K_{l}\) and \(\partial K_{j}\). Let \(p\in\partial K_{j}\) be the point of tangency of \(\alpha^{*}\); it follows that \(Z=\partial K_{j}\backslash\{p\}\). Parameterise \(W\) in the anti-clockwise direction as \(w:(0,1)\to W\). Let \(w^{*}=\lim\limits_{s\to 1}w(s)\); then \(p=\lim\limits_{w\to w^{*}}\phi(w)\). Note that since there is only one point of tangency, we must also have \(\lim\limits_{s\to 0}w(s)=w^{*}\). Now taking a normal coordinate chart \(\psi\) as before, centred about \(w^{*}\), we have that \(\psi(\alpha^{*}([0,1]))\) is a straight line, which we may extend in both directions until intersecting \(\psi(\partial S)\) at both ends. Denote this extended smooth geodesic by \(\widetilde{\alpha}^{*}:[a,b]\to S\). Then by Lemma 2, \(S\backslash\widetilde{\alpha}^{*}([a,b])\) has two connected components. Denote these two components by \(\tilde{S}_{1}\) and \(\tilde{S}_{2}\). Suppose that \(K_{l}\) and \(K_{j}\) are both in the same component \(\tilde{S}_{1}\). Re-parameterise \(\widetilde{\alpha}^{*}\) as \(\gamma_{1}:[0,1]\to S\), so that \(\frac{\partial}{\partial s}\widetilde{\alpha}^{*}(a)=\lambda\dot{\gamma}_{1}(0)\) for some \(\lambda>0\). Then using the same notation as in Proposition 3, consider the set \(G^{+}(\gamma_{1})\) (Figure 6a). Recall that for each \(s\in\mathbb{S}^{1}\) there is some \(\omega_{s}\in[0,1]\) such that \(G_{s}(\omega_{s})\in\partial K_{l}\) is the point of tangency of \(G_{s}\) with \(\partial K_{l}\). Also let \(t_{1}^{\prime}>t_{1}>0\) be such that \(\gamma_{1}(t_{1})\in\partial K_{l}\) and \(\gamma_{1}(t_{1}^{\prime})\in\partial K_{j}\) are the points of tangency of \(\gamma_{1}\) with \(\partial K_{l}\) and \(\partial K_{j}\) respectively. As shown in Proposition 3, for any \(G_{s}\in G^{+}(\gamma_{1})\), the intersection \(z(G_{s})\) between \(G_{s}\) and \(\gamma_{1}\) must lie in \(\gamma_{1}([0,t_{1}])\). Now since \(Z=\partial K_{j}\backslash\{p\}\), it follows that for every \(G_{s}\in G^{+}(\gamma_{1})\), the segment \(G_{s}([\omega_{s},1])\) intersects \(\partial K_{j}\) transversally. Let \(1>\sigma_{s}>\theta_{s}>\omega_{s}\) be such that \(G_{s}(\sigma_{s})\in\partial K_{j}\) and \(G_{s}(\theta_{s})=z(G_{s})\) are the points of intersection of \(G_{s}\) with \(\partial K_{j}\) and \(\gamma_{1}\) respectively. Then \(G_{s}([\omega_{s},\theta_{s}))\subseteq\tilde{S}_{1}\), while \(G_{s}((\theta_{s},\sigma_{s}])\cap\tilde{S}_{i}\neq\emptyset\) for \(i=1,2\). It follows that \(G_{s}((\theta_{s},\sigma_{s}])\) must also intersect \(\gamma_{1}([0,1])\). Thus \(G_{s}\) intersects \(\gamma_{1}\) more than once, leading to a contradiction. Now suppose that \(K_{l}\) and \(K_{j}\) are in two separate components, \(\partial K_{l}\subseteq\overline{\tilde{S}_{1}}\) and \(\partial K_{j}\subseteq\overline{\tilde{S}_{2}}\). Once again using the same notation as Proposition 3, consider the set \(G^{-}(\gamma_{1})\) (Figure 6b). Let \(G_{s}(\omega_{s})\in\partial K_{l}\), \(G_{s}(\sigma_{s})\in\partial K_{j}\) and \(G_{s}(\theta_{s})=z(G_{s})\) denote the same points as in the previous case. Then by construction, \(G_{s}([\omega_{s},1])\cap\tilde{S}_{i}\neq\emptyset\) for \(i=1,2\), since \(\omega_{s}<\sigma_{s}\) and \(\partial K_{l}\) and \(\partial K_{j}\) are in different components.
However, as shown in Proposition 3, \(z(G_{s})\in G_{s}([0,\omega_{s}])\). That is, \(0<\theta_{s}<\omega_{s}<\sigma_{s}\). So once again, \(G_{s}\) must intersect \(\gamma_{1}\) at two distinct points, leading to a contradiction. Hence there must be at least two bitangent smooth geodesics when parameterising in the anti-clockwise direction. Taking the parameterisation in the clockwise direction we find at least two bitangents once again. We claim that the pairs of bitangents must be distinct. Suppose that \(\gamma^{+}(t)\) is a bitangent smooth geodesic starting at \(k_{l}(s)\in\partial K_{l}\) with initial direction \(k_{l}^{\prime}(s)\). Let \(\gamma^{-}(t)\) be the smooth geodesic starting at \(k_{l}(s)\) with initial direction \(-k_{l}^{\prime}(s)\). Define \(\gamma(t)=\gamma^{+}(t)\) for \(t\geq 0\) and \(\gamma(t)=\gamma^{-}(-t)\) for \(t<0\). Now since \(K_{j}\) is strictly convex, and \(\gamma(t)\) is tangent to \(\partial K_{j}\) at \(\phi(k_{l}(s))\), the smooth geodesic \(\gamma(t)\) must intersect \(\partial K_{j}\) exactly once. Since \(\gamma(t)\) cannot self intersect, it follows that \(\gamma^{-}(t)\) is not tangent to \(\partial K_{j}\), and in fact does not intersect \(\partial K_{j}\) anywhere. Hence the four bitangent smooth geodesics must be distinct. Let \(\mathcal{T}_{i}^{j}\subset\mathcal{T}\) be the set of travelling times generated by geodesics which reflect off \(\partial K\) exactly \(i\) times and are tangent to exactly \(j\) connected components of \(K\). Since we have assumed that \(K\) is in general position, it follows that \(\mathcal{T}_{i}^{j}=\emptyset\) for all \(j\geq 3\). Consider the set of travelling times \(\mathcal{T}_{0}\) which are generated by smooth geodesics. Then we know \(\mathcal{T}_{0}=\cup_{j=0}^{2}\mathcal{T}_{0}^{j}\). **Corollary 4.1**.: _Suppose that \(n\geq 3\). The set \(\mathcal{T}_{0}^{2}\) contains exactly \(4n(n-1)\) discrete points, and \(\mathcal{T}_{0}^{1}\) is the union of \(4n(n-1)\) open arcs._ Proof.: Each obstacle has exactly \(4(n-1)\) geodesics which are tangent to both it and another obstacle. In total this gives \(4n(n-1)\) such (directed) geodesics. Furthermore, each obstacle has \(4(n-1)\) points of double-tangency. Every pair of successive points determines an open arc of points along the boundary of the obstacle which generate geodesics tangent only to that obstacle. Every such arc determines an open arc of travelling times, and the union of these open arcs is exactly \(\mathcal{T}_{0}^{1}\). Therefore \(\mathcal{T}_{0}^{1}\) is the union of \(4n(n-1)\) disjoint open arcs. Corollary 4.1 allows us to determine the number of obstacles \(n\) directly from the travelling times in a rather practical manner. Computing \(\mathcal{T}_{0}^{2}\) requires a minimal amount of data in comparison to computing the sets \(\mathcal{T}_{i}^{1}\) which are used to recover \(K\). The amount of data required to identify \(\mathcal{T}_{i}^{1}\) increases by an order of magnitude for each \(i\geq 0\) (see Example 2). Note that in the case where there are only two obstacles (\(n=2\)), only the upper bound given in Proposition 3 holds. However, this is sufficient to conclude there are only two obstacles, since there will be at most 8 points in \(\mathcal{T}_{0}^{2}\). 
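As a quick illustration of how Corollary 4.1 determines \(n\) in practice (the computation below is ours, not part of the original statement): writing \(m\) for the number of points in \(\mathcal{T}_{0}^{2}\), the relation \(m=4n(n-1)\) is a quadratic in \(n\) whose positive root is

\[n=\frac{1+\sqrt{1+m}}{2},\]

so observing, for instance, \(m=24\) double-tangency travelling times implies \(n=3\) obstacles.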
## III The structure of the set of travelling times \(\mathcal{T}\)

We define the map \[I:T_{\partial}S_{K}\to T_{\partial}S_{K}\] \[I(x,\omega)=\left(x,\omega-2\left\langle\omega,v_{x}\right\rangle v_{x}\right) \tag{3}\] where \(v_{x}\) is the unit normal to \(\partial S_{K}\) at \(x\). This is the inversion map, reflecting the direction \(\omega\) about the normal to the boundary. Note that \(I\) is a diffeomorphism. **Lemma 5**.: _Let \(\gamma\) be a generalised geodesic in \(S_{K}\) generated by \((x_{0},\omega_{0})\in T_{\partial}S_{K}\) with successive reflection points \(x_{1},\ldots,x_{n}\) on \(\partial S_{K}\). Then there are neighbourhoods \(W\) of \((x_{0},\omega_{0})\in T_{\partial}S_{K}\), and \(V_{i}\) of \(x_{i}\) in \(\partial S_{K}\), and unique smooth maps_ \[x_{i}(x,\omega):W\to V_{i}\] _such that any generalised geodesic generated by \((x,\omega)\in W\) will have successive reflection points \(x_{i}(x,\omega)\)._ Proof.: By Lemma 1 there exist a neighbourhood \(W\) of \((x_{0},\omega_{0})\) and a map \(\tau_{1}\) from \(W\) to \(\mathbb{R}^{+}\) such that \(\mathcal{F}_{\tau_{1}(x,\omega)}\) is a diffeomorphism from \(W\) onto a neighbourhood \(U_{1}\) of \((x_{1},\omega_{1})\) along the boundary \(\partial K\). Now by Lemma 1, for each \(i=1,\ldots,n\) there is a neighbourhood \(\widetilde{U}_{i-1}\subseteq I(U_{i-1})\) and a map \(\tau_{i}:\widetilde{U}_{i-1}\to\mathbb{R}^{+}\) such that \(\mathcal{F}_{\tau_{i}(x,\omega)}\) is a diffeomorphism from \(\widetilde{U}_{i-1}\) onto a neighbourhood \(U_{i}\) of \((x_{i},\omega_{i})\) along the boundary \(\partial K\). We shrink each \(U_{i}\) so that \(I(U_{i})=\widetilde{U}_{i}\), and shrink \(W\) so that \(\mathcal{F}_{\tau_{1}(x,\omega)}(W)=U_{1}\). Now define maps \(X_{i}:W\to U_{i}\) recursively as follows: \[X_{1}(x,\omega) =\mathcal{F}_{\tau_{1}(x,\omega)}(x,\omega) \tag{4}\] \[X_{i}(x,\omega) =\mathcal{F}_{\tau_{i}(I\circ X_{i-1}(x,\omega))}(I\circ X_{i-1}(x,\omega)) \tag{5}\] Note that each \(X_{i}\) is a diffeomorphism. Finally, the desired maps are \[x_{i}=\pi_{1}\circ X_{i}:W\to\pi_{1}(U_{i})\qed\] We will use Lemmas 6 to 9 often; see Gurfinkel, Noakes, and Stoyanov (2020) for their proofs: **Lemma 6**.: _Suppose that \(\gamma\) is a non-trapped generalised geodesic in \(S_{K}\) from \(x\in\partial S\) to \(y\in\partial S\). Then \(grad_{x}T=-\dot{\gamma}(t_{0})/\left\lVert\dot{\gamma}(t_{0})\right\rVert\), where \(T(x,y)\) is the length of the geodesic \(\gamma\)._ **Lemma 7**.: _Fix \(x_{0}\in\partial S\). The set of pairs of distinct directions \(\omega_{1},\omega_{2}\in T_{0}S_{x_{0}}\) which generate generalised geodesics with the same endpoints and the same travelling time is countable._ **Lemma 8**.: _Suppose \(c:[a,b]\to M\) is a smooth, unit speed, strictly convex curve. For each \(u_{0}\in[a,b]\) there exists a smooth, strictly convex curve \(y\) on a neighbourhood of \(u_{0}\) such that \(\frac{\partial}{\partial u}y(u)\) is orthogonal to the parallel translate of \(\frac{\partial}{\partial u}c(u)\) along the geodesic from \(c(u)\) in the direction \(\frac{\partial}{\partial u}c(u)\)._ **Lemma 9**.: _Let \(\gamma\) be a generalised geodesic in \(S_{K}\). Suppose there are two convex fronts, \(X\) and \(Y\), such that \(\dot{\gamma}(0)\) is the outward unit normal to \(X\) and for some \(t_{0}>0\), the velocity \(\dot{\gamma}(t_{0})\) is the inward unit normal to \(Y\). Also suppose that \(\gamma\) reflects transversally between \(X\) and \(Y\)._
_Parameterise \(X\) as_ \[x:[a,b]\to S_{K}\] _with \(x(u_{0})=\gamma(0)\) and unit outward normal \(\omega(u)\). Then there exists an open set \(U\subseteq[a,b]\) containing \(u_{0}\) such that \((x(u_{0}),\omega(u_{0}))\) generates a geodesic that hits \(Y\) orthogonally, and \((x(u),\omega(u))\) does not, for all \(u\in U\backslash\{u_{0}\}\)._ Let \(\gamma_{(x,\omega)}\) be the geodesic such that \(\gamma_{(x,\omega)}(0)=x\) and \(\dot{\gamma}_{(x,\omega)}(0)=\omega\), for \((x,\omega)\in T_{0}S\backslash Trap(S_{K})\). Let \(t(x,\omega)\) be the travelling time of \(\gamma_{(x,\omega)}\). Define the endpoint map as follows: \[\mathcal{P}(x,\omega)=(\gamma_{(x,\omega)}(t(x,\omega)),\dot{\gamma}_{(x,\omega)}(t(x,\omega)))\in T_{\partial}S \tag{6}\] Note that if \(\gamma_{(x_{0},\omega_{0})}\) is nowhere tangent to \(\partial K\), then the restriction of \(\mathcal{P}\) to a neighbourhood of \((x_{0},\omega_{0})\) is a diffeomorphism by Lemma 5. **Lemma 10**.: _Suppose that \(\gamma_{(x_{0},\omega_{0})}\) is nowhere tangent to \(\partial K\), and denote \((x^{\prime},\omega^{\prime})=\mathcal{P}(x_{0},\omega_{0})\). Then there exists a neighbourhood \(U\) of \(x_{0}\) in \(\partial S\) and a map \(\tau:U\to\mathbb{R}\) such that for all \(x\in U\), we have \((x,x^{\prime},\tau(x))\in\mathcal{T}\) and_ \[\pi_{1}\circ\mathcal{P}(x,-\nabla\tau(x))=x^{\prime}\] Proof.: We begin by keeping all the definitions as in the proof of Lemma 5. Suppose \(\gamma_{(x_{0},\omega_{0})}\) reflects \(k\) times; then \(x^{\prime}=x_{k+1}(x_{0},\omega_{0})\). We look at the final smooth geodesic section, restricting \(\pi_{1}\circ\mathcal{F}_{t}\) to \(\widetilde{U}_{k}\). Note that \(\pi_{1}\circ\mathcal{F}_{t}\) is a submersion, so taking \(\mathcal{U}=(\pi_{1}\circ\mathcal{F}_{t})^{-1}(x^{\prime})\) gives a codimension 1 submanifold of \(\widetilde{U}_{k}\). Thus \({X_{k}}^{-1}(\mathcal{U})\) is a codimension 1 submanifold of \(W\). Note that the set of directions which give \(x_{k+1}(x,\omega)=x^{\prime}\) for each fixed \(x\in W\) is countable and discrete (by Lemma 7). Hence we can shrink \({X_{k}}^{-1}(\mathcal{U})\) around \((x_{0},\omega_{0})\) such that if both \((x,\omega),(x,\omega^{\prime})\in{X_{k}}^{-1}(\mathcal{U})\) then \(\omega=\omega^{\prime}\). We can now define a function \(\phi:\pi_{1}({X_{k}}^{-1}(\mathcal{U}))\to\pi_{2}({X_{k}}^{-1}(\mathcal{U}))\) by letting \(\phi(x)\) be the unique direction such that \((x,\phi(x))\in{X_{k}}^{-1}(\mathcal{U})\). Also let \(\tau:\pi_{1}({X_{k}}^{-1}(\mathcal{U}))\to\mathbb{R}\) be the function defined by setting \(\tau(x)\) as the unique travelling time determined by \((x,\phi(x))\), so that \((x,x^{\prime},\tau(x))\in\mathcal{T}\). Note that both \(\phi\) and \(\tau\) are smooth. It now follows from Lemma 5 in Gurfinkel, Noakes, and Stoyanov (2020) that \(\nabla\tau(x)=-\phi(x)\), so \(\pi_{1}\circ\mathcal{P}(x,-\nabla\tau(x))=x^{\prime}\) and the proof is complete. We now define another set, a graph of the set of travelling times for every \(x_{1}\in\partial S\): \[\mathcal{T}_{i}^{j}(x_{1})=\{(x,t):(x,x_{1},t)\in\mathcal{T}_{i}^{j}\}\] \[\mathcal{T}^{j}(x_{1})=\cup_{i=0}^{\infty}\mathcal{T}_{i}^{j}(x_{1})\] That is, \(\mathcal{T}_{i}^{j}(x_{1})\) is the set of travelling times of geodesics which end at \(x_{1}\), reflect exactly \(i\) times and are tangent to \(\partial K\) exactly \(j\) times. The order \(o(\gamma)\) of a geodesic \(\gamma\) is the number of intersections between \(\gamma\) and \(\partial K\).
Let \(d_{K}\) be the minimum distance between obstacles in \(K\), and \(t(\gamma)\) be the travelling time of \(\gamma\). Then we have \[o(\gamma)d_{K}\leq t(\gamma)\leq o(\gamma)diam(S) \tag{7}\] **Proposition 11**.: _For every \(x_{1}\in\partial S\), the set \(\mathcal{T}^{0}(x_{1})\) is a countable union of pairwise-transverse, smooth, bounded open arcs \(\alpha_{i}\) in \(\partial S\times\mathbb{R}\). Furthermore, each arc \(\alpha_{i}\) has a corresponding \(\tau_{i}:U_{i}\to\mathbb{R}\) such that:_ * \(U_{i}\) _is an open arc in_ \(\partial S\) _and_ \(x\mapsto(x,\tau_{i}(x))\) _is a diffeomorphism from_ \(U_{i}\) _onto_ \(\alpha_{i}\)_._ * _For every_ \(x\in U_{i}\) _the geodesic generated by_ \((x,-\nabla\tau_{i}(x))\) _is nowhere tangent to_ \(\partial K\) _and intersects_ \(\partial S\) _at_ \(x_{1}\)_._ Proof.: Take \((x_{0},t_{0})\in\mathcal{T}^{0}(x_{1})\). By Lemma 7 there are countably many \(\omega_{i}\) such that \(\gamma_{(x_{0},\omega_{i})}\) starts from \(x_{0}\), ends at \(x_{1}\) and has travelling time \(t_{0}\). By Lemma 10, for every \(\omega_{i}\) there exists a neighbourhood \(U_{i}\) of \(x_{0}\) in \(\partial S\) and a map \(\tau_{i}:U_{i}\to\mathbb{R}\) such that for all \(x\in U_{i}\) we have \(\pi_{1}\circ\mathcal{P}(x,-\nabla\tau_{i}(x))=x_{1}\). It now follows that the map \(x\mapsto(x,\tau_{i}(x))\) is a diffeomorphism onto an open arc \(\alpha_{i}\). Furthermore, the arcs are pairwise-transverse at \(x_{0}\), since \(\nabla\tau_{i}(x_{0})=-\omega_{i}\). We may extend each \(\alpha_{i}\) by applying Lemma 10 again at a point \((x^{\prime},\tau_{i}(x^{\prime}))\in\alpha_{i}\), with \(x^{\prime}\neq x_{0}\). There we get another set of maps \(\tau_{j}:U_{j}\to\mathbb{R}\), only one of which, say \(\tau_{j^{\prime}}\), satisfies \(\nabla\tau_{j^{\prime}}(x^{\prime})=\nabla\tau_{i}(x^{\prime})\). By uniqueness of the maps \(\tau\), the two must agree on \(U_{i}\cap U_{j^{\prime}}\), and so \(\alpha_{i}\) is smoothly extended to \(U_{i}\cup U_{j^{\prime}}\). Therefore we can extend the \(\alpha_{i}\)'s to unique maximal open arcs which are pairwise transverse, whose boundary is determined by geodesics tangent to \(\partial K\). Finally, given \(x,y\in U_{i}\), the orders of the geodesics \(\gamma_{(x,-\nabla\tau_{i}(x))}\) and \(\gamma_{(y,-\nabla\tau_{i}(y))}\) must be equal by continuity. We can therefore write \(o(\gamma_{(x,-\nabla\tau_{i}(x))})=o(\alpha_{i})\) for any \(x\in U_{i}\). This gives another set of bounds: \[o(\alpha_{i})d_{K}\leq\inf_{x\in\alpha_{i}}\tau_{i}(x)\leq\sup_{x\in\alpha_{i}}\tau_{i}(x)\leq o(\alpha_{i})diam(S) \tag{8}\] Consider the set of geodesics ending at \(x_{1}\) which reflect exactly \(k\geq 0\) times. By Lemma 9, finitely many of these geodesics are tangent to \(\partial K\). Therefore there are finitely many maximal arcs \(\alpha_{i}\) for each order \(k\). Thus there are countably many maximal arcs in total, with \(\mathcal{T}^{0}(x_{1})=\cup_{i\geq 1}\alpha_{i}\). Note that from Equation (8) we see that the closed arcs \(\overline{\alpha_{i}}\) intersect at most finitely many other closed arcs \(\overline{\alpha_{i^{\prime}}}\), since at a point of intersection the travelling times are equal, and hence bound the order. There are finitely many \(\alpha_{i}\) of each order, thus bounding the number of possible intersections.
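As a numerical illustration of how the bounds (7) and (8) are used (our example, not from the original text): rearranging (7) pins the order of a geodesic between two explicit quantities,

\[\frac{t(\gamma)}{diam(S)}\leq o(\gamma)\leq\frac{t(\gamma)}{d_{K}},\]

so, for instance, with \(d_{K}=1\), \(diam(S)=4\) and \(t(\gamma)=8\) the order satisfies \(2\leq o(\gamma)\leq 8\). In particular, a given travelling time is compatible with only finitely many orders, which is how (8) bounds the number of arc intersections above.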
**Corollary 11.1**.: \(\mathcal{T}^{0}(x_{1})\) _is open and dense in \(\mathcal{T}(x_{1})\); therefore \(\mathcal{T}^{1}(x_{1})\cup\mathcal{T}^{2}(x_{1})=\cup_{i\geq 1}\partial\alpha_{i}\)._ **Corollary 11.2**.: _The set \(\mathcal{T}^{1}(x_{1})\) is open and dense in \(\mathcal{T}^{1}(x_{1})\cup\mathcal{T}^{2}(x_{1})\), and \(\mathcal{T}^{2}(x_{1})\) is discrete._ Now take some \((x_{0},t)\in\partial\alpha_{i}\) and define \(v_{0}=\lim_{x\to x_{0}}-\nabla\tau_{i}(x)\). Then \(\gamma_{(x_{0},v_{0})}\) will be a geodesic from \(x_{0}\) to \(x_{1}\) which is tangent to \(\partial K\) at least once. The following result follows exactly as in Noakes and Stoyanov (2021). **Proposition 12**.: _Given \((x_{0},t)\in\mathcal{T}^{1}(x_{1})\), there exist unique \(i,i^{\prime}\geq 1\) such that \(\{(x_{0},t)\}=\partial\alpha_{i}\cap\partial\alpha_{i^{\prime}}\), with \(o(\alpha_{i})+1=o(\alpha_{i^{\prime}})\). For all \(x\in U_{i}\cap U_{i^{\prime}}\) we have \(\tau_{i}(x)<\tau_{i^{\prime}}(x)\). Note that both \(U_{i}\) and \(U_{i^{\prime}}\) lie on the same side of \(x_{0}\) in \(\partial S\). Furthermore,_ \[\lim_{x\to x_{0}}\tau_{i}(x)=\lim_{x\to x_{0}}\tau_{i^{\prime}}(x)=t\text{ and}\] \[\lim_{x\to x_{0}}\nabla\tau_{i}(x)=\lim_{x\to x_{0}}\nabla\tau_{i^{\prime}}(x).\] Proposition 12 has the following interesting consequence, since the arcs \(U_{i}\) and \(U_{i^{\prime}}\) lie on the same side of \(x_{0}\) in \(\partial S\). **Corollary 12.1**.: \(\mathcal{T}^{1}(x_{1})\cup\mathcal{T}^{2}(x_{1})\) _is the closure of the set of all isolated cusps in \(\mathcal{T}(x_{1})\)._ This implies that we can detect the points of tangency in the set of travelling times directly. One could do this by observing the cusps in an appropriate embedding of the travelling times, as in the following example. **Example 2**.: Figure 8 displays a so-called echograph of the set \(\mathcal{T}(x_{1})\), up to 3 reflections, of obstacles in the Poincaré half-plane. \(x_{1}\) is the point in the bottom right of the outer boundary \(\partial S\) where the two blue arcs (representing geodesics which do not intersect the obstacles) meet. The embedding \(E:\mathcal{T}(x_{1})\to\mathbb{R}^{2}\) is given by \((x_{0},t_{0})\mapsto x_{0}+t_{0}\eta(x_{0})\), where \(\eta(x_{0})\) is the normal to \(\partial S\) at \(x_{0}\). Note that we have chosen to embed \(\mathcal{T}(x_{1})\) in the Euclidean plane to allow for an easier interpretation of the arcs. One could also choose an analogous embedding in the Poincaré half-plane, although the important features of the echograph (namely the cusps formed by the arcs) would remain the same. Each time an arc meets another at a cusp, the orders of the arcs must differ by exactly one. We denote the different orders by the use of colour, blue for the arcs of order \(0\) and red for those of order \(3\). Note that increasing the order of an arc will increase the number of data points computed by an order of magnitude. Define the modified arcs \(\alpha_{i}^{*}=\alpha_{i}\backslash\cup_{i^{\prime}\neq i}\alpha_{i^{\prime}}\), to exclude the finite number of points of intersection, and let \(\mathcal{T}^{0}(x_{1})^{*}=\cup_{i\geq 1}\alpha_{i}^{*}\). Then the \(\alpha_{i}^{*}\) partition \(\mathcal{T}^{0}(x_{1})^{*}\) and the generators \(\tau_{i}\) restrict to smooth functions on the open subsets \(\pi_{1}(\alpha_{i}^{*})\subseteq\partial S\). We now define the sets of travelling times with initial directions included.
\[\widetilde{\mathcal{T}}^{0}(x_{1})^{*}=\{(x,-\nabla\tau_{i}(x),t):(x,t)\in\mathcal{T}^{0}(x_{1})^{*}\}\] \[\widetilde{\mathcal{T}}^{0*}=\{(x,\omega,x_{1},t):x_{1}\in\partial S,(x,\omega,t)\in\widetilde{\mathcal{T}}^{0}(x_{1})^{*}\}\] Moreover, define \(\widetilde{\mathcal{T}}^{0}\) as the closure of \(\widetilde{\mathcal{T}}^{0*}\) in \[\{(x,\omega,x_{1},t):(x,x_{1},t)\in\mathcal{T}^{0}\text{ and }(x,\omega)\in T_{\partial}S\},\] and \(\widetilde{\mathcal{T}}^{0}(x_{1})=\{(x,\omega,t):(x,\omega,x_{1},t)\in\widetilde{\mathcal{T}}^{0}\}\). Similarly, define \(\widetilde{\mathcal{T}}\) as the closure of \(\widetilde{\mathcal{T}}^{0}\) in \[\{(x,\omega,x_{1},t):(x,x_{1},t)\in\mathcal{T}\text{ and }(x,\omega)\in T_{\partial}S\},\] \(\widetilde{\mathcal{T}}(x_{1})=\{(x,\omega,t):(x,\omega,x_{1},t)\in\widetilde{\mathcal{T}}\}\) and \(\widetilde{\mathcal{T}}^{q}=\{(x,\omega,x_{1},t)\in\widetilde{\mathcal{T}}:(x,x_{1},t)\in\mathcal{T}^{q}\}\). **Proposition 13**.: \(\widetilde{\mathcal{T}}^{1}\) _is a countable union of maximal smooth open arcs \(\beta_{j}\) such that:_ * _Each_ \(\beta_{j}\) _is diffeomorphic to a smooth open arc_ \(V_{j}\subseteq\partial S\)_._ * \(\widetilde{\mathcal{T}}^{2}=\cup_{j\geq 1}\partial\beta_{j}\)_._ * _Each_ \((x_{0},\omega_{0},x_{1},t)\in\widetilde{\mathcal{T}}^{2}\) _is on the boundary of exactly four distinct arcs_ \(\beta_{j}\)_,_ \(\beta_{j^{\prime}}\)_,_ \(\beta_{j^{\prime\prime}}\)_,_ \(\beta_{j^{\prime\prime\prime}}\)_, with three of_ \(V_{j},V_{j^{\prime}},V_{j^{\prime\prime}},V_{j^{\prime\prime\prime}}\) _being on the same side of_ \(x_{0}\)_._ Proof.: Given \((x_{0},\omega_{0},x_{1},t)\in\widetilde{\mathcal{T}}^{1}\), we know that the geodesic \(\gamma_{(x_{0},\omega_{0})}\) is tangent to \(\partial K\) exactly once, at either the first or last point of contact with \(\partial K\) (by general position). Suppose the geodesic is tangent at the first point of contact, say at \(x^{\prime}\in\partial K_{i}\). Parameterise \(\partial K_{i}\) in a neighbourhood \(W\) of \(x^{\prime}\) by the unit speed curve \(k_{i}:(0,1)\to S\). Then by the same argument as Lemma 1, after possibly shrinking \(W\), there is a smooth function \(\tau:W\to\mathbb{R}\) such that \(y(x)=\mathcal{F}_{\tau(x)}(x,\dot{k}_{i}(k_{i}^{-1}(x)))\in T_{\partial}S\) for all \(x\in W\), and \(y(x^{\prime})=(x_{0},-\omega_{0})\). This defines a smooth open arc \[V=\{(v(x),\mathcal{P}(v(x)),t(v(x))):x\in W\}\subseteq\widetilde{\mathcal{T}}^{1},\] where \(v(x)=(\pi_{1}\circ y(x),-\pi_{2}\circ y(x))\) and \(t\) is the travelling time function. A symmetric argument applies if \(\gamma_{(x_{0},\omega_{0})}\) is tangent at the last point of contact. Hence the path components \(\beta_{j}\) of \(\widetilde{\mathcal{T}}^{1}\) are disjoint maximal smooth open arcs. Note that \(\beta_{j}\) is therefore diffeomorphic to the open arc \(V_{j}=\pi_{1}(\beta_{j})\subseteq\partial S\). Now for \((x_{0},\omega_{0},x_{1},t)\in\widetilde{\mathcal{T}}^{2}\), the geodesic \(\gamma_{(x_{0},\omega_{0})}\) is tangent to \(\partial K\) exactly twice, at both the first and last points of contact with \(\partial K\), say \(x^{\prime}\in\partial K_{i}\) and \(x^{\prime\prime}\in\partial K_{i^{\prime}}\). Since the set \(\widetilde{\mathcal{T}}^{2}\) is discrete, we may take a neighbourhood \(U\) of \(x^{\prime}\) such that every \(x\in U\) is contained in a maximal arc \(\beta_{j}\) by the argument above.
This defines two maximal arcs \(\beta_{j}\) and \(\beta_{j^{\prime}}\), with \(V_{j}\) and \(V_{j^{\prime}}\) on opposite sides of \(x_{0}\). We can do the same for \(x^{\prime\prime}\), with \(\beta_{j^{\prime\prime}}\) and \(\beta_{j^{\prime\prime\prime}}\) being maximal arcs and \(V_{j^{\prime\prime}}\) and \(V_{j^{\prime\prime\prime}}\) on the same side of \(x_{0}\). Note that the four maximal arcs are all distinct. Thus \((x_{0},\omega_{0},x_{1},t)\) is the endpoint of exactly four open arcs. It now follows that \(\widetilde{\mathcal{T}}^{1}=\cup_{j\geq 1}\beta_{j}\), and \(\widetilde{\mathcal{T}}^{2}=\cup_{j\geq 1}\partial\beta_{j}\).

Figure 8: Echograph of two obstacles

## IV Reconstructing \(K\) from \(\mathcal{T}\)

Note that by Corollary 4.1 there are \(4n(n-1)\) arcs \(\beta_{j}\) which are tangent to \(\partial K\) but do not intersect the obstacles elsewhere (see Footnote 2). We re-order the maximal arcs so that for \(1\leq j\leq 4n(n-1)\) the arcs \(\beta_{j}\) are precisely the aforementioned arcs. For each arc \(\beta_{j}\), take a normal coordinate neighbourhood \(U\) containing \(V_{j}\), and denote the diffeomorphism \(\psi_{j}:V_{j}\to\beta_{j}\). Let \(\xi_{x}:\mathbb{R}\to S\) be the smooth geodesic starting from \(x\in V_{j}\) in the direction \(\psi_{j}(x)\). Locally the smooth geodesics \(\xi_{x}\) intersect pairwise at exactly one point within \(U\). Therefore their envelope, \(\Sigma(\beta_{j})\), is a smooth curve defined locally in \(U\). Footnote 2: Recall that in the case where \(n=2\) there are at most \(8\) arcs. Denote by \(x^{*}\in\partial S\) the clockwise terminal limit of \(V_{j}\), and let \[\eta_{j}=\lim_{x\to x^{*}}\psi_{j}(x)\] Suppose that \(\Sigma(\beta_{j})\) is strictly convex; then we say that \(\beta_{k}\) extends \(\beta_{j}\) if \(\partial\beta_{j}\cap\partial\beta_{k}=\{\eta_{j}\}\) and the closure of \(\Sigma(\beta_{j})\cup\Sigma(\beta_{k})\) is a strictly convex curve in \(S\). Otherwise we say that \(\beta_{j}\) is non-extendible. Note that for each \(\beta_{j}\) there are at most three arcs which could extend it, by Proposition 13. **Proposition 14**.: _Suppose that for every \((x,\omega,x_{1},t)\in\beta_{j}\) the geodesic \(\gamma_{(x,\omega)}\) is tangent at the first point of contact with \(\partial K\). Then precisely one of the following holds:_ 1. \(\beta_{j}\) _is uniquely extendible, by some arc_ \(\beta_{k}\) _such that_ \(\Sigma(\beta_{j})\cup\Sigma(\beta_{k})\) _is an arc in_ \(\partial K\)_._ 2. \(\beta_{j}\) _is non-extendible, with_ \(1\leq j\leq 4n(n-1)\)_, and the conjugate arc_ \(\beta_{j^{*}}\) _is extendible, where_ \[\beta_{j^{*}}=\{(x_{1},\omega,x,t):(x,\omega,x_{1},t)\in\beta_{j}\}\] Proof.: Given \(\beta_{j}\), and \(\eta_{j}\) corresponding to the clockwise terminal limit \(x^{*}\) of \(V_{j}\), the geodesic \(\gamma_{\eta_{j}}\) is tangent to \(\partial K\) exactly twice, by Proposition 13. For each \(x\in V_{j}\), let \(\widetilde{x}\) be the point of tangency with \(\partial K\) of the geodesic \(\gamma_{\psi_{j}(x)}\). Let \(\widetilde{x}^{*}\) be the first point of tangency of \(\gamma_{\eta_{j}}\). Then by continuity of \(\psi_{j}\), the tangency \(\widetilde{x}^{*}\) is a limit of the points of tangency \(\widetilde{x}\). There are two possible cases: _Case 1:_ The \(\widetilde{x}\) and \(\widetilde{x}^{*}\) remain on the same connected component \(\partial K_{i}\).
First note that by Proposition 13 there are three arcs \(\beta_{j^{\prime}},\beta_{j^{\prime\prime}}\), and \(\beta_{j^{\prime\prime\prime}}\) which could extend \(\beta_{j}\). We suppose that \(V_{j^{\prime}}\) is on the opposite side of \(x^{*}\) from \(V_{j}\), and \(V_{j^{\prime\prime}},V_{j^{\prime\prime\prime}}\) are either on the opposite or the same side as \(V_{j}\). Now since the first point of contact of \(\gamma_{\eta_{j}}\) remains on \(\partial K_{i}\) it follows that \(\beta_{j^{\prime}}\) will generate geodesics which are also tangent to \(\partial K_{i}\) at the first point of contact. Thus \(\beta_{j^{\prime}}\) extends \(\beta_{j}\), and since both arcs generate geodesics which are tangent to \(\partial K_{i}\), the envelopes \(\Sigma(\beta_{j})\cup\Sigma(\beta_{j^{\prime}})\) form an arc in \(\partial K_{i}\). Now to show that \(\beta_{j^{\prime}}\) is the unique arc which extends \(\beta_{j}\), suppose that \(\beta_{k}\) also extends \(\beta_{j}\) where \(k=j^{\prime\prime}\) or \(k=j^{\prime\prime\prime}\). Note that the arc \(\beta_{k}\) generates geodesics which are tangent to some other connected component \(\partial K_{i^{\prime}}\). Construct convex fronts for the envelope \(\Sigma(\beta_{k})\) and \(\partial K_{i^{\prime}}\) around the points of tangency from \(V_{k}\), as in Lemma 8. Then every geodesic which hits one front orthogonally must hit the other front orthogonally as well, by construction. This gives a contradiction by Lemma 9. So \(\Sigma(\beta_{k})\) cannot be a convex arc in \(S\). Thus the extension \(\beta_{j^{\prime}}\) is unique. _Case 2:_ The \(\widetilde{x}\) and \(\widetilde{x}^{*}\) do not remain on the same connected component \(\partial K_{i}\). In this case the second point of tangency of \(\gamma_{\eta_{j}}\) remains on the same connected component as the \(\widetilde{x}\), but this would imply that the first two points of contact of \(\gamma_{\eta_{j}}\) with \(\partial K\) are points of tangency. Hence, by general position, \(\gamma_{\eta_{j}}\) is a smooth geodesic, so \(1\leq j\leq 4n(n-1)\). Note that this also implies that the \(\widetilde{x}\) were the only points of contact between \(\gamma_{\psi_{j}(x)}\) and \(\partial K\). Now to extend \(\beta_{j}\) we look at the conjugate arc \(\beta_{j^{*}}\). By the first case, \(\beta_{j^{*}}\) is extendible, but \(\beta_{j}\) would not be extendible since the arcs which could extend it do not generate geodesics that are tangent to \(\partial K_{i}\). We can now outline how to reconstruct the obstacles from the maximal arcs \(\beta_{j}\). We begin by noting that for any of the first \(4n(n-1)\) arcs, we have \(\Sigma(\beta_{j})=\Sigma(\beta_{j^{*}})\), where \(\beta_{j^{*}}\) is the conjugate arc defined in Proposition 14. Now starting with any of these initial arcs \(\beta_{j}\), we extend \(\beta_{j}\) or \(\beta_{j^{*}}\) according to Proposition 14. Each extension can be further extended by the same method. We continue countably many times, or until the envelope of the extending arc becomes acceptably small. Repeating this process for each of the initial \(4n(n-1)\) arcs will recover \(\partial K\) from the travelling times. ###### Acknowledgements. This research is supported by an Australian Government Research Training Program (RTP) Scholarship.
2309.12732
OpenAi's GPT4 as coding assistant
Lately, Large Language Models have been widely used in code generation. GPT4 is considered the most potent Large Language Model from Openai. In this paper, we examine GPT3.5 and GPT4 as coding assistants. More specifically, we have constructed appropriate tests to check whether the two systems can a) answer typical questions that can arise during the code development, b) produce reliable code, and c) contribute to code debugging. The test results are impressive. The performance of GPT4 is outstanding and signals an increase in the productivity of programmers and the reorganization of software development procedures based on these new tools.
Lefteris Moussiades, George Zografos
2023-09-22T09:31:39Z
http://arxiv.org/abs/2309.12732v1
# OpenAI's GPT4 as coding assistant

###### Abstract

Lately, Large Language Models have been widely used in code generation. GPT4 is considered the most potent Large Language Model from Openai. In this paper, we examine GPT3.5 and GPT4 as coding assistants. More specifically, we have constructed appropriate tests to check whether the two systems can a) answer typical questions that can arise during the code development, b) produce reliable code, and c) contribute to code debugging. The test results are impressive. The performance of GPT4 is outstanding and signals an increase in the productivity of programmers and the reorganization of software development procedures based on these new tools.

## 1 Introduction

Among other features, Large Language Models (LLM) can generate code in various programming languages [1]. Recently, many publications have recommended and evaluated LLMs specialized in code generation. CodeBERT is a bimodal pre-trained model designed for programming and natural language tasks, like code search and documentation generation. It's developed using a Transformer-based architecture [2] and trained with a unique objective function to effectively use paired and unpaired data from programming and natural language sources [3]. Codex is a GPT language model fine-tuned on public GitHub code, and a version of it powers GitHub Copilot. When evaluated on the HumanEval set, designed to gauge program synthesis from docstrings, Codex solves 28.8% of the tasks, outperforming GPT-3 and GPT-J. The study also uncovers that multiple samplings from Codex enhance problem-solving success rates. Additionally, the paper discusses the challenges and broader implications of advanced code generation technologies [4]. The capabilities of large language models in synthesizing Python programs from natural language prompts using two new benchmarks, MBPP and MathQA-Python, are explored by [5]. The study reveals that as model size increases, synthesis performance also improves, with the largest models being able to correctly generate solutions to nearly 60% of MBPP problems through few-shot learning. The models also benefit from human feedback, cutting error rates in half, but struggle to predict the outputs of the generated programs when provided with specific inputs. Study [6] introduces a novel approach to code completion using an "external" context, emulating human behaviour of referencing related code snippets. The proposed framework combines retrieval techniques with traditional language models to better predict code, factoring in direct copying and semantically similar code references. When tested on Python and Java, this method achieves state-of-the-art performance on the CodeXGLUE benchmark. Paper [7] explores LLMs trained on unlabeled code corpora for code generation. It introduces CERT, a two-step method that creates a basic code outline and then fills in the details. The study also presents two new benchmarks, PandasEval and NumpyEval, for evaluating library-oriented code generation. PanGu-Coder is a pre-trained language model built on the PanGu-Alpha architecture designed to generate code from natural language descriptions. The model is trained using a two-stage strategy, starting with raw programming data, followed by task-focused training using Causal and Masked Language Modelling objectives [8]. Li et al.
introduced AlphaCode, a deep-learning model built with self-supervised learning and an encoder-decoder transformer, which approximates human-level performance in computer programming competitions on the Codeforces platform. The authors argue that this advancement could significantly boost programmers' productivity and reshape programming culture, where humans primarily define problems and machine learning handles code generation and execution [9]. CODEGEN is a family of large language models trained on natural language and programming data to advance program synthesis. The study also explores a multi-step approach to program synthesis, revealing improved performance when tasks are broken down into multiple prompts, and introduces an open benchmark, the Multi-Turn Programming Benchmark (MTPB), for this purpose [10]. Paper [11] investigates the impact of LLMs, like OpenAI Codex, on developers' code security. Through a user study involving 58 student programmers, the research examines the security of code written with the assistance of LLMs for a specific C-based task. The findings suggest that using LLMs does not substantially increase the risk of introducing critical security vulnerabilities in such coding tasks. RepoCoder [12] is a framework designed for repository-level code completion that efficiently leverages information scattered across different files in a repository. RepoCoder uses a combination of a similarity-based retriever and a pre-trained code language model, along with an innovative iterative retrieval-generation approach, to improve code completion at various levels of granularity. RepoCoder has been tested on a new benchmark called RepoEval. Paper [13] thoroughly surveys 27 large language models geared explicitly towards the NL2Code task, which involves generating code from natural language descriptions. The study evaluates these models using the HumanEval benchmark and derives that success in this domain hinges on "Large Size, Premium Data, Expert Tuning". The authors also introduce a dedicated website to monitor ongoing advancements and discuss the gap between model performance and human capabilities in the NL2Code realm. The BigCode community has unveiled StarCoder and StarCoderBase, advanced Large Language Models designed for code generation and infilling, with StarCoderBase trained on a vast dataset called The Stack and StarCoder being a fine-tuned version for Python [14]. WizardCoder is a model that empowers Code Large Language Models (Code LLMs) with complex instruction fine-tuning by adapting the Evol-Instruct method to the domain of code. It has been introduced in a paper [15] and has demonstrated exceptional performance in code-related tasks. Study [16] investigates the use of large language models (LLMs) to aid in deductive coding, a method in qualitative analysis where data is labelled based on predetermined codebooks. By integrating GPT-3 with expert-created codebooks for a specific task related to coding curiosity-driven questions, the approach reached satisfactory alignment with expert-labelled outcomes. The paper highlights the potential and challenges of employing LLMs in qualitative data coding and broader applications. One result of all this development is the addition of intelligent assistants to many well-known IDEs. For example, Visual Studio Code is supported by IntelliCode, PyCharm by Code With Me, Eclipse by Code Recommenders, NetBeans by Deep Learning, IntelliJ IDEA by Code With Me, and Xcode by SourceKit-LSP [17].
In March 2023, OpenAI published the GPT-4 system card [18], which analyzes the capabilities of GPT-4, including code generation. However, to date, we have not found any publication evaluating the coding capabilities of GPT-4. This paper evaluates GPT-4 and GPT-3.5 as coding assistants.

## 2 Methodology

We consider three tasks for which a coding assistant should be helpful: Code development, Code debugging, and answering questions related to code. Code development and code debugging are self-explanatory concepts. The human programmer often has questions during code writing, such as details on the syntax of a command. For this reason, we check whether GPT-3.5 and GPT-4 can answer questions about code satisfactorily.

There are many source code datasets, several of which are mentioned in the introduction. However, these are geared specifically towards checking LLMs' code production. In addition, problems of a prototypical nature often arise in the production environment. Although we do not know exactly which datasets GPT-3.5 and GPT-4 are trained on, it is reasonable to assume that they are trained on public datasets whose purpose is to evaluate LLMs' coding capabilities. For the reasons above, our tests do not rely on such datasets. Instead, we have carefully constructed three test suites: one for testing code generation capabilities, one for testing debugging capabilities, and one for answering questions. The tests were designed to limit the chances that GPT3.5 and 4 were trained on exactly the requested code. The tests were submitted through the web interface of GPT3.5 and 4. The prompt engineering of the tests follows the GPT best practices of OpenAI [19]. The results were evaluated by an expert human reviewer or compared to another reliable source. As the tests check different capabilities, more details about the test configuration and the evaluation of the results are given with the description of each test. Java was used as the programming language. All code and other answers generated by GPT3.5 and 4 are on GitHub [20].

## 3 Answering questions

In this task, we test the assistants to see if they can answer questions that often arise for developers when developing code. For this purpose, we constructed three questions of relative difficulty. We list the relevant prompts and then evaluate the assistants' answers.

* _Question 1 (Prompt)_: Does Java support passing a function as an argument to a function? What is the syntax?
* _Question 2 (Prompt)_: Consider the code System.out.print(s==s1+" "+s.equals(s1)); I expected it to display two boolean values, but it displays only one. Explain why?
* _Question 3 (Prompt)_: Non-abstract methods have an implementation. The same applies to the default methods. Non-abstract methods are inherited and can be overwritten. The same applies to default methods. What is the difference between default methods and non-abstract ones? Answer briefly.

#### Response

The GPT3.5 and GPT4 responses were evaluated by a human expert and found to answer all three questions satisfactorily. Responses can be found on GitHub [20].
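To make the subtlety behind Question 2 concrete, here is a minimal illustration of our own (it is not taken from the GPT answers): in Java, + binds tighter than ==, so the expression in Question 2 evaluates to a single boolean, and parenthesising recovers the two intended values.

```
public class Precedence {
    public static void main(String[] args) {
        String s = "ab", s1 = "ab"; // interned literals, so s == s1 holds
        // '+' binds tighter than '==': this compares s with "ab true"
        // and prints the single value "false".
        System.out.println(s == s1 + " " + s.equals(s1));
        // Parentheses yield the two expected values: "true true".
        System.out.println((s == s1) + " " + s.equals(s1));
    }
}
```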
## 4 Code Development Assistance

For code development, we constructed two tests. The first asks for developing a power function, and the second for implementing a tic-tac-toe application with predetermined classes.

### Power function (PF)

In this task, we asked GPT3.5 and 4 to implement a function that calculates the power of a real number raised to an integer exponent. Although the task seems simple at first glance, it is demanding when high calculation precision is required. The difficulty arises from the approximate nature of real numbers: the results of floating-point operations lack precision, and when there are many intermediate operations, the deviations from each operation accumulate, so the final result may present a significant deviation. So, this is a complex implementation when precision is required in the calculations. Moreover, exponentiation is a library feature rather than an application concern, as all languages provide a ready-made power function. Besides, after an exhaustive search on the web, we could not find a high-precision implementation.

#### Evaluation

The generated functions were compared with the Java Math.pow function. The Math.pow() function is implemented in Java as a native method, which means that it is implemented in the underlying platform's native code. The implementation of Math.pow() varies depending on the platform and the underlying hardware architecture. The algorithm is optimized for speed and is presumed to be relatively accurate. The results were checked based on the following procedure. Let GPT4.pow be the function produced by GPT4 and r(f,b,e) the result of the function f with base b and exponent e. For each b from 500 to 1000 with step 1 and each e from 0 to 9 with step 1, the values r(GPT4.pow,b,e) and r(Math.pow,b,e) are calculated. If, for a pair of these values, at least one is non-infinite and they differ from each other by more than 4.9E-324 (the smallest real value represented by the Java double type), the absolute value of their difference is added to an accumulator. Then, the accumulator is divided by the number of terms in the sum, and thus the average deviation of the GPT4.pow results from the Math.pow results is calculated. The same process is repeated to compare GPT3.5.pow to Math.pow. The whole process is repeated for exponents from -1 to -9.

#### PF Prompt #1

Develop a Java function that calculates the power of a real number raised to an integer exponent. Specifications:

1. Interface: public static double pow(double b, int e)
2. Don't use Math.pow or BigDecimal.pow
3. Achieve the maximum possible precision

#### Response

Both systems responded by providing a satisfactory implementation based on the exponentiation by squaring algorithm. The algorithm has time complexity O(log n), where n is the exponent. The implementations are almost identical, with only two minor differences:

* GPT4 checks if the exponent is odd by performing a bitwise AND with 1 \(((e\&1)==1)\), while GPT3.5 computes the remainder of an integer division \((e\%2==1)\).
* GPT4 performs a right shift by 1 to divide the exponent by 2 \((e>>=1)\), whereas GPT3.5 performs integer division \((e/=2)\) for the same purpose.

The algorithms presented the same average deviation with respect to Math.pow, which was 2.356527240763158E10 for positive exponents and 1.7112490986192953E-22 for negative exponents.
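For reference, here is a minimal reconstruction of the algorithm both systems produced, written by us from the description above with GPT4's variant of the two micro-differences; it is a sketch, not the verbatim GPT output.

```
public static double pow(double b, int e) {
    long n = e;             // widen so that negating Integer.MIN_VALUE is safe
    if (n < 0) {            // negative exponent: invert the base
        b = 1.0 / b;
        n = -n;
    }
    double result = 1.0;
    while (n > 0) {
        if ((n & 1) == 1) { // odd exponent bit (GPT3.5 wrote e % 2 == 1)
            result *= b;
        }
        b *= b;             // square the base each round
        n >>= 1;            // halve the exponent (GPT3.5 wrote e /= 2)
    }
    return result;
}
```

Squaring halves the exponent at each iteration, which is where the O(log n) running time mentioned above comes from.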
#### PF Prompt #2

Can you improve the precision of your function? I checked it against Math.pow and found significant discrepancies. Examples:

base = 502, exponent = 9, GPT.pow = 2.0245730632526733E24, Math.pow = 2.024573063252673E24, difference = 2.68435456E8
base = 504, exponent = 9, GPT.pow = 2.098335016107156E24, Math.pow = 2.0983350161071556E24, difference = 2.68435456E8

#### Response

GPT3.5 responded with a function that implements the Taylor series expansion [21] algorithm, which increases the time complexity to O(e²). GPT4 again used exponentiation by squaring but switched to the BigDecimal class [22], recommended for cases requiring precision in calculations. The mean deviation of GPT3.5 worsened to 2.2292150579952536E25 for positive exponents and 1.0012331308931004 for negative ones. The mean deviation of GPT4 improved to 2.3037066373333335E9 for positive exponents and 2.1726446876877912E-2 for negative ones.

### Tic-Tac-Toe application (TTT)

In this task, we asked GPT to develop a tic-tac-toe application following special specifications. We set certain specifications to minimize the chance that a tic-tac-toe app would be found ready-made and delivered intact.

#### TTT Prompt #1

Develop a command-line tic-tac-toe application consisting of the following classes: Player, Board, LivePlayer, RBPlayer, and Game.

* _Player_: Is an abstract class containing a final char id and an abstract method Board move(Board board).
* _Class Board_: Represents the game board. It contains the following public function members: void displayBoard(), which displays the game board in its current status, and char win(), which returns the winner's id or, if there is no winner, a whitespace character.
* _Class LivePlayer_: Represents a human player. It is a concrete class implementation inherited from Player.
* _Class RBPlayer_: Represents an artificial rule-based player. It is based on the following rules: A. If there is a movement to win, select it. B. If the opponent has a movement to win, select it to block the opponent from winning.
* _Game_: Uses the above-described classes to implement a tic-tac-toe game.

#### Response

GPT4 responded with a fully functional application that meets all our requirements. The code quality is good, including a warning that the used Board object could have been declared final. GPT3.5 responded with code that contained compile-time errors. We performed the following communication to investigate its ability to produce correct code.

#### TTT Prompt #1.1

Your code compiles with errors. Examples:

* error: cells has private access in Board board.cells[i][j] = id;
* error: cannot assign a value to final variable board board = currentPlayer.move(board);

Rewrite code to avoid compile-time errors.

GPT3.5 replied with code containing logical errors. We prompted it as follows:

#### TTT Prompt #1.2

Your code has logical errors. Here is the output of your code after two movements of each player

Player X, enter your move (row [0-2] and column [0-2]): **1 1**

After the second fix, in the third version of the application, GPT3.5 responded with functional code.

Next, we requested a new class representing an artificial player based on the minimax [23] algorithm. Minimax implements a perfect player, i.e., a player who never loses; therefore, the worst possible outcome minimax may give is a draw.
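To fix ideas, the core of such a player is the standard minimax recursion sketched below. This is our own illustration, not the code GPT4 produced; it scores positions on a 3x3 board held as a char[9] of 'X', 'O' and ' ', and it assumes a hypothetical helper winner(b) that returns the winner's id or ' '.

```
// Our sketch of the standard minimax recursion for tic-tac-toe.
// winner(b) is a hypothetical helper returning 'X', 'O' or ' '.
static int minimax(char[] b, char me, char turn) {
    char w = winner(b);
    if (w == me) return 1;   // a position we have already won
    if (w != ' ') return -1; // a position the opponent has won
    int best = (turn == me) ? -2 : 2;
    boolean moved = false;
    for (int i = 0; i < 9; i++) {
        if (b[i] != ' ') continue;
        moved = true;
        b[i] = turn;         // try the move...
        int s = minimax(b, me, turn == 'X' ? 'O' : 'X');
        b[i] = ' ';          // ...and undo it
        best = (turn == me) ? Math.max(best, s) : Math.min(best, s);
    }
    return moved ? best : 0; // no free cell left: a draw
}
```

A MinimaxPlayer would then pick a free cell whose score under this recursion is maximal, which is what makes the player unbeatable.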
#### TTT Prompt #2

Can you add the class MinimaxPlayer representing an artificial player based on the well-known minimax algorithm?

#### Response

GPT4 responded with a fully functional minimax player. GPT3.5 replied with an erroneous version of a minimax player. A communication ensued in which we attempted to inform GPT3.5 of its errors, but it failed to present a satisfactory solution. Finally, we prompted GPT3.5 as follows:

#### TTT Prompt #2.1

No improvement. It's still straightforward for anyone to win your MinimaxPlayer. I'm giving you the game board if it can help you. Please don't give me the same wrong algorithm again. If you can't do better, just let me know.

Player X, enter your move (row [0-2] and column [0-2]): **2 0**

Player X wins!

Here, GPT3.5 explained the difficulties of implementing the algorithm and suggested that we study the matter more or look for a ready-made solution on GitHub.

## 5 Debugging Assistance (DA)

To test the debugging capabilities, we designed two tests. One includes code that throws an exception, and the other includes code containing a logic error.

### Exception (E)

In this task, we provided code that crashes with an IndexOutOfBoundsException and asked GPT3.5 and 4 to explain the problem and fix the code.

#### DA-E Prompt #1

The code below fails with IndexOutOfBoundsException.

```
import java.util.ArrayList;
import java.util.List;

public class Debug2 {

    static ArrayList<String> l = new ArrayList<>();

    static void load() {
        l.add("Green");
        l.add("Black");
        l.add("Blue");
        l.add("White");
        l.add("Pink");
        l.add("Black");
    }

    static void delAll(List<String> l, String target) {
        int size = l.size();
        for (int i = 0; i < size; i++)
            if (target.equals(l.get(i))) {
                l.remove(i);
            }
    }

    public static void main(String[] args) {
        load();
        delAll(l, "Black");
    }
}
```

Explain the error and correct the code.

#### Explanation of the error

The exception is raised in the delAll function, which is responsible for deleting all the target elements from the list l. The function stores the list size in the local variable size and then, in the iterative process, tries to delete every element equal to the target. However, after deleting the first matching element, the list size is reduced by 1, while delAll keeps indexing the list up to its original size, which leads to the exception.

#### Response

Both assistants solved the problem successfully. While GPT3.5 proposed a solution based on an Iterator, GPT4 proposed two alternatives. In the first solution, the for control expression replaces the size variable with the call that returns the current list size (l.size()), and inside the for, i is decremented by one each time an element is deleted. The second solution traverses the list from the end (l.size()-1) to the beginning, thus ensuring no IndexOutOfBoundsException issue.
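For reference, here is a minimal reconstruction of the second alternative described above, replacing delAll in the Debug2 class (our sketch, not the verbatim GPT4 output):

```
static void delAll(List<String> l, String target) {
    // Traversing from the end to the beginning: removing element i
    // never shifts the indices of the elements still to be visited.
    for (int i = l.size() - 1; i >= 0; i--) {
        if (target.equals(l.get(i))) {
            l.remove(i);
        }
    }
}
```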
```
// Code containing logical error
import java.util.Arrays;

public class Debugging {
    static int[] resize(int[] input, int newSize) {
        return Arrays.copyOf(input, newSize < input.length ? newSize : input.length);
    }

    static int add(int[] array, int data, int index) {
        for (int i = 0; i <= index; i++) {
            if (array[i] == data) {
                return index;
            }
        }
        array[index++] = data;
        return index;
    }

    static int[] generateSet(int... array) {
        int[] set = new int[array.length];
        int idx = 0;
        for (int element : array) {
            idx = add(set, element, idx);
        }
        resize(set, idx);
        return set;
    }

    static int[] concat(int[] array1, int[] array2) {
        int[] rslt = new int[array1.length + array2.length];
        System.arraycopy(array1, 0, rslt, 0, array1.length);
        System.arraycopy(array2, 0, rslt, array1.length, array2.length);
        return generateSet(rslt);
    }

    public static void main(String[] args) {
        int[] set1 = generateSet(1, 2, 3, 4, 0), set2 = generateSet(0, 3, 4, 5, 6);
        int[] union = concat(set1, set2);
        System.out.println(Arrays.toString(union));
    }
}
```

#### Explanation of the error

There are two bugs in the code. The first one is found in generateSet, which calls the function resize but does not assign the array returned by resize to the set variable. Thus, set retains its original size and data. So the fix needed here is return resize(set, idx); instead of resize(set, idx); return set;. The second error is within the add function, which iterates while i <= index, whereas the correct condition is i < index.

#### Response

First, GPT3.5 and GPT4 correctly explained the problems in the add and generateSet functions. In addition, they identified a resize problem when there is none. More specifically, GPT3.5 commented:

1. The resize method is not updating the size of the array correctly. It creates a new array of the specified size but doesn't copy the elements from the original array.
2. Use Arrays.copyOf to create a new array of the desired size and copy the elements from the original array to the new one.

And GPT4 commented:

1. Resize method: In the current implementation, if newSize is larger than input.length, it would return an array of the same size as input. This does not match the intended behavior of resizing the array to newSize.

These comments are wrong. However, the generated code is functional, as it correctly fixes both add and generateSet, while the change made to resize does not affect this specific code. More specifically, both systems converted resize so that it does not support reducing the size of the input array. Indeed, size reduction is not needed in this code. Of course, a resize that can reduce an array's length (with possible data loss) might be helpful elsewhere.

## 6 Conclusions

In this work, we examined the potential of GPT3.5 and 4 as coding assistants for three distinct tasks: answering questions and providing development and debugging assistance. In answering questions, both LLMs proved to be efficient. In development assistance, GPT4 proved superior to GPT3.5: in creating the pow function it achieved a significant improvement in accuracy, and it immediately met the requirements for the tic-tac-toe application with complete success. Moreover, it added a player based on the minimax algorithm with ease; this requirement is, in our estimation, far from easy to implement. GPT3.5 failed to meet it. In testing the debugging capabilities, GPT3.5 and 4 responded promptly and successfully to the exception and logical error investigations. We conclude that GPT4 can provide substantial and reliable help as a coding assistant for all three tested capabilities.
As expected, GPT3.5 appeared inferior to GPT4, but its capabilities are still impressive. Recently, there has been a heated debate about whether artificial intelligence will replace human programmers. We believe a definitive answer to this question is impossible, as no one can predict the future. Currently, however, GPT4 can provide meaningful and reliable coding assistance and dramatically improve the productivity of human developers. This is bound to reorganize software production processes and will probably not leave the programmers' job market unaffected. Whether its effect will mainly increase the amount of software produced or unemployment in the developer industry remains to be seen.
2310.00255
Identifying Distribution Network Faults Using Adaptive Transition Probability
A novel approach is suggested for improving the accuracy of fault detection in distribution networks. This technique combines adaptive probability learning and waveform decomposition to optimize the similarity of features. Its objective is to discover the most appropriate linear mapping between simulated and real data to minimize distribution differences. By aligning the data in the same feature space, the proposed method effectively overcomes the challenge posed by limited sample size when identifying faults and classifying real data in distribution networks. Experimental results utilizing simulated system data and real field data demonstrate that this approach outperforms commonly used classification models such as convolutional neural networks, support vector machines, and k-nearest neighbors, especially under adaptive learning conditions. Consequently, this research provides a fresh perspective on fault detection in distribution networks, particularly when adaptive learning conditions are employed.
Xinliang Ma, Weihua Liu, Bingying Jin
2023-09-30T05:01:01Z
http://arxiv.org/abs/2310.00255v1
# Identifying Distribution Network Faults Using Adaptive Transition Probability ###### Abstract A novel approach is suggested for improving the accuracy of fault detection in distribution networks. This technique combines adaptive probability learning and waveform decomposition to optimize the similarity of features. Its objective is to discover the most appropriate linear mapping between simulated and real data to minimize distribution differences. By aligning the data in the same feature space, the proposed method effectively overcomes the challenge posed by limited sample size when identifying faults and classifying real data in distribution networks. Experimental results utilizing simulated system data and real field data demonstrate that this approach outperforms commonly used classification models such as convolutional neural networks, support vector machines, and k-nearest neighbors, especially under adaptive learning conditions. Consequently, this research provides a fresh perspective on fault detection in distribution networks, particularly when adaptive learning conditions are employed. Distribution network; early-stage fault; fault identification; feature extraction; adaptive probability learning ## I Introduction Distribution networks play a critical role in the electricity supply system by serving end-users and ensuring power quality, operational efficiency, and innovative customer services [1][2]. Despite their extensive coverage, varied equipment, and relatively low replacement costs, distribution networks are often neglected by power utilities when it comes to ensuring reliable supply [3]. Consequently, the development of fault identification technologies in these networks has been slow [3]. Currently, most faults in distribution networks are only addressed through repairs after they occur, leading to significant service disruptions for users. However, as the power grid evolves and expectations for supply reliability increase, power utilities are now placing more importance on predicting and diagnosing equipment faults in distribution networks. This shift aims to promptly address safety risks and minimize the frequency of power outages. Faults in equipment within distribution networks can occur suddenly or gradually over time [4]. One common fault is a ground fault, where protective devices isolate the faulty portion and restore normal operation once the fault is resolved [5]. However, the electrical arcing during the fault can cause irreparable damage to the insulation. If this process repeats multiple times, it can lead to insulation degradation and eventual breakdown [6]. These initial phase faults, referred to as "early-stage faults" in this article, are often disregarded by power utilities but contain valuable information about the insulation [7]. If effectively utilized, the waveforms associated with these faults can provide early warnings of faults in distribution networks and improve supply reliability [8][9]. Due to the complex nature of distribution network structures, traditional waveform analysis methods based on mechanisms are not efficient [10]. However, the integration of multiple sensors has allowed for the adoption of data-driven models in this field. The identification of faults in distribution networks is a challenging task, particularly when it involves electrical arcing in ground faults, which are characterized by randomness and uncertainty [11]. 
Obtaining sufficient data for training models is difficult due to the rarity of self-recoverable faults [12]. As a result, many fault identification algorithms rely on simulated or experimental data for developing and testing models. Only a small number of algorithms incorporate real field data, which is more complex and influenced by multiple interfering factors [13][14]. Therefore, it is crucial to assess model performance using real field data [15][16]. Furthermore, modern artificial intelligence techniques such as diverse convolutional neural networks often lack interpretability. The features extracted from these models are not easily understandable to humans, making it challenging to evaluate the quality of features and incorporate prior knowledge. We present a proposed solution to tackle the previously mentioned challenges associated with identifying faults in distribution networks. Our approach incorporates adaptive probability learning, which entails training the model using simulated data and testing it with actual field data. By evaluating feature similarity and extracting universal features, our adaptive probability learning algorithm overcomes the difficulties posed by varying network structures, line parameters, and operating conditions. This is crucial as the distributions of simulated and real data often diverge in these aspects. Our method comprises two stages: waveform decomposition to obtain feature vectors in the first stage, and linear mapping in the second stage for dimension reduction and feature reconstruction. We determine the optimal linear mapping by maximizing the likelihood of consistent reconstruction. Additionally, we utilize clustering in the reduced-feature space to classify events. Compared to other approaches, our model effectively addresses the disparities between simulated and real data and offers strong interpretability. This presents a fresh and efficient strategy for fault identification in distribution networks. ## II Adaptive Probability Learning An objective of adaptive learning is to address the discrepancy in data distributions between simulated and real-world data. One proposed strategy to overcome this problem involves measuring a model's performance across different data contexts using feature similarity as a metric [16][17]. On the other hand, adaptive probability learning utilizes probability to evaluate the similarity between different features by calculating the reconstruction error, providing valuable insights [18][19][20]. ### _Adaptive Learning_ Adaptive learning relies on two kinds of data: simulation data and real-world data [21]. Simulation data (\(D_{s}\)) includes waveforms and event classes from simulated events, while real-world data (\(D_{t}\)) consists of waveforms of events with unknown classes. The main idea behind adaptive learning is that the event categories in the simulation data are similar to those in the real-world data. This similarity allows us to make informed guesses about the event categories in the real-world data by utilizing the knowledge gained from the simulated data [22]. However, it is important to note that these two datasets often have different distributions due to variations in grid structure, line parameters, and operational conditions. In simpler terms, the probability distribution of the simulation data (\(P_{s}\left(x_{i}^{s},y_{i}^{s}\right)\)) may not be the same as that of the real-world data (\(P_{t}\left(x_{i}^{t},y_{i}^{t}\right)\)). 
Therefore, adapting knowledge from one domain to another presents challenges and complexities. It is important to consider that the performance of a classification model, as measured by the error \(L_{s}\) on simulated data, may not accurately reflect its performance on real-world data, as indicated by the error \(L_{t}\). This discrepancy arises because we need to determine if the features generated by the model have meaning and can be transferred between simulated and actual data. To address this concern, our proposed approach utilizes waveform decomposition to enhance feature congruence and identify the optimal linear mapping. This relationship can be expressed as \(L_{t}=L_{s}+L_{sim}\), illustrating that a reliable classification model should not only perform well on simulated data but should also exhibit a high degree of similarity in the extracted features from simulated and actual datasets.

### _Feature Extraction_

The research utilizes a wavelet decomposition-based approach for feature extraction, which is advantageous compared to deep neural networks. One advantage is that it necessitates less data, making it appropriate for situations with limited samples such as distribution network fault diagnosis [23]. Another advantage is that the extracted features are highly interpretable, facilitating the incorporation of prior knowledge and enhancing accuracy [24]. The method steps are as follows: first, waveforms are decomposed into approximate and detail components through wavelet transformation. The approximate component reflects the overall shape of the waveform, while the detail component reflects distortions. Based on this decomposition, the fundamental component \(z_{o}\) and bias \(z_{\text{off}}\) are extracted from the approximate component, and pulse \(z_{p}\), harmonic \(z_{h}\), and distortion \(z_{d}\) are extracted from the detail component. For different components, corresponding features are extracted. For example, the fundamental component \(z_{o}\) corresponds to features like amplitude \(A_{o}\) and frequency \(f_{o}\), bias \(z_{\text{off}}\) corresponds to amplitude \(A_{\text{off}}\), pulse \(z_{p}\) corresponds to peak value \(A_{p}\) and pulse width \(t_{p}\), harmonic \(z_{h}\) corresponds to amplitude \(A_{h}\) and frequency \(f_{h}\), and distortion \(z_{d}\) corresponds to distortion factor \(w_{d}\). All features are normalized to eliminate scale effects. In addition to the features of the components themselves, the time intervals between components \(t\left(z_{i},z_{i+1}\right)\) are also considered, where \(z_{i}\) represents the \(i\)-th component. Figure 1 gives an illustration of waveform decomposition, and after this decomposition, any waveform \(w=\{I_{\text{A}},I_{\text{B}},I_{\text{C}},U_{\text{A}},U_{\text{B}},U_{C}\}\) can be uniquely determined by a feature vector \(\phi(w)=[A_{o},f_{o},A_{\text{off}},A_{p},t_{p},A_{h},f_{h},w_{d},t\left(z_{i},z_{i+1}\right)]\).

Fig. 1: Illustration of waveform decomposition

Fig. 2: Illustration of the principle of the reconstruction error

### _Adaptive Probabilistic Learning_

The problem involves analyzing both simulated data (\(x_{i}^{s}\)) and real data (\(x_{j}^{t}\)). We know the category of the simulated data (\(y_{i}^{s}\)), but the category of the real data (\(y_{j}^{t}\)) is unknown. We extract features from the data, resulting in feature vectors denoted as \(A_{i}:=\phi\left(x_{i}^{s}\right)\) and \(B_{j}:=\phi\left(x_{j}^{t}\right)\).
Since the feature vectors \(A_{i}\) and \(B_{j}\) have high dimensionality, we use a linear mapping \(\varphi\) to reduce their dimensions, resulting in reduced feature vectors \(A_{i}^{\prime}:=\varphi\left(A_{i}\right)\) and \(B_{j}^{\prime}:=\varphi\left(B_{j}\right)\). We then assess the similarity between \(A_{i}^{\prime}\) and \(B_{k}^{\prime}\) by calculating the reconstruction error. This allows us to determine the probability of transforming between \(A_{i}^{\prime}\) and \(B_{k}^{\prime}\) and vice versa. It is important to note that although \(A_{i}^{\prime}\) and \(A_{j}^{\prime}\) may differ, a consistent reconstruction requires their respective categories (\(y_{i}^{s}\) and \(y_{j}^{s}\)) to be the same. Figure 2 illustrates the concept of the reconstruction error.

First, to determine the likelihood of transforming \(A_{i}^{\prime}\) into \(B_{k}^{\prime}\), we compute the inner product of the two vectors, denoted as \(M_{ik}=\langle A_{i}^{\prime},B_{k}^{\prime}\rangle\). Next, we evaluate the transition probability between the two vectors with the following softmax formula:

\[P_{ik}^{ab}=P(B_{k}^{\prime}|A_{i}^{\prime}):=\frac{\exp(M_{ik})}{\sum_{k^{\prime}}\exp(M_{ik^{\prime}})}\]

Likewise, the likelihood of \(B_{k}^{\prime}\) changing into \(A_{j}^{\prime}\) can be represented as follows:

\[P_{kj}^{ba}=P(A_{j}^{\prime}|B_{k}^{\prime}):=\frac{\exp(M_{jk})}{\sum_{j^{\prime}}\exp(M_{j^{\prime}k})}\]

Therefore, converting from \(A_{i}^{\prime}\) to \(B_{k}^{\prime}\) and then to \(A_{j}^{\prime}\) in this cycle has a probability of:

\[P_{ij}^{aba}=\left(P^{ab}P^{ba}\right)_{ij}=\sum_{k}P_{ik}^{ab}P_{kj}^{ba}\]

This round-trip probability is symmetrised as

\[\bar{P}_{ij}^{aba}=\frac{1}{2}\left(P_{ij}^{aba}+P_{ji}^{aba}\right)\]

and, since the reconstruction is consistent only when the labels \(y_{i}^{s}\) and \(y_{j}^{s}\) agree, its anticipated (target) distribution is

\[T_{ij}:=\begin{cases}1/N_{c}(y_{i}^{s})&A_{i}^{\prime},A_{j}^{\prime}\text{ have the same category}\\ 0&\text{otherwise}\end{cases}\]

where \(N_{c}(y_{i}^{s})\) is the count of instances in dataset \(D_{s}\) that belong to category \(y_{i}^{s}\). The measure \(H\) quantifies the difference between \(\bar{P}_{ij}^{aba}\) and \(T\):

\[L_{w}:=H(\bar{P}_{ij}^{aba},T)\]

To enhance the integration of real data in the reconstruction process, we also consider a traversal error, which evaluates the likelihood of each genuine data element being included in the reconstruction procedure:

\[L_{v}=H(P^{v},V),\qquad P_{k}^{v}=\sum_{x_{i}^{s}}P_{ik}^{ab},\qquad V_{k}=1/|D_{t}|\]

During training, the overall error is obtained by combining these feature-similarity errors (\(L_{sim}\)) with the classification error on the simulated data (\(L_{s}\)).

## III Experimental Data and Procedure

In order to verify and assess the reliability of the proposed model, a series of experiments is conducted, utilizing both simulated data and real-world data gathered in the field. The key objective of adaptive learning is to initially train the model using simulated data and subsequently enable it to autonomously identify and categorize real data.

### _Simulation system_

The simulation system shown in Figure 3 is based on the IEEE 13 node model and operates at a voltage level of 10 kV and a frequency of 50 Hz. A sampling frequency of 4 kHz is used in the simulation. The figure indicates the fault location and load conditions.
Fig. 3: Configuration of the simulation system

The simulation system is created using PSCAD software [25]. To simulate faults, the Kizilcay arc model is employed, which captures the dynamic behavior of the arc using control theory principles and the energy balance within the arc column. The mathematical expression for this arc model is provided as follows:

\[\frac{dg(t)}{dt}=\frac{1}{\tau}\left(\frac{\left|i_{f}(t)\right|}{u_{o}+r_{o}\left|i_{f}(t)\right|}-g(t)\right)\]

\[g(t)=\frac{u_{f}(t)}{i_{f}(t)}\]

Different variables such as fault impedance, fault starting angle, fault distance, line parameters, load parameters, and noise levels were modified to simulate network fault data under different conditions. These modifications allowed for the creation of simulation data for four types of events: single-frequency early fault, multi-frequency early fault, permanent fault, and transient interference [26]. Figure 3 indicates the potential locations of early faults, permanent faults, capacitors, and loads. A total of 10 sets of simulation data were randomly generated for all possible scenarios. The arc conductance \(g(t)\), arc current \(i_{f}(t)\), and arc voltage \(u_{f}(t)\) are measured in S/m, A, and V, respectively. The arc time constant \(\tau\), arc characteristic resistance \(r_{o}\), and arc characteristic voltage \(u_{o}\) are measured in seconds, ohms, and volts, respectively. The parameter ranges for \(\tau\), \(u_{o}\), and \(r_{o}\) are 0.2-0.4 ms, 300-4000 V, and 0.01-0.015 \(\Omega\), respectively.

### _Actual Data_

Between February and May 2021, data was gathered in Guangdong Province from fault detection devices installed on a 10 kV overhead line. The system utilizes low-current grounding, and the fault detection device samples voltage and current signals at a frequency of 4096 Hz. Recording commences when the voltage or current signal surpasses a predefined threshold. The device captures the three-phase voltage and current waveforms before and after a fault occurrence, with a recording duration of sixteen cycles. Events are categorized based on waveform analysis and onsite confirmation of the fault's cause. There are three types of events: Early faults, Permanent faults, and Transient interferences. Early faults are transient faults that can be recovered and are further divided into single-cycle early faults and multi-cycle early faults, indicating varying levels of severity. Permanent faults cannot be self-recovered and require intervention from protective devices [24]. While fault detection devices can also be triggered by overvoltages caused by operations and lightning, these overvoltages are not considered faults but are categorized as transient interferences [27]. Figure 4 illustrates typical waveforms associated with each event type. In addition, Table 1 provides statistical data on the quantities of each event type.

Fig. 4: Typical waveforms for different types of events

### _Experimental Procedure_

Two experiments were carried out to assess the adaptability of the model in this study. In the first experiment, the model was trained using all simulated data, while a random sample of actual data was used for validation, with the labels of the validation set known. The remaining actual data was then used as the test set. The second experiment followed a similar setup, with all simulated data used for training, a random sample of actual data used for validation, and the remaining actual data used for testing.
However, in this case, the labels of the validation set were unknown. It is important to note that each of these three sets serves a specific purpose: the training set is used to train the model, the validation set assesses performance and tunes hyperparameters, and the test set evaluates the final model's performance. The experimental data is divided into three groups [28]: the training set comprising 320 samples, the validation set consisting of 160 samples, and the test set containing 156 samples. To minimize the impact of event type distribution, each experiment is replicated 10 times, and the average performance across these 10 iterations is measured. The F1 score is utilized as the evaluation metric to assess the model's performance. ## IV Experimental Results and Analysis In this section, we will compare the adaptive probabilistic learning model introduced in this study with three commonly employed classifiers: Convolutional Neural Network (CNN), Support Vector Machine (SVM), and K-Nearest Neighbors (KNN) algorithm. The objective is to illustrate the superiority of our proposed method. Unlike traditional classifiers, our adaptive probabilistic learning model considers the variances between simulated and real data distributions and integrates the notion of feature similarity to tackle this problem. As a result, it achieves substantially improved performance. ### _Adaptive Probabilistic Learning_ The adaptive probabilistic concept learning model undergoes a training process that involves breaking down waveforms into different components, such as fundamental wave, bias, pulse, harmonic, and distortion. For each component, feature values and time intervals are calculated, and these feature vectors are then reduced in dimensionality through a linear mapping step. The similarity of features is measured by the reconstruction error. The training of the model involves estimating the actual data error, which combines the feature similarity error and the classification error of simulated data. This estimation helps determine the optimal parameters of the linear mapping. During testing, test waveforms are decomposed and mapped onto the feature space using linear mapping. Once in this space, they are clustered to make predictions about the corresponding event types. In Experiment 1, the validation set contains the actual data labels. This allows the model trained using the aforementioned method to directly predict the validation set. Using these predicted results, the best model can be identified by comparing them with the true labels. Subsequently, this best model is applied to the test set to generate the final test results. The validation set used in Experiment 2 includes unlabeled data. As a result, the model can be directly used on the test set to obtain the final test results. Table 2 presents evidence of how the adaptive probabilistic learning model effectively handles diverse faults. The model achieves this by taking into account the dissimilarities in distribution between simulated and real data and utilizing the similarities in characteristics to establish a correlation between the model's errors on both types of data. The central concept involves identifying a suitable conversion technique that transforms the original waveform into a feature vector space. This enables precise categorization of both simulated and real data within this space, ensuring that similar data is clustered together while dissimilar data is dispersed. 
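To make the mechanics of Section II concrete, the following is a minimal illustrative sketch (ours, not the authors' code) of the association-cycle probabilities: the similarity matrix \(M_{ik}=\langle A_{i}^{\prime},B_{k}^{\prime}\rangle\), the softmax transition probabilities \(P^{ab}\) and \(P^{ba}\), and the round-trip probability \(P^{aba}=P^{ab}P^{ba}\).

```
// Minimal sketch of the association-cycle probability of Section II (illustrative only).
// aPrime: reduced simulated feature vectors (n x d); bPrime: reduced real feature vectors (m x d).
public class AssociationCycle {

    // Row-wise softmax: p[i][k] = exp(m[i][k]) / sum_k' exp(m[i][k'])
    static double[][] softmaxRows(double[][] m) {
        double[][] p = new double[m.length][];
        for (int i = 0; i < m.length; i++) {
            double max = Double.NEGATIVE_INFINITY;   // subtract the row maximum for numerical stability
            for (double v : m[i]) max = Math.max(max, v);
            double[] row = new double[m[i].length];
            double sum = 0.0;
            for (int k = 0; k < row.length; k++) { row[k] = Math.exp(m[i][k] - max); sum += row[k]; }
            for (int k = 0; k < row.length; k++) row[k] /= sum;
            p[i] = row;
        }
        return p;
    }

    static double[][] transpose(double[][] m) {
        double[][] t = new double[m[0].length][m.length];
        for (int i = 0; i < m.length; i++)
            for (int k = 0; k < m[0].length; k++) t[k][i] = m[i][k];
        return t;
    }

    static double[][] multiply(double[][] x, double[][] y) {
        double[][] r = new double[x.length][y[0].length];
        for (int i = 0; i < x.length; i++)
            for (int k = 0; k < y.length; k++)
                for (int j = 0; j < y[0].length; j++) r[i][j] += x[i][k] * y[k][j];
        return r;
    }

    // P^{aba} = P^{ab} P^{ba}: probability of the round trip A' -> B' -> A'.
    static double[][] cycleProbability(double[][] aPrime, double[][] bPrime) {
        double[][] M = multiply(aPrime, transpose(bPrime)); // M_ik = <A'_i, B'_k>
        double[][] pab = softmaxRows(M);                    // normalize over real samples k
        double[][] pba = softmaxRows(transpose(M));         // normalize over simulated samples j
        return multiply(pab, pba);
    }
}
```

During training, \(L_{w}\) compares this matrix with the uniform same-class target \(T\), and \(L_{v}\) pushes the column sums of \(P^{ab}\) towards the uniform distribution \(V\) over the real samples.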
### _Comparing with other models_ The experiment utilized a convolutional neural network model based on the architecture of AlexNet. The model consisted of 5 convolutional layers and 3 fully connected layers. The input layer had dimensions of 1 x N x 6, where N represented the length of the sample including 6 groups of waveforms. The first convolutional layer had a kernel size of 1 x 41, a stride of 20, and 40 convolutional kernels. The second convolutional layer had a kernel size of 1 x 20, a stride of 10, and 20 convolutional kernels. The third, fourth, and fifth convolutional layers had kernel sizes of 1 x 10, strides of 5, and 10 convolutional kernels each. The three fully connected layers had sizes of 512, 512, and 4, respectively, with the last layer representing the number of output classes. The process involved extracting and linearly transforming the input waveform, with the final output indicating the probability of the event belonging to each class. Choosing the appropriate kernel function was crucial for the support vector machine model, and in Experiment 2, the kernel function category with the highest accuracy in the training set was selected. Both Experiment 1 and 2 confirmed the selection of the polynomial kernel function type. In the K-nearest neighbor algorithm, the selection of the hyperparameter K involves testing different values and comparing their classification accuracy on the validation set. The value of K that yields the highest accuracy on the training set is chosen since the labels of the validation set are unknown in Experiment 2. The optimal K value was found to be 5 in Experiment 1 and 10 in Experiment 2. Table 2 presents the F1 scores of different models, with the adaptive probability learning model showing significantly higher classification accuracy compared to the other three models. This is because the model takes into account the dissimilarities between the training and test sets, and the extracted features exhibit high similarity between the two sets, indicating that the model captures general features well. On the other hand, the other three models perform well on the training set but poorly on the test set due to differences in data distribution. Additionally, Experiment 1 demonstrates significantly higher accuracy than Experiment 2, suggesting that the partially known labels in Experiment 1 help the model overcome distribution differences and achieve better classification accuracy. The difference in accuracy between Experiment 1 and Experiment 2 highlights the adaptive learning capability of the model. It is worth noting that the adaptive probability learning model outperforms the other three models in terms of this ability, indicating reduced reliance on actual data labels. Figure 5 illustrates the stability of the average class F1 scores [29] for the four models, indicating whether their classification accuracy varies with changes in event distribution. The adaptive probability learning model demonstrates highly consistent classification accuracy. In experiment 1, its accuracy consistently falls within the range of 0.90 to 0.95, while in experiment 2, it remains concentrated around 0.90. In contrast, the convolutional neural network exhibits more scattered accuracy, suggesting a greater vulnerability to variations in data type distribution. This can be attributed to its numerous parameters and reliance on substantial amounts of data for determining network weights. 
Consequently, in scenarios with limited samples and significant variations in event types, training results often differ, leading to fluctuating accuracy. While the support vector machine and K-nearest neighbor algorithm also display relatively stable classification accuracy, their overall performance level is low and not suitable for practical real-world scenarios. In conclusion, the proposed method is best suited for identifying faults in distribution networks. ## V Conclusion The scarcity of training samples is a major hurdle when it comes to identifying faults in distribution networks. To address this problem, the adaptive probability learning method utilizes a two-step approach. First, it extracts feature vectors by breaking down waveforms, and then it reduces the dimensionality through linear mapping. The model then solves an optimization problem to establish a relationship between errors in simulated and real data, with the goal of maximizing consistency probability during the reconstruction process. This ultimately results in the achievement of an optimal linear mapping. There are several advantages to using the adaptive probability learning method instead of other methods. One advantage is that it produces extracted features that are easy to interpret, which makes it easier to incorporate prior knowledge. Additionally, this method is able to make good use of simulated data during training, which helps overcome the issue of having limited samples when identifying faults in distribution networks. Finally, by incorporating field actual data, the model's performance can be further improved, allowing maintenance personnel to create a sample library right from the start.
2309.16554
Crewther's relation in different schemes
We examine Crewther's relation at high loop order in perturbative QCD and demonstrate how the relation is accommodated in gauge-parameter dependent schemes where the running of the gauge parameter has to be explicitly considered. Motivated by ensuring that the conformal properties of the relation are preserved at all the critical points of QCD, including the Banks-Zaks and its infra-red stable twin, we demonstrate the necessity of an additional term in the relation for describing gauge running in the minimal momentum subtraction scheme (mMOM) and argue for its inclusion for all gauge-parameter dependent schemes.
R. H. Mason, J. A. Gracey
2023-09-28T16:09:18Z
http://arxiv.org/abs/2309.16554v2
# Crewther's relation in different schemes ###### Abstract We examine Crewther's relation at high loop order in perturbative QCD and demonstrate how the relation is accommodated in gauge-parameter dependent schemes where the running of the gauge parameter has to be explicitly considered. Motivated by ensuring that the conformal properties of the relation are preserved at all the critical points of QCD, including the Banks-Zaks and its infra-red stable twin, we demonstrate the necessity of an additional term in the relation for describing gauge running in the minimal momentum subtraction scheme (mMOM) and argue for its inclusion for all gauge-parameter dependent schemes. ArXiv ePrint: 2309.16554 16th International Symposium on Radiative Corrections: Applications of Quantum Field Theory to Phenomenology (RADCOR2023) 28th May - 2nd June, 2023 Crieff, Scotland, UK Introduction Investigations of Crewther's relation have typically been undertaken in gauge-parameter independent schemes where all \(\beta\)-function coefficients are independent of the gauge parameter, as in \(\overline{\text{MS}}\)[1]. The relation, as stated for these schemes, connects two measurable quantities: the Adler D function (\(D\)) and the Bjorken sum rule (\(C\)), to the \(\beta\)-function (\(\beta\)) and a perturbative series which we will refer to as the Crewther series (\(K\)). This is described by the equations \[C(a)D(a) = d_{R}(1+\Delta_{\text{csb}}(a))\qquad\text{where}\qquad\Delta_{ \text{csb}}(a)=K(a)\frac{\beta(a)}{a}, \tag{1}\] where \(a=\frac{g^{2}}{16\,\pi^{2}}\). In [2] it was recognised there were no \(\mathcal{O}(a)\) corrections to the product and [3] codified the higher order corrections into the above form. This relation has since been verified in \(\overline{\text{MS}}\) to all available loop orders [4] as well as several other gauge-parameter independent schemes e.g. the V-scheme in [5], and arguments have been made as to its validity to all orders [6, 7]. A consequence of this decomposition is that at a fixed point the product reduces to the constant \(d_{R}\). For gauge-parameter independent schemes measurable quantities have a single coupling and therefore fixed points, which we label as \(a^{\infty}\), occur at the roots of the \(\beta\)-function such that \(\beta(a^{\infty})=0\), thus by Eq. (1) we have \(C(a^{\infty})D(a^{\infty})=d_{R}\). In practice this can only be said to be true to the order in truncation of the small coupling constant meaning when it is evaluated numerically this will not be an exact result. If the series \(K(a)\) were entirely unstructured the statement in Eq. (1) could be ensured for any product of two series if one took entirely arbitrary expansion coefficients of the Crewther series. However, additional structure on the coefficients is suggestive of deeper meaning. In particular, they only contain positive powers of the number of fermions \(N_{f}\) as well as of the colour factors, along with other properties described in [3]. When applied to gauge-parameter dependent schemes, such as mMOM, this structure failed except in particular choices of the gauge parameter [8, 9]. In this report we review the extension of Crewther's relation to gauge-parameter dependent schemes, suggested in [10], which adds a second term to \(\Delta_{\text{csb}}\) to include the running of the gauge parameter. In Section 2 we present and briefly derive this form from renormalization group arguments. 
Following this, in Section 3 we consider the resulting pair of Crewther series for the mMOM scheme (defined in [11]). Section 4 is devoted to arguing for the necessity of the additional term through numerical experimentation at the fixed points of the gauge-parameter dependent mMOM scheme, provided in [12]. Finally, Section 5 provides a discussion of the perspective gained through this investigation of the Crewther relation.

## 2 Scheme Change

\(\Delta_{\text{csb}}\) is related to the product of two measurable quantities by a constant scale and addition, and thus is itself a measurable quantity. Its value should therefore be invariant under a scheme change. The \(\beta\)-function however is not, and transforms under a change of scheme by the equation

\[\beta_{\overline{\text{MS}}}(a) = \beta_{s}(a_{s},\alpha_{s})\frac{\partial a(a_{s},\alpha_{s})}{\partial a_{s}}+\alpha_{s}\gamma_{\alpha}^{s}(a_{s},\alpha_{s})\frac{\partial a(a_{s},\alpha_{s})}{\partial\alpha_{s}}, \tag{2}\]

where \(a\) denotes the coupling in the \(\overline{\text{MS}}\) scheme, which is gauge-parameter independent, and \(a_{s}\) and \(\alpha_{s}\) are the coupling constant and gauge parameter in the target gauge-parameter dependent scheme \(s\). The running of the gauge parameter is described by \(\alpha_{s}\gamma_{\alpha}^{s}(a_{s},\alpha_{s})=\frac{d\alpha_{s}}{dl}\), where \(l=\ln\left(\frac{\mu^{2}}{\Lambda^{2}}\right)\). Consider now rewriting the conformal symmetry breaking term in terms of the couplings of the new scheme; starting from the original form as given in Eq. (1) we find

\[\Delta_{\rm csb}(a)=\Delta_{\rm csb}(a_{s},\alpha_{s})=K_{a}(a)\Bigg{|}_{\overline{\rm MS}\to s}\left[\beta_{s}(a_{s},\alpha_{s})\frac{\partial a(a_{s},\alpha_{s})}{\partial a_{s}}+\alpha_{s}\gamma_{\alpha}^{s}(a_{s},\alpha_{s})\frac{\partial a(a_{s},\alpha_{s})}{\partial\alpha_{s}}\right] \tag{3}\]

where we have defined \(K_{a}=\frac{K}{a}\), which is a perturbative series since \(K={\cal O}(a)\), and \(\overline{\rm MS}\to s\) implies use of the coupling constant conversion function to change from the coupling of the original scheme to the new couplings of the new scheme. The above equation suggests a new decomposition

\[\Delta_{\rm csb}(a_{s},\alpha_{s}) = K_{a}^{s}(a_{s},\alpha_{s})\beta_{s}(a_{s},\alpha_{s})+K_{\alpha}^{s}(a_{s},\alpha_{s})\alpha_{s}\gamma_{\alpha}^{s}(a_{s},\alpha_{s}), \tag{4}\]

whose series can be found through the relations

\[K_{a}^{s}(a_{s},\alpha_{s}) = \frac{\partial a}{\partial a_{s}}\left[K_{a}(a)\right]\Big{|}_{\overline{\rm MS}\to s}\quad\mbox{and}\quad K_{\alpha}^{s}(a_{s},\alpha_{s}) =\frac{\partial a}{\partial\alpha_{s}}\left[K_{a}(a)\right]\Big{|}_{\overline{\rm MS}\to s}. \tag{5}\]

While these arguments suggest that the Crewther relation should be extended to include a gauge parameter running term, they do not prove the necessity of the additional term, since a decomposition of the form given in Eq. (1) could still be possible for other schemes when purely considering the above derivation. In the following sections we will argue for the necessity of the additional term.

## 3 Crewther Series in the mMOM scheme

In [9, 10] it was found that when the original Crewther relation is applied naively to the mMOM scheme it fails at \({\cal O}(a^{3})\), except in the Landau (\(\alpha=0\)), anti-Yennie (\(\alpha=-3\)) and anti-Feynman (\(\alpha=-1\)) gauges, although these latter two fail again at \({\cal O}(a^{4})\).
This provides a strong argument against the original Crewther relation for mMOM and therefore for gauge-parameter dependent schemes in general, since we expect physically meaningful relations to be true in all gauges. With this in mind we begin our discussion of the Crewther series in the mMOM scheme by calculating the \(K_{a}\) and \(K_{\alpha}\) term in this scheme from the conversion functions found in Eq. (5), which using results from [8, 9, 14, 15], gives \[K_{a}^{\rm mMOM}(a,\alpha) = 16\zeta_{3}-14+\left[(-15552\zeta_{3}+13608)\alpha^{2}+(-31104 \zeta_{3}+27216)\alpha\right. \tag{6}\] \[+(-20736N_{f}+628416)\zeta_{3}+26784N_{f}-276480\zeta_{5}-483432 \right]\frac{a}{648}\] \[\left[\left[-52488\,\zeta_{3}+45927\right]\alpha^{3}+\left[104976 \,\zeta_{3}{}^{2}+\left(46656\,N_{f}-1768230\right)\zeta_{3}\right.\right.\] \[\left.\left.-60264\,N_{f}+622080\,\zeta_{5}+1317357\right]\alpha^ {2}+\left[-769824\,\zeta_{3}{}^{2}\right.\right.\] \[\left.+\left(93312\,N_{f}-1740204\right)\zeta_{3}-120528\,N_{f}+ 1244160\,\zeta_{5}+1813131\right]\alpha\] \[+(207360\,N_{f}-8215344)\,{\zeta_{3}}^{2}+(48384\,{N_{f}}^{2}-2782 848\,N_{f}+37896114)\zeta_{3}\] \[+(69120\,\zeta_{5}-112896)\,{N_{f}}^{2}+(-2073600\,\zeta_{5}+480418 4)N_{f}+5078400\,\zeta_{5}\] \[+4838400\,\zeta_{7}-43011419\left]\frac{a^{2}}{648}+{\cal O}(a^{3})\] and \[K_{\alpha}^{\rm mMOM}(a,\alpha) = -24(\alpha+1)\left(\zeta_{3}-\tfrac{7}{8}\right)a^{2}+\left[\Big{(} -5832\zeta_{3}+5103\Big{)}\frac{\alpha^{2}}{72}\right. \tag{10}\] \[+\Big{(}7776{\zeta_{3}}^{2}+(3456N_{f}-130980)\zeta_{3}-4464N_{f} +46080\zeta_{5}+97582\Big{)}\frac{\alpha}{72}\] \[\left.-396{\zeta_{3}}^{2}+\Big{(}3456N_{f}-64452\Big{)}\frac{ \zeta_{3}}{72}-62N_{f}+640\zeta_{5}+\frac{67153}{72}\right]a^{3}\] \[+O(a^{4}),\] where we have presented the SU(3) expression to reduce the size of the equations, and \(a\) and \(\alpha\) refer to the coupling in the mMOM scheme. Considering \(K_{\alpha}^{\rm mMOM}(a,0)\) we find the same Crewther series provided in [8, 9] for the case of the Landau gauge which is as expected because at \(\alpha=0\) under linear covariant gauge fixing we can ignore the \(\alpha\gamma_{\alpha}\) term in our Crewther decomposition and therefore we will be left with the \(K_{a}\) term alone. The \(\alpha=-1\) gauge can also be understood as the leading term in \(K_{\alpha}\), at \({\cal O}(a^{3})\) in \(\Delta_{\rm csb}\), has a factor of \(\alpha+1\) which disappears in this selected gauge. However, the next-to-leading order \(K_{\alpha}\) term does not have a similar factorisation. So the \(K_{\alpha}\) term cannot be ignored to \({\cal O}(a^{4})\) in the same gauge. If one were to calculate the series \(K_{a}\) and \(K_{\alpha}\) for the mMOM scheme directly from the product and Eq. (4) it is not certain you would arrive at the above series. Inspection of the equation tells us there is an ambiguity in the choice of the series where \(\bar{K}_{a}\) and \(\bar{K}_{\alpha}\) could be substituted for \(K_{a}\) and \(K_{\alpha}\) respectively, in Crewther's relation provided they obey the relations \[\bar{K}_{a}^{s}(F;a_{s},\alpha_{s}) = K_{a}^{s}(a_{s},\alpha_{s})-F(a_{s},\alpha_{s})\alpha_{s}\gamma_ {\alpha}^{s}(a_{s},\alpha_{s}),\] \[\bar{K}_{\alpha}^{s}(F;a_{s},\alpha_{s}) = K_{\alpha}^{s}(a_{s},\alpha_{s})+F(a_{s},\alpha_{s})\beta_{s}(a _{s},\alpha_{s}). \tag{11}\] where \(F(a_{s},\alpha_{s})\) is a generic perturbative series. We note that we have reverted to noting the general scheme as these relations are not specific to the mMOM. 
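For completeness, one can check in one line that this shift leaves \(\Delta_{\rm csb}\) unchanged, since the cross terms cancel:

\[\bar{K}_{a}^{s}\,\beta_{s}+\bar{K}_{\alpha}^{s}\,\alpha_{s}\gamma_{\alpha}^{s}=K_{a}^{s}\beta_{s}-F\,\alpha_{s}\gamma_{\alpha}^{s}\,\beta_{s}+K_{\alpha}^{s}\,\alpha_{s}\gamma_{\alpha}^{s}+F\,\beta_{s}\,\alpha_{s}\gamma_{\alpha}^{s}=\Delta_{\rm csb}(a_{s},\alpha_{s}).\]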
If we were able to preserve the original Crewther relation in these schemes, we would require the existence of a series \(F_{0}\) such that \(\bar{K}_{\alpha}^{s}(F_{0};a_{s},\alpha_{s})=0\). In the mMOM scheme this would require a series such that

\[F_{0}(a,\alpha)=-\frac{K_{\alpha}^{\rm mMOM}(a,\alpha)}{\beta^{\rm mMOM}(a,\alpha)}\approx-\frac{24(\alpha+1)(\zeta_{3}-\tfrac{7}{8})}{11-\tfrac{2}{3}N_{f}}+{\cal O}(a). \tag{12}\]

Again, this equation could be satisfied order by order if we did not enforce the additional constraints that make the series coefficients valid perturbative coefficients, as was found for the original coefficients in [3]. This suggests we cannot find a perturbative series \(K_{a}\) such that there is no term describing the gauge running for a general gauge parameter. In the next section we will discuss the importance of the new decomposition through numerical evaluation of the product at fixed points of the running.

## 4 Fixed Points

Within Crewther's relation we parametrise the conformal symmetry breaking by \(\Delta_{\rm csb}\), which should vanish when the system becomes invariant under changes of scale. These points exist when the running couplings of the theory become stationary, which for gauge-parameter independent schemes will be at the roots of the \(\beta\)-function; for gauge-parameter dependent schemes we add to this condition the requirement that \(\alpha\gamma_{\alpha}=0\)[13]. By inspection of the original form of this quantity given in Eq. (1), we see that it will go to zero for \(\beta=0\). However, our proposed extension given in Eq. (4) goes to zero only under both \(\beta=0\) and \(\alpha\gamma_{\alpha}=0\). We can attempt to identify which form of the Crewther relation is most accurate for gauge-parameter dependent schemes by evaluating the conformal symmetry breaking term at the roots of the \(\beta\)-function alone or at fixed points of both the \(\beta\)-function and \(\alpha\gamma_{\alpha}\). This can be done by evaluating the product of the Adler D function and the Bjorken sum rule at the fixed point, with the product truncated to the same order as the original series. However, due to issues of truncation, when evaluated at the fixed point we will not find that \(\Delta_{\rm csb}=0\) exactly; it will only vanish to the current order in truncation. Therefore, in order to understand the behaviour of this quantity better, we consider it at different loop orders to assess its convergence.

To begin with we consider Table 1, which provides the values of the product of the Adler D function and Bjorken sum rule evaluated at the fixed points of the two coupling system presented in [12]. The first column provides the loop order to which the \(\beta\)-function and \(\gamma_{\alpha}\) are taken when finding the fixed points, and we define the \(\mathcal{O}(a^{n})\) columns as the product of the Adler D function and Bjorken sum rule truncated to order \(n\) and evaluated at the fixed point. To ensure the perturbative nature of the fixed points we have considered them at the top end of the conformal window, with sixteen quark flavours, which provides the smallest critical coupling and therefore the most valid perturbative expansion at these points.
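As a rough guide to the size of the effect (a back-of-envelope estimate we add here, using only the leading-order terms quoted above, not a calculation taken from [10] or [12]): writing \(\gamma_{\alpha}(a,\alpha)=\gamma_{1}(\alpha)a+\mathcal{O}(a^{2})\), at a root \(a_{1}\) of the \(\beta\)-function alone the first term of Eq. (4) vanishes while the second survives, leaving

\[\Delta_{\rm csb}(a_{1},\alpha)\;\approx\;K_{\alpha}^{(2)}(\alpha)\,\alpha\,\gamma_{1}(\alpha)\,a_{1}^{3}\;=\;-24(\alpha+1)\left(\zeta_{3}-\tfrac{7}{8}\right)\alpha\,\gamma_{1}(\alpha)\,a_{1}^{3}.\]

For \(\alpha=1\) and \(N_{f}=16\) one has \(\gamma_{1}(1)=-\tfrac{17}{3}\), and with \(a_{1}\approx 0.0038\) this gives \(\Delta_{\rm csb}\sim 5\times 10^{-6}\), i.e. a deviation of the product from \(d_{R}=3\) of order \(10^{-5}\), consistent with the stalled accuracy that will be seen in Table 2 below.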
For the moment we will limit our attention to the Banks-Zaks fixed point [16, 17], which is the closest fixed point to the origin in the Landau gauge, found at \(a\sim 0.003\), as well as its twin, which is infra-red stable and has approximately the same coupling constant but gauge parameter \(\alpha\sim-3\)[18], as these points should provide the smallest truncation error.

\begin{table} \begin{tabular}{|c||c|c||c|c|} \hline L & \(a_{\infty}\) & \(\alpha_{\infty}\) & \(\mathcal{O}(a^{3})\) & \(\mathcal{O}(a^{4})\) \\ \hline 2 & 0.0033112583 & 0.0000000000 & 2.9999991596 & 3.0000039877 \\ \hline & 9.1803474173 & 2.4636080795 & 1271156.8083213258 & 17202735.3015072510 \\ \hline & 0.0032001941 & \(-\)3.0301823312 & 2.9999982468 & 3.0000012469 \\ \hline \hline 3 & 0.0031177883 & 0.0000000000 & 2.9999963264 & 3.0000001212 \\ \hline & 0.1279084604 & 1.9051106246 & 6.2952539870 & 10.1893903424 \\ \hline & 0.0031380724 & \(-\)3.0274210489 & 2.9999973439 & 3.0000001217 \\ \hline \hline 4 & 0.0031213518 & 0.0000000000 & 2.9999963720 & 3.0000001843 \\ \hline & 0.1902883419 & 0.0000000000 & 13.5399867931 & 66.1969134786 \\ \hline & 0.1162651496 & 0.5286066929 & 5.3930704057 & 11.8942763573 \\ \hline & 0.0031430130 & \(-\)3.0273541344 & 2.9999974127 & 3.0000002080 \\ \hline \hline 5 & 0.0031220809 & 0.0000000000 & 2.9999963814 & 3.0000001972 \\ \hline & 0.0577103776 & 0.0000000000 & 3.2818695828 & 3.7273436677 \\ \hline & 0.0031434144 & \(-\)3.0273765993 & 2.9999974183 & 3.0000002151 \\ \hline & 0.0502252330 & \(-\)3.8653031470 & 3.1912609578 & 3.2787374506 \\ \hline \end{tabular} \end{table} Table 1: Crewther product evaluated at the fixed points of [12] in mMOM at different fixed point loop orders \(L\). Notation used is from [10].

We note that if the difference of the product from \(d_{R}=3\) is a truncation error, we only expect improved convergence when both the loop order to which the fixed point is calculated and the loop order of \(\Delta_{\rm csb}\) are increased. This is reflected in the table, where each value in the \({\cal O}(a^{3})\) column differs from 3 by roughly the same \({\cal O}(10^{-6})\) amount, whereas, with the exception of the two-loop fixed point, in the \({\cal O}(a^{4})\) column the accuracy is in general \({\cal O}(10^{-7})\). Note that, as the loop order in \(\Delta_{\rm csb}\) cannot be increased with the fixed point loop order above this, we do not expect increased accuracy beyond this as we increase the fixed point loop order. This consistency is suggestive of the correct truncation error, and therefore fixed points of the two coupling theory provide the roots of \(\Delta_{\rm csb}\) to the order in truncation. This does not mean fixed points of the \(\beta\)-function alone will not provide similar accuracy, and so we will provide a brief comparison. Before this we should mention the other fixed points, further from the Gaussian fixed point. In particular, as the fixed point loop order is increased, these fixed points move towards the origin and thus the value of \(\Delta_{\rm csb}\) there decreases towards the expected result. However, note that in each case, except at the two loop fixed point, the \({\cal O}(a^{3})\) value is smaller than the \({\cal O}(a^{4})\) one. This is suggestive of truncation error outside of the region of perturbative reliability, and therefore these fixed points have only been included in the table for completeness. Table 2 provides the values of the roots of the \(\beta\)-function to different loop orders evaluated in the Feynman gauge \(\alpha=1\).
In doing this we have picked out the point closest to the origin at which the \(\beta\)-function vanishes; this has roughly the same coupling as the Banks-Zaks fixed point and so provides the best comparison.

\begin{table} \begin{tabular}{|c||c||c|c|} \hline L & \(a_{1}\) & \({\cal O}(a^{3})\) & \({\cal O}(a^{4})\) \\ \hline 2 & 0.0039840637 & 3.0000169021 & 3.0000244250 \\ \hline 3 & 0.0037731278 & 3.0000104128 & 3.0000164646 \\ \hline 4 & 0.0037925523 & 3.0000109619 & 3.0000171393 \\ \hline 5 & 0.0037946540 & 3.0000110219 & 3.0000172130 \\ \hline \end{tabular} \end{table} Table 2: Crewther product evaluated at the zeros of the \(\beta\)-function at different loop orders \(L\) in the mMOM scheme with \(\alpha=1\) such that \(\beta^{\rm mMOM}(a_{1},1)=0\).

By contrast with Table 1, the values here do not suggest improved convergence, either between the \({\cal O}(a^{3})\) and \({\cal O}(a^{4})\) columns or between the different fixed point loop orders. They remain at a stable \({\cal O}(10^{-5})\) away from 3. It appears therefore that the fixed point of the two coupling system provides better convergence towards zero in the conformal symmetry breaking term than the fixed points of the single coupling system. As a final check on this assumption we have plotted \(\Delta_{\rm csb}\) calculated at \(O(a^{4})\) in Figure 1, at the fixed point closest to the origin; the \(x\)-axis is \(\alpha\). Each line represents the \(\beta\)-function taken to a different loop order. The most obvious feature of the graph is the clear difference between the lines of the two loop fixed point and those of the higher loop orders. Focusing on the higher orders we see a clear cubic structure with the roots at \(\alpha\sim 0,\ -1\) and \(-3\), which are the three gauges of interest highlighted in [8, 9]. Figure 1(b) shows the convergence around \(\alpha=-3\); we see it is not exactly at the anti-Yennie gauge but rather in the vicinity of the Banks-Zaks twin point which was identified in [18] and investigated further in [12]. The Banks-Zaks is the \(\alpha=0\) root, and \(\alpha\sim-1\) appears to be the approximate zero of the \(K_{\alpha}\) series we found in Section 3. The \(\alpha=-3\) value identified in [8, 9] can thus be understood as the point near the Banks-Zaks twin; in fact, to leading order in the gauge parameter,

\[\gamma_{1}(-3)\ =\ \left[\ -\ \frac{1}{2}\alpha C_{A}\ +\ \frac{13}{6}C_{A}\ -\ \frac{4}{3}N_{f}T_{F}\right]\bigg{|}_{\alpha=-3}=\beta_{0}, \tag{21}\]

where \(\gamma_{\alpha}(a,\alpha)=\gamma_{1}(\alpha)a+\mathcal{O}(a^{2})\) and \(\beta(a,\alpha)=-\beta_{0}a^{2}+\mathcal{O}(a^{3})\). The Banks-Zaks twin infra-red stable fixed point at \(\alpha\approx-3\) appears to be the point where the anomalous dimension of the gauge parameter matches that of the coupling constant, as we demonstrate in [10]. We can therefore write Crewther's relation in the mMOM scheme to \(\mathcal{O}(a^{3})\) as

\[\Delta_{\text{csb}}(a,-3)=-K_{a}^{(0)}\beta_{0}a^{2}-[K_{a}^{(0)}\beta_{1}(-3)+(K_{a}^{(1)}+3K_{\alpha}^{(2)}(-3))\beta_{0}]a^{3} \tag{22}\]

where \(K_{a}^{(i)}\) is the \(\mathcal{O}(a^{i})\) coefficient of \(K_{a}\) and \(K_{\alpha}^{(j)}\) is the \(\mathcal{O}(a^{j})\) coefficient of \(K_{\alpha}\).
Relabelling the leading order \(K_{a}\) term,

\[K_{a}^{(1)}+3K_{\alpha}^{(2)}(-3)\to K_{a}^{(1)}, \tag{23}\]

we see the Crewther product obeys the original relation to \(\mathcal{O}(a^{3})\) in mMOM. In fact, since \(\gamma_{1}(\alpha)\) and \(\beta_{0}\) are scheme independent in the linear covariant gauge, this value will be found for all gauge-parameter dependent schemes with this gauge fixing, although analogous values for the gauge parameter have been found for the Curci-Ferrari and Maximal Abelian gauges using equivalent formal relations [10].

Figure 1: Plots of \(\Delta_{\text{csb}}\) calculated to order \(\mathcal{O}(a^{4})\) in the mMOM scheme for different \(\alpha\) values, with \(a\) selected as the minimum real, positive value of the coupling constant such that \(\beta_{\text{mMOM}}\) is zero for different loop orders. Note in the first graph the 3L, 4L and 5L curves are virtually indistinguishable.

## 5 Outlook

In this report we have focused on the mMOM scheme as an exemplar for analysing Crewther's relation in gauge-parameter dependent schemes, as the renormalization group functions of this scheme are known to the five-loop level; the key points of the analysis are applicable to other schemes and to other gauge fixing terms provided their \(\beta\)-function can be related to the \(\overline{\text{MS}}\) one by Eq. (2), as is discussed in more detail in [10]. This highlights the methodology with which we undertook this work: due to truncation, each scheme we consider provides an incomplete viewpoint of the underlying structure of the theory, so when we focus too much on a single scheme we may wrongly assume that the properties of this particular viewpoint apply to the theory as a whole. By considering properties in generality, or else in a variety of schemes, we unearth a more complete picture of the theory that may be obfuscated in any single finite order calculation.

To summarise, the Crewther relation, when considered in a gauge-parameter dependent scheme, should account for the running of the gauge parameter, and the equation is modified accordingly into the form of Eq. (4). We speculate that a natural extension of this relation to systems of \(n\) dynamical variables \(g_{i}\), where \(i=1,...,n\), would be:

\[\Delta^{s}_{\text{csb}}(g^{s}_{i})=\sum_{i}K^{s}_{g_{i}}(g^{s}_{i})\Big{(}\frac{dg^{s}_{i}}{dl}\Big{)}. \tag{23}\]

This equation is indicative of the renormalization group equation and so it is worth considering the equation

\[\Delta^{s}_{\text{csb}}(g^{s}_{i})=\frac{d}{dl}\kappa^{s}(g^{s}_{i})=\Big{(}\sum_{j}\partial^{s}_{g_{j}}\kappa^{s}(g^{s}_{i})\Big{)}\Big{(}\frac{dg^{s}_{j}}{dl}\Big{)} \tag{24}\]

where \(\partial^{s}_{g_{j}}=\frac{\partial}{\partial g^{s}_{j}}\). For our two-coupling theory this reduces to

\[\Delta^{s}_{\text{csb}}(a_{s},\alpha_{s})=\Big{(}\partial^{s}_{a}\kappa^{s}(a_{s},\alpha_{s})\Big{)}\beta^{s}(a_{s},\alpha_{s})+\Big{(}\partial^{s}_{\alpha}\kappa^{s}(a_{s},\alpha_{s})\Big{)}\alpha_{s}\gamma^{s}_{\alpha}(a_{s},\alpha_{s}). \tag{25}\]

Investigating this for the Crewther relation in our two-coupling theory, we find that provided the \(\kappa\) series in \(\overline{\text{MS}}\) is gauge-parameter independent, as one would expect for a perturbative series in this scheme, the above relation reduces trivially to Eq. (1) with

\[K^{\overline{\text{MS}}}_{a}(a_{\overline{\text{MS}}})=\partial^{\overline{\text{MS}}}_{a}\kappa^{\overline{\text{MS}}}(a_{\overline{\text{MS}}}). \tag{26}\]
Integrating this equation with respect to the coupling constant can then be used to define \(\kappa\) in the \(\overline{\text{MS}}\) scheme. If Eq. (24) holds then \(\kappa\) is a scheme independent quantity; therefore under a scheme transformation \(\partial^{\overline{\text{MS}}}_{a}\kappa^{\overline{\text{MS}}}\) will transform in the same way as was found for \(K^{\overline{\text{MS}}}_{a}\), and so if the relation holds in \(\overline{\rm MS}\) it should hold in all other schemes. This can be shown by directly applying a scheme transformation to the above equations to ensure consistency. The ambiguity laid out in Eq. (11) could then be understood as resulting from shifting \(\kappa\) by a conformally invariant quantity.

Beyond this, our analysis of the Crewther relation indicates that a wider class of treatments of the running in gauge-parameter dependent schemes may need modification. For example, any relation which codifies the running of the theory, or else provides a decomposition of a measurable, in terms of the \(\beta\)-function alone will likely need to be extended for gauge-parameter dependent schemes to include the running of the gauge parameter, e.g. [5, 19, 20].

## Acknowledgments

This work was carried out with the support of an EPSRC Studentship EP/R513271/1 (RHM) and the STFC Consolidated Grant ST/T000988/1 (JAG). For the purpose of open access, the authors have applied a Creative Commons Attribution (CC-BY) licence to any Author Accepted Manuscript version arising.
2309.09474
In Horizon Penetrating Coordinates: Kerr Black Hole Metric Perturbation Construction and Completion
We investigate the Teukolsky equation in horizon-penetrating coordinates to study the behavior of perturbation waves crossing the outer horizon. For this purpose, we use the null ingoing/outgoing Eddington-Finkelstein coordinates. The equation satisfied by the first derivative of the radial function is a Fuchsian differential equation with an additional regular singularity beyond those of the radial equation itself. The radial functions satisfy the physical boundary conditions without imposing any regularity conditions. We also observe that the Hertz-Weyl scalar equations preserve their angular and radial signatures in these coordinates. Using the angular equation, we construct the metric perturbation for a circularly orbiting perturber around a black hole in Kerr spacetime in a horizon-penetrating setting. Furthermore, we complete the missing metric pieces due to the mass M and angular momentum J perturbations. We also provide an explicit formula for the metric perturbation as a function of the radial part, its derivative, and the angular part of the solution to the Teukolsky equation. Finally, we discuss the importance of the extra singularity in the radial derivative for the convergence of the metric expansion.
Fawzi Aly, Dejan Stojkovic
2023-09-18T04:13:37Z
http://arxiv.org/abs/2309.09474v1
# In Horizon Penetrating Coordinates: Kerr Black Hole Metric Perturbation Construction and Completion

###### Abstract

We investigate the Teukolsky equation in horizon-penetrating coordinates to study the behavior of perturbation waves crossing the outer horizon. For this purpose, we use the null ingoing/outgoing Eddington-Finkelstein coordinates. The equation satisfied by the first derivative of the radial function is a Fuchsian differential equation with an additional regular singularity beyond those of the radial equation itself. The radial functions satisfy the physical boundary conditions without imposing any regularity conditions. We also observe that the Hertz-Weyl scalar equations preserve their angular and radial signatures in these coordinates. Using the angular equation, we construct the metric perturbation for a circularly orbiting perturber around a black hole in Kerr spacetime in a horizon-penetrating setting. Furthermore, we complete the missing metric pieces due to the mass \(M\) and angular momentum \(J\) perturbations. We also provide an explicit formula for the metric perturbation as a function of the radial part, its derivative, and the angular part of the solution to the Teukolsky equation. Finally, we discuss the importance of the extra singularity in the radial derivative for the convergence of the metric expansion.

## I Introduction

Most astrophysical black holes are expected to be rotating black holes [1; 2]. From the phenomenological side, it is thus of utmost importance to describe perturbations of the Kerr metric, which can then be used to study Hawking radiation [3; 4; 5], quasi-normal modes [6; 7; 8; 9], gravitational waves [10; 11; 12], and many other related phenomena in a rotating spacetime in the framework of General Relativity (GR). The Kerr spacetime is a stationary, axially symmetric, and asymptotically flat solution to Einstein's field equations in GR that describes the gravitational field around a rotating, uncharged black hole with two horizons in the non-extremal cases [13; 14; 15; 16; 17]. It also possesses a hidden symmetry encoded in the Killing-Yano tensor, in addition to the time and azimuthal Killing vectors [18; 19; 20]. The metric was originally derived by Roy P. Kerr in 1963 as an extension of the Schwarzschild solution, to which it reduces for zero spin [21; 22; 23]. The Kerr metric is usually expressed in oblate spheroidal-like coordinates known as the Boyer-Lindquist (BL) coordinates, which reduce to the Schwarzschild coordinates in the zero-spin limit. It is also worth mentioning that the geodesic equations can be separated in these coordinates, as they form an integrable system with four constants of motion: energy, axial angular momentum, mass, and Carter's constant [22]. Nevertheless, the Boyer-Lindquist coordinates are ill-defined at the Kerr black hole's horizons, akin to the Schwarzschild coordinates at the Schwarzschild black hole's horizon [22; 23; 24]. Lately, by studying the worldlines of freely falling observers, Sorge was able to generalize the Lemaitre coordinates to the Kerr spacetime; these are well defined at the horizons [25]. However, as we aim to study massless perturbations here, we find it more convenient to work in horizon-penetrating coordinates adapted to null geodesics, such as the ingoing and outgoing Eddington-Finkelstein (IEF/OEF) coordinates, which were constructed for the Kerr case a long time before the Lemaitre coordinates [26; 24].
By the time the Kerr solution was derived, black hole perturbation theory (BHPT) was already mature and often applied to spherically symmetric spacetimes such as the Schwarzschild one [27; 28; 29; 30]. It was then natural to extend the investigation to the Kerr spacetime [26]. In 2017, Chen and Stein were able to construct metric perturbations up to first order for the Near Horizon Extreme Kerr (NHEK) geometry [31] directly, by following an isometry-based approach [32] analogous to the one employed to perturb the Schwarzschild spacetime [30]. The metric was expressed in a factorized form and then decoupled thanks to the orthogonality of the symmetry-adapted basis. In 2023, Franchini also managed to decouple the linearized Einstein field equations, after employing a spherical harmonic decomposition, for a slowly rotating Kerr black hole up to second order in spin, in a way similar to the Schwarzschild perturbation scheme [33]. Franchini found generalized versions of both the famous Regge-Wheeler equation [28] and the Zerilli equation [29], which describe the odd and even perturbation modes, respectively, in the Schwarzschild spacetime. Remarkably, the angular mode mixing resulting from the non-zero spin of the black hole was handled by following the scheme provided in [34] for perturbations of spinning stars up to first order in spin. Unfortunately, the whole Kerr spacetime has not been perturbed in an isometry-based fashion so far, to the best of the authors' knowledge [33; 32]. Even in gauge-dependent and coordinate-dependent settings, it is not clear how to find the symmetry-adapted basis needed to achieve metric perturbation separability, nor how to proceed to decouple the linearized Einstein field equations of the Kerr spacetime.

Nevertheless, the study of gravitational, electromagnetic, and scalar massless perturbations of the Kerr spacetime is typically carried out using the Newman-Penrose (NP) formalism. In this approach, the Weyl tensor \(C_{\mu\nu\alpha\beta}\) is projected onto the four null tetrad legs \(e_{a}^{\mu}\), where the first two are chosen to lie along the repeated null directions of the Weyl tensor. The resulting projections are complex scalars known as the Weyl scalars \(\psi_{n}\), where \(n=0,1,2,3,4\). With ten degrees of freedom, they encode all the information in \(C_{\mu\nu\alpha\beta}\). As the Weyl tensor coincides with the Riemann tensor \(R_{\mu\nu\alpha\beta}\) in vacuum regions, perturbations of these scalars describe perturbations of the spacetime curvature. The projection operation can be applied to the tensors involved in the Einstein field equations to obtain the NP equations in terms of the Weyl scalars [35; 36]. Their first-order perturbation gives rise to coupled equations, but fortunately they can be decoupled, and the resulting equations admit solutions in a factorized form. Press and Bardeen had applied a perturbation scheme to the Schwarzschild case itself before Teukolsky worked out the Kerr spacetime [37; 38]. Wald later demonstrated that the majority of the information about gravitational perturbations is encoded in the \(\psi_{0}\) and \(\psi_{4}\) Weyl scalars [26]. Teukolsky's work resulted in a single master partial differential equation that describes all gravitational, electromagnetic, and scalar field perturbations. The equation is separable in the BL/IEF/OEF coordinates and in any coordinates related to them by Teukolsky's transformation [26].
After the separation of variables, the Teukolsky PDE reduces to two ODEs, the radial and the angular Teukolsky equations, which both belong to the confluent Heun ODE family [39]. Moreover, in the zero-spin limit, the Teukolsky equation reduces to the well-known Bardeen-Press equation, the master perturbation equation for the Schwarzschild spacetime obtained using the NP formalism [40]. As the angular and radial Teukolsky ODEs are confluent Heun equations, they can be solved using series expansions in hypergeometric functions and Coulomb wave functions [41], resulting in a three-term recursion relation. Mano, Suzuki, and Takasugi worked out such solutions, known as the MST solutions, both for the Teukolsky ODEs [42; 43] and for the Regge-Wheeler ODE, which is also a confluent Heun equation, akin to the Zerilli ODE [44]. The MST solution for the Teukolsky equation is practical for computational purposes in the low-frequency limit. The Teukolsky ODEs can also be solved using the confluent Heun functions obtained from Taylor and Laurent series expansions, which similarly lead to three-term recursion relations [39; 45]. Furthermore, in [39], Fiziev and Borissov managed to employ a special class of the confluent Heun functions, known as the Heun polynomials, for which the power series truncates at a finite power, by utilizing the homotopic transformations of the Heun ODEs. They reported that for spin \(s=\frac{1}{2}\) and electromagnetic \(s=1\) perturbations, only the separation constant is constrained in order to obtain those special solutions, allowing for a continuous frequency spectrum; however, for gravitational perturbations \(s=2\), there is an extra constraint on the frequencies themselves, hence those polynomial solutions are only valid for particular frequencies of the gravitational waves. Moreover, as the Teukolsky master equation preserves its singular structure under the transformation to the IEF/OEF coordinates [24], the radial Teukolsky equation is expected to still belong to the confluent Heun family. Nonetheless, the solution to this boundary value problem should not require imposing regularity at the horizon, in contrast to the case of the BL coordinates [46].

Yet, it is desirable to obtain the explicit form of the metric perturbation from the curvature perturbation up to a gauge. This problem is known in the literature as the metric reconstruction problem [36]. For instance, in self-force analysis [36], the metric perturbation is needed for further computations. In [47], Lousto and Whiting investigated the influence of radiation on a particle orbiting a massive rotating black hole. The reconstruction can be achieved in the outgoing/ingoing radiation gauge (ORG/IRG) through a procedure developed by Chrzanowski, Cohen, Kegeles, and Wald, known as the CCKW procedure [48; 49]. Initially, the CCKW procedure relied on the postulate that the perturbation metric could be obtained using Hertz-like potentials. This postulate was suggested by Chrzanowski and Cohen, and later by Kegeles, for gravitational and electromagnetic perturbations, respectively [48; 49]. Wald showed that the success of this technique follows from the adjoint structure of the Teukolsky equation itself [50]. In [50], Wald also showed that not all possible perturbations are contained within the perturbed \(\psi_{0}\) and \(\psi_{4}\). For instance, the mass \(M\) and angular momentum \(J\) of the black hole itself can be perturbed; those perturbations are not captured by the CCKW procedure.
This leads to another problem known as the completion problem [51], where the missing parts of the full metric perturbation are investigated. Luckily, Wald's theorem proves that the four missing parts, up to a gauge, can be constructed from the variations in the mass \(M\), angular momentum \(J\), C-metric acceleration \(\alpha_{C}\), or NUT charge \(q_{NUT}\) of the black hole. The CCKW procedure is well defined for vacuum spacetimes. If the perturber is located at a constant radius (e.g., a particle in a circular orbit around the Kerr black hole), this confines the non-vacuum region to a single hypersurface \(r=const\) in the spacetime [52]. However, if the perturber is also moving radially (e.g., a particle moving in an elliptic orbit), then the procedure runs into trouble, as the ORG/IRG are singular in non-vacuum regions; consequently, the CCKW procedure is ill-defined there. Nevertheless, the metric construction for elliptical orbits was tackled in [46; 51]: the authors adopted the method of extending the homogeneous solutions to confine the singularity to the trajectory of the perturber, relying on the fact that this singularity exists only on the trajectory in the \((t,r)\) hypersurfaces, so that, in the frequency domain, it is as if there were only two hypersurfaces with constant radial coordinate. Still, generically, working in radiation gauges, as the CCKW procedure requires, constrains the matter sources for which the metric can be constructed. Handling arbitrary matter sources was approached in [53] through the use of a correction term that ensures the constructed metric perturbation satisfies the linearized Einstein field equations. Moreover, a method in the Lorenz gauge parallel to the CCKW procedure was recently introduced in [54] that overcomes the obstacles brought by the radiation gauges. This is not the only way around the CCKW procedure: in [55], the researchers constructed the metric in the ORG directly, without intermediate Hertz-like potentials. Furthermore, taking this as a starting point, they were able to study second-order gravitational perturbations, which lead to a Teukolsky equation at second order with a source term quadratic in the first-order perturbation.

There has been recent progress in the construction of the full perturbation metric and in addressing the completion problem in the BL coordinates and Kinnersley tetrads for perturbed orbits [51; 46; 52]. However, this formalism is not suitable to describe perturbations in the near-horizon and across-horizon regions. Instead, a null rotation of the tetrads is typically used to impose a regularity condition. The master perturbation equation has also been written in horizon-penetrating coordinates and tetrads, such as the IEF/OEF coordinates [24]. In these coordinates, the tetrad itself is horizon-penetrating, which eliminates the need for imposing regularity conditions. In this work, we will investigate the leading-order correction to the Kerr metric due to a perturber circularly orbiting the black hole; the analysis will be conducted in a horizon-penetrating tetrad and coordinates using the CCKW procedure in radiation gauges, for simplicity. Consequently, the full metric is expected to be regular at the horizons, as the background metric is written in the IEF/OEF coordinates and the metric perturbation is constructed in a horizon-penetrating setting. The authors believe this to be vital for performing calculations to study near-horizon phenomena in which the full metric is needed. The structure of this paper is as follows.
In Section II we will go through the underlying mathematical tools needed for the metric construction and completion. We will also introduce the coordinates we are interested in, as well as the tetrads and the NP scalar representations in these coordinates. In Section III, the Teukolsky equation will be introduced, and the separation of variables will be carried out in the new coordinates and tetrads. Then, we will study the radial equation, its radial derivative, and its boundary conditions. We will solve the inhomogeneous radial equation using the Green's function method. In Section IV, we will apply the CCKW procedure, starting from the Hertz-Weyl scalar defining equations and then algebrizing the angular equation in the IRG. Finally, in Section V we will construct the metric and add the missing pieces up to constant coefficients.

## II Preliminary

For our purpose, we will follow the definitions given in [56; 24] for both the coordinate transformations and the tetrad definitions. One may observe that the coordinates in these definitions are slightly different from the usual \(U(V)\) time coordinates for the IEF (OEF) coordinates given by Teukolsky [57]. The reason for that is just convenience, while the physics remains the same. We will follow the same conventions as in [56; 24], and also adopt the geometric units \(c=G=1\), along with the metric signature \((+,-,-,-)\).

### Coordinates

In our analysis, we will use the BL coordinates \((t,r,\theta,\phi)\), the IEF coordinates \((\tilde{t},r,\theta,\tilde{\phi})\), and the OEF coordinates \((\hat{t},r,\theta,\hat{\phi})\). We follow [24] and define the latter two from the first through the following transformations in differential form:
\[dr^{*}=\frac{r^{2}+a^{2}}{\triangle}dr, \tag{1}\]
\[\begin{array}{c}dr=dr,\\ d\theta=d\theta,\end{array} \tag{2}\]
\[\begin{array}{c}d\tilde{t}=dt-dr+dr^{*},\\ d\tilde{\phi}=d\phi+\frac{a}{\triangle}dr,\end{array} \tag{3}\]
\[\begin{array}{c}d\hat{t}=dt+dr-dr^{*},\\ d\hat{\phi}=d\phi-\frac{a}{\triangle}dr,\end{array} \tag{4}\]
where \(\triangle=r^{2}-2Mr+a^{2}\).

### Metric

Accordingly, the metric in the BL, IEF, and OEF coordinates, respectively, takes the following form:
\[\begin{split} ds^{2}&=(1-2Mr/\Sigma)dt^{2}-(\Sigma/\triangle)dr^{2}-\Sigma d\theta^{2}\\ &+\left(4Mar\sin^{2}\theta/\Sigma\right)dtd\varphi\\ &-\sin^{2}\theta\left(r^{2}+a^{2}+2Ma^{2}r\sin^{2}\theta/\Sigma\right)d\varphi^{2},\end{split} \tag{5}\]
\[\begin{split} ds^{2}=&(1-2Mr/\Sigma)d\tilde{t}^{2}-(1+2Mr/\Sigma)dr^{2}-\Sigma d\theta^{2}\\ &-\sin^{2}\theta\left(r^{2}+a^{2}+2Ma^{2}r\sin^{2}\theta/\Sigma\right)d\tilde{\phi}^{2}\\ &-(4Mr/\Sigma)d\tilde{t}dr+\left(4Mra\sin^{2}\theta/\Sigma\right)d\tilde{t}d\tilde{\phi}\\ &+2a\sin^{2}\theta(1+2Mr/\Sigma)drd\tilde{\phi},\end{split} \tag{6}\]
\[\begin{split} ds^{2}=&(1-2Mr/\Sigma)d\hat{t}^{2}-(1+2Mr/\Sigma)dr^{2}-\Sigma d\theta^{2}\\ &-\sin^{2}\theta\left(r^{2}+a^{2}+2Ma^{2}r\sin^{2}\theta/\Sigma\right)d\hat{\phi}^{2}\\ &+(4Mr/\Sigma)d\hat{t}dr+\left(4Mra\sin^{2}\theta/\Sigma\right)d\hat{t}d\hat{\phi}\\ &+2a\sin^{2}\theta(1+2Mr/\Sigma)drd\hat{\phi},\end{split} \tag{7}\]
where \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\). The invariance of the metric under simultaneous \((t,\phi)\) parity is not manifest in the IEF or OEF coordinates. Instead, using the coordinate transformation equations it can be shown that the IEF and OEF coordinates are mapped to one another, though with negative time and azimuthal angle:
\[\begin{split} d\hat{t}&=-d\tilde{t},\\ d\hat{\phi}&=-d\tilde{\phi}.\end{split} \tag{8}\]
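For later reference, Eq. (1) can be integrated in closed form by partial fractions. The following is a minimal sketch of ours (assuming geometric units and the non-extremal case \(a<M\)) evaluating the resulting tortoise coordinate:

```python
import numpy as np

def r_star(r, M, a):
    """Kerr tortoise coordinate r*(r) from Eq. (1), up to an additive constant."""
    rp = M + np.sqrt(M**2 - a**2)      # outer horizon r_+
    rm = M - np.sqrt(M**2 - a**2)      # inner horizon r_-
    # Partial fractions: (r^2 + a^2)/Delta = 1 + 2M r_+/((r_+ - r_-)(r - r_+))
    #                                          - 2M r_-/((r_+ - r_-)(r - r_-))
    return (r + 2*M*rp/(rp - rm)*np.log(np.abs(r - rp))
              - 2*M*rm/(rp - rm)*np.log(np.abs(r - rm)))

# r* -> -infinity as r -> r_+ from outside; r* ~ r + 2M ln r at large r.
print(r_star(np.array([1.5, 10.0, 100.0]), M=1.0, a=0.9))
```

The logarithmic divergence at \(r_{+}\) is what pushes the horizon to \(r^{*}\rightarrow-\infty\) in the boundary-condition analysis of Section III.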
### Tetrads

In the case of the BL and OEF coordinates, the Kinnersley tetrads will be used. The tetrads take the following form in each case, respectively:
\[\begin{split} l^{\mu}&=\left[\left(r^{2}+a^{2}\right)/\triangle,1,0,a/\triangle\right],\\ n^{\mu}&=\left[r^{2}+a^{2},-\triangle,0,a\right]/(2\Sigma),\\ m^{\mu}&=\left[ia\sin\theta,0,1,i/\sin\theta\right]/(\sqrt{2}(r+ia\cos\theta)),\end{split} \tag{9}\]
\[\begin{split} l^{\mu}&=\left[1,1,0,0\right],\\ n^{\mu}&=\left[\frac{\triangle}{2\Sigma}\left(1+\frac{4Mr}{\triangle}\right),-\frac{\triangle}{2\Sigma},0,\frac{a}{\Sigma}\right],\\ m^{\mu}&=\left[ia\sin\theta,0,1,\frac{i}{\sin\theta}\right]/(\sqrt{2}(r+ia\cos\theta)).\end{split} \tag{10}\]
When working in the IEF coordinates, we will use the usual Kinnersley tetrads after applying a null rotation of the third kind, rescaling \(l^{\mu}\) by \(\triangle\) and dividing \(n^{\mu}\) by a factor of \(\triangle\). Using these modified Kinnersley tetrads, we expect that \(\epsilon\), which was set to zero in Teukolsky's paper [26] by the null rotation freedom, will in general be non-zero. In these coordinates, the tetrads take the following form:
\[\begin{split} l^{\mu}&=[\triangle+4Mr,\triangle,0,2a],\\ n^{\mu}&=\left[\frac{1}{2\Sigma},-\frac{1}{2\Sigma},0,0\right],\\ m^{\mu}&=\left[ia\sin\theta,0,1,\frac{i}{\sin\theta}\right]/(\sqrt{2}(r+ia\cos\theta)).\end{split} \tag{11}\]

### NP Scalars

The non-vanishing NP scalars take the same form in all coordinates:
\[\begin{split}\beta&=\frac{\cot\theta}{2\sqrt{2}(r+ia\cos\theta)},\\ \pi&=\frac{ia\sin\theta}{\sqrt{2}(r-ia\cos\theta)^{2}},\\ \tau&=\frac{-ia\sin\theta}{\sqrt{2}\Sigma},\\ \alpha&=\pi-\beta^{*},\\ \psi_{2}&=\frac{-M}{(r-ia\cos\theta)^{3}},\end{split} \tag{12}\]
where the complex conjugate is indicated by the \(*\) symbol. In the BL and OEF coordinates, the remaining non-vanishing NP quantities take the form
\[\begin{split}\rho&=\frac{-1}{r-ia\cos\theta},\\ \mu&=\frac{\triangle}{\Sigma}\frac{-1}{2(r-ia\cos\theta)},\\ \gamma&=\mu+\frac{r-M}{2\Sigma}.\end{split} \tag{13}\]
On the other hand, in the IEF coordinates, the remaining non-vanishing NP quantities are
\[\begin{split}\epsilon&=r-M,\\ \gamma&=\mu=-\frac{1}{2}\frac{r+ia\cos\theta}{\Sigma^{2}},\\ \rho&=-(r+ia\cos\theta)\frac{\triangle}{\Sigma}.\end{split} \tag{14}\]
We can see that the coordinates, tetrads, and NP quantities in the IEF and OEF coordinates are well-behaved at the horizon, except for \(\rho\) in the IEF and \(\mu\) in the OEF coordinates.

## III Solving Teukolsky master equation

### Teukolsky Master Equation

As mentioned in the introduction, the Teukolsky equations are derived within the NP formalism, which makes them coordinate invariant. Once a set of coordinates and tetrads is chosen, the NP equations are expressed in those coordinates and in the chosen tetrad. The two Teukolsky equations of interest for us are the ones defining \(\psi_{0}\) and \(\psi_{4}\). They have the following form in the NP formalism [58]:
\[\begin{split}&\left[\left(D-3\epsilon+\epsilon^{*}-4\rho-\rho^{*}\right)\left(\Delta-4\gamma+\mu\right)\right.\\ &\left.-\left(\delta+\pi^{*}-\alpha^{*}-3\beta-4\tau\right)\left(\delta^{*}+\pi-4\alpha\right)-3\psi_{2}\right]\psi_{0}\\ &=4\pi T_{0},\end{split} \tag{15}\]
\[\begin{split}&\left[\left(\Delta+3\gamma-\gamma^{*}+4\mu+\mu^{*}\right)\left(D+4\epsilon-\rho\right)\right.
\\ &\left.-\left(\delta^{*}-\tau^{*}+\beta^{*}+3\alpha+4\pi\right)\left(\delta-\tau+4\beta\right)-3\psi_{2}\right]\psi_{4}\\ &=4\pi T_{4},\end{split} \tag{16}\]
where the four tetrad derivatives are
\[\begin{split} D&=l^{\mu}\partial_{\mu},\\ \Delta&=n^{\mu}\partial_{\mu},\\ \delta&=m^{\mu}\partial_{\mu},\\ \delta^{*}&=m^{*\mu}\partial_{\mu}.\end{split} \tag{17}\]
We now consider the master Teukolsky equation for \(|s|=2\). The Teukolsky equation in the BL coordinates is defined as [26]
\[\begin{split}&\left\{\left[\frac{\left(r^{2}+a^{2}\right)^{2}}{\Delta}-a^{2}\sin^{2}\theta\right]\frac{\partial^{2}}{\partial t^{2}}-2s\left[\frac{M\left(r^{2}-a^{2}\right)}{\Delta}-r-ia\cos\theta\right]\frac{\partial}{\partial t}+\frac{4Mar}{\Delta}\frac{\partial^{2}}{\partial t\partial\phi}\right.\\ &\left.-\Delta^{-s}\frac{\partial}{\partial r}\left(\Delta^{s+1}\frac{\partial}{\partial r}\right)-\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)-2s\left[\frac{a(r-M)}{\Delta}+\frac{i\cos\theta}{\sin^{2}\theta}\right]\frac{\partial}{\partial\phi}\right.\\ &\left.+\left[\frac{a^{2}}{\Delta}-\frac{1}{\sin^{2}\theta}\right]\frac{\partial^{2}}{\partial\phi^{2}}+\left(s^{2}\cot^{2}\theta-s\right)\right\}{}_{s}\psi_{lm}=4\pi\Sigma T_{s}. \tag{18}\]
In the IEF and OEF coordinates, the same equation takes the following form [24]:
\[\begin{split}&\left\{\left[\Sigma+2Mr\right]\frac{\partial^{2}}{\partial t^{2}}\pm\left[2(s\mp 1)M+2s(r+ia\cos\theta)\right]\frac{\partial}{\partial t}\mp 4Mr\frac{\partial^{2}}{\partial t\partial r}\mp 2a\frac{\partial^{2}}{\partial t\partial\phi}\right.\\ &\left.-\Delta^{s}\frac{\partial}{\partial r}\left(\triangle^{-s+1}\frac{\partial}{\partial r}\right)-\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial}{\partial\theta}\right)\mp 2s\left(\frac{i\cos\theta}{\sin^{2}\theta}\right)\frac{\partial}{\partial\phi}\right.\\ &\left.-\frac{1}{\sin^{2}\theta}\frac{\partial^{2}}{\partial\phi^{2}}+\left(s^{2}\cot^{2}\theta\pm s\right)\right\}{}_{s}\psi_{lm}=4\pi\Sigma T_{s}^{\pm}, \tag{19}\]
where the upper sign refers to the IEF and the lower sign to the OEF coordinates. The quantity \({}_{s}\psi_{lm}\) is defined as
\[{}_{s}\psi_{lm}=\begin{cases}(r-ia\cos\theta)^{4}\psi_{4}&s=-2,\\ \psi_{0}&s=+2.\end{cases} \tag{20}\]
From equations (8) and (16), it can easily be shown that \(\psi_{0}\) in the IEF coordinates gets mapped to \(\psi_{4}\) in the OEF coordinates and vice versa.

### Separation

In [57] and [26], Teukolsky conjectured that separability is preserved under coordinate transformations of the form
\[\begin{split}\bar{r}&=h(r),\\ \bar{\theta}&=j(\theta),\\ \bar{t}&=t+f_{1}(r)+f_{2}(\theta),\\ \bar{\varphi}&=\varphi+g_{1}(r)+g_{2}(\theta).\end{split} \tag{21}\]
Using a Fourier decomposition in time and azimuthal angle, the master equation can be separated in the IEF and OEF coordinates with the following ansatz:
\[{}_{s}\psi_{lm}=e^{-i\omega t}e^{im\phi}{}_{s}S_{lm}(\theta)\,{}_{s}R_{lm}(r). \tag{22}\]
This yields the same angular equation as the one in the BL coordinates,
\[\left\{\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\frac{d}{d\theta}\right)-\frac{(m+s\cos\theta)^{2}}{\sin^{2}\theta}-2a\omega s\cos\theta\right.
\left.+a^{2}\omega^{2}\cos^{2}\theta+s+{}_{s}A_{lm}\right\}{}_{s}S_{lm}(\theta)=0. \tag{23}\]
The radial equation has the following forms in the IEF and OEF coordinates, respectively:
\[\begin{split}\triangle\left\{\frac{\left[-\bar{\lambda}+\omega^{2}(\triangle+4Mr)+2iM(s-1)\omega+2irs\omega-2s\right]}{\triangle}\right.&\\ \left.+\frac{2(iam+(1-s)(r-M)-2iMr\omega)}{\triangle}\frac{d}{dr}+\frac{d^{2}}{dr^{2}}\right\}{}_{s}R_{lm}(r)&={}_{s}T_{lm}^{+},\end{split} \tag{24}\]
\[\begin{split}\triangle\left\{\frac{\left[-\bar{\lambda}+\omega^{2}(\triangle+4Mr)+2iM(s+1)\omega+2irs\omega\right]}{\triangle}\right.&\\ \left.+\frac{2(iam-(1+s)(r-M)-2iMr\omega)}{\triangle}\frac{d}{dr}+\frac{d^{2}}{dr^{2}}\right\}{}_{s}R_{lm}(r)&={}_{s}T_{lm}^{-},\end{split} \tag{25}\]
where \(\lambda\) and \(\bar{\lambda}\) are defined below:
\[\begin{split}\lambda&\equiv A+a^{2}\omega^{2}-2am\omega,\\ \bar{\lambda}&\equiv\lambda+2am\omega.\end{split} \tag{26}\]
The radial source term is defined by expanding the source terms of the corresponding Teukolsky equation in the spin-weighted angular harmonics.

### Radial Equation

It can easily be seen that the radial equation of the Teukolsky master equation has three singular points, at \(r=\{r_{-},r_{+},r\rightarrow\infty\}\), of rank \(\{1,1,2\}\) respectively. The first two singularities, at the horizons, are regular, while the one at infinity is irregular. Furthermore, the following transformation puts the radial equation written in the IEF/OEF coordinates into the regular form of the confluent Heun differential equation (CHE):
\[R_{\pm}=\hat{R}_{\pm}e^{\mp i\omega r}. \tag{27}\]
This is not surprising, as the transformation from the BL coordinates to the Kerr ingoing/outgoing coordinates does not alter the radial coordinate; at the same time, the reader can check, by applying the chain rule, that the redefinition of the azimuthal and temporal coordinates does not change the singular structure of the radial equation either. It should be noted that all confluent Heun ODEs are interrelated through a radial coordinate transformation, as outlined in [41]. In other words, there exists a radial coordinate transformation that is equivalent to the transformations performed in the azimuthal and temporal coordinates as given by equations (3) and (4). The coefficients of the second and first derivatives of the dependent variable are second-degree polynomials in \(r\), while the coefficient of the dependent variable itself is a first-degree polynomial:
\[P(r)\hat{R}^{\prime\prime}_{+}(r)+\tilde{P}(r)\hat{R}^{\prime}_{+}(r)+\bar{P}(r)\hat{R}_{+}(r)=0, \tag{28}\]
where
\[\begin{split}\bar{P}(r)&=-\kappa-2s+2ir\omega(2s-1),\\ \tilde{P}(r)&=-2ia^{2}\omega+2iam+2M(s-1)+2r(1-s)-2i\omega r^{2},\\ P(r)&=\triangle(r).\end{split} \tag{29}\]
Similarly, we have
\[Q(r)\hat{R}^{\prime\prime}_{-}(r)+\tilde{Q}(r)\hat{R}^{\prime}_{-}(r)+\bar{Q}(r)\hat{R}_{-}(r)=0, \tag{30}\]
where
\[\begin{split}\bar{Q}(r)&=-\kappa+2i\omega r(2s+1),\\ \tilde{Q}(r)&=2ia^{2}\omega-2iam-2M(s+1)+2r(s+1)+2i\omega r^{2},\\ Q(r)&=\triangle(r),\end{split} \tag{31}\]
and all of \(Q,\tilde{Q},\bar{Q},P,\tilde{P},\bar{P}\) are polynomials in \(r\). Further investigation is needed to solve these equations using series methods within the region \(r_{-}<r<\infty\) for a better understanding of the behavior of gravitational perturbations after crossing the outer horizon.
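In the meantime, the homogeneous equation is straightforward to explore numerically. The following is a minimal sketch of ours (not from the paper) that integrates Eq. (28) with the coefficients of Eq. (29) outward from just outside the outer horizon using SciPy; all parameter values, including the constant \(\kappa\) appearing in Eq. (29), are illustrative assumptions, and the initial data are only a crude stand-in for the true horizon boundary condition discussed below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only: mass, spin, spin weight, azimuthal number,
# frequency, and the constant kappa of Eq. (29).
M, a, s, m, omega, kappa = 1.0, 0.7, -2, 2, 0.5, 4.0
rp = M + np.sqrt(M**2 - a**2)          # outer horizon r_+

def rhs(r, y):
    """First-order form of Eq. (28): Delta R'' + Ptil R' + Pbar R = 0."""
    R, dR = y
    Delta = r**2 - 2*M*r + a**2
    Pbar = -kappa - 2*s + 2j*r*omega*(2*s - 1)
    Ptil = (-2j*a**2*omega + 2j*a*m + 2*M*(s - 1)
            + 2*r*(1 - s) - 2j*omega*r**2)
    return [dR, -(Ptil*dR + Pbar*R)/Delta]

# Complex initial data just outside r_+; R = 1, R' = 0 is a placeholder only.
sol = solve_ivp(rhs, [rp + 1e-2, 30.0], [1.0 + 0j, 0.0 + 0j],
                rtol=1e-9, atol=1e-12)
print(sol.y[0, -1])                    # hat-R_+ at the outer boundary
```

Such direct integration is no substitute for the series analysis called for above, but it gives a quick picture of the solution between the outer horizon and the wave zone.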
### First Derivative of the Radial Function

We need to understand the behavior of the first derivative of the radial function in order to study the perturbed metric expansion. For this purpose, it is more convenient to put the CHE in the canonical form [41]. For simplicity, we only tackle the radial equation in the IEF coordinates in this section:
\[\hat{R}^{\prime\prime}(r)+\hat{R}^{\prime}(r)\left(\frac{\alpha}{r-1}+\frac{\gamma}{r}+\epsilon\right)+\frac{(\xi r-\beta)}{(r-1)r}\hat{R}(r)=0. \tag{32}\]
Here, \(r\rightarrow\frac{r-r_{-}}{r_{+}-r_{-}}\); we keep calling the rescaled independent variable \(r\), hoping that this will not be a source of confusion. The parameters in the equation above are defined as follows:
\[\begin{split}\epsilon&=4i\sqrt{-a^{2}+M^{2}}\omega,\\ \gamma&=1-s+2iM\omega-\frac{i\left(am-2M^{2}\omega\right)}{\sqrt{-a^{2}+M^{2}}},\\ \alpha&=1-s+2iM\omega+\frac{i\left(am-2M^{2}\omega\right)}{\sqrt{-a^{2}+M^{2}}},\\ \xi&=4\sqrt{-a^{2}+M^{2}}\omega(-i+4M\omega),\\ \beta&=\omega\left(2am+2iM(-1+s)+a^{2}\omega\right)-2M\omega(-i+4M\omega)\\ &\quad+2\sqrt{-a^{2}+M^{2}}\omega(-i+4M\omega)+\kappa.\end{split} \tag{33}\]
After some algebra, the equation governing the first derivative of the radial function, labeled \(u(r)\) below, can be written as
\[\begin{split}& u^{\prime\prime}(r)+u^{\prime}(r)\left(\frac{\alpha+1}{r-1}-\frac{1}{\frac{\beta}{\xi}-r}+\frac{\gamma+1}{r}+\epsilon\right)\\ &+u(r)\left(\frac{A}{r-1}+\frac{B}{r-\frac{\beta}{\xi}}+\frac{C}{r}\right)=0,\end{split} \tag{34}\]
where
\[\begin{split} A&=\gamma+\epsilon+\beta\left(-1+\frac{\alpha}{\beta-\xi}\right)+\xi,\\ B&=\alpha-\epsilon-\frac{\gamma\xi}{\beta}+\frac{\alpha\beta}{-\beta+\xi},\\ C&=-\alpha+\beta-\gamma+\epsilon+\frac{\gamma\xi}{\beta}.\end{split} \tag{35}\]
In [59], it was shown that this equation has one additional regular singular point, at \(r=\frac{\beta}{\xi}\). If this point does not coincide with \(\{0,1,\infty\}\), then the radius of convergence of any function written in terms of the radial solution and its derivative is given by the intersection of their two domains of convergence. Thus, we need to account for this in the metric expansion. This fact will be crucial once we try to expand the metric perturbation in the radial solution and its derivative, as we will discuss later. Moreover, this new ODE has an irregular singularity of rank 2 at infinity besides three regular singular points at \(r=\left\{0,1,\frac{\beta}{\xi}\right\}\); hence it does not belong to the Heun family, which has a similar singular structure but with a regular singularity at infinity.

### Black hole boundary conditions

We will use the Green's function method to write the inhomogeneous radial solution in terms of the homogeneous radial solutions obeying the boundary conditions imposed on the radial part. There are two physical boundary conditions that must be satisfied: at the horizon \(r=r_{+}\) and in the asymptotically flat region \(r\rightarrow\infty\). For example, an observer near the horizon should not see anything special occurring there. This requires the coordinates, tetrads, and radial part of the Weyl scalars to be regular at the horizon. Similarly, an observer at infinity should expect to receive a spherical wave with the same frequency as the frequency of the perturber. We study the homogeneous radial equation, which asymptotically has wave solutions, by following [26]. We transform the radial equation to a general harmonic oscillator equation by transforming the dependent and independent variables before studying any limits. The independent variables are given by equation (1), while the dependent ones are given below.
We include the subscript \(s\) since the functions \(f\) will be \(s\) dependent:
\[{}_{s}R_{lm}(r)=Y(r)f(r), \tag{36}\]
where the defining equation for \(f(r)\) is
\[f_{\pm,r*}+\eta_{\pm}f_{\pm}=0. \tag{37}\]
Using these definitions, the equation for \(Y(r)\) becomes
\[Y_{r*r*}+\left\{\frac{\beta_{\pm}\triangle}{\left(\triangle-2Mr\right)^{2}}-\eta_{\pm}^{2}-\frac{\triangle\eta_{\pm}^{\prime}}{\triangle-2Mr}\right\}Y=0, \tag{38}\]
while \(\eta_{\pm}\) is defined as
\[\eta_{\pm}(r)\equiv\frac{M\left(a^{2}-r^{2}\right)}{\left(a^{2}+r^{2}\right)^{2}}+\frac{\pm iam+(\mp s+1)(r-M)\mp 2iMr\omega}{a^{2}+r^{2}}, \tag{39}\]
which allows us to solve for \(f(r)\):
\[f_{\pm}(r)\equiv\frac{\triangle^{\pm\left(\frac{s}{2}+iM\omega\right)}e^{\pm i\alpha\tanh^{-1}\left(\frac{r-M}{r_{+}-M}\right)}}{\sqrt{\triangle+2Mr}}, \tag{40}\]
where \(\alpha\) is defined as
\[\alpha\equiv\frac{am-2M^{2}\omega}{r_{+}-M}. \tag{41}\]

#### iv.5.1 At the outer horizon \(r\to r_{+}\)

As \(r\to r_{+}\), \(\triangle\to 0\). Given that \(\frac{dr}{dr*}\to 0\), \(\eta_{\pm}\) can be treated as a constant with respect to \(r*\). Then equation (38) and its solution take the following form:
\[\begin{split} Y_{r*r*}(r*\rightarrow-\infty)-\eta_{\pm}^{2}Y(r*\rightarrow-\infty)&\approx 0,\\ Y(r*\rightarrow-\infty)&\approx e^{\pm\eta_{\pm}r*}.\end{split} \tag{42}\]
A similar argument can be applied to the defining equation of \(f(r)\), which leaves us with the following solution for \(f(r)\):
\[f(r*\rightarrow-\infty)\approx e^{-\eta_{\pm}r*}. \tag{43}\]
Finally, \(R(r)\) can be evaluated at the horizon:
\[R(r)\approx\left\{\begin{array}{ll}1&\text{(+) ingoing, (-) outgoing,}\\ e^{-2\eta_{\pm}r*}&\text{(+) outgoing, (-) ingoing.}\end{array}\right\} \tag{44}\]
To examine what this means, it is useful to rewrite \(\eta_{\pm}\) as
\[\eta_{\pm}(r)=\frac{r\triangle}{\left(\triangle+2Mr\right)^{2}}\mp\frac{i\omega(2Mr)-iam}{\triangle+2Mr}\mp\frac{s(r-M)}{\triangle+2Mr}. \tag{45}\]
Thus, at \(r\to r_{+}\),
\[\eta_{\pm}(r_{+})=\mp\left(i\omega-\frac{iam}{2Mr_{+}}\right)\mp\frac{s(r_{+}-M)}{2Mr_{+}}, \tag{46}\]
and, with \(k\equiv\omega-\frac{am}{2Mr_{+}}\),
\[R(r)\approx\left\{\begin{array}{ll}1&\text{(+) ingoing, (-) outgoing,}\\ e^{\pm 2ikr*}\triangle^{\mp s/2}&\text{(+) outgoing, (-) ingoing.}\end{array}\right\} \tag{47}\]
In the case of the IEF coordinates, \(\eta_{+}\) is proportional to \(-i\omega\). Thus, from equation (47) we see that only the ingoing solution is well behaved at the horizon in these coordinates, while in the case of the OEF coordinates, \(\eta_{-}\) is proportional to \(i\omega\), so only the outgoing solution is well behaved at the horizon.

#### iv.5.2 At Infinity \(r\rightarrow\infty\)

At infinity, if we expand equation (38) to first order in \(1/r\), we obtain the asymptotic behavior of [26] for \(r\rightarrow\infty\):
\[\begin{split} Y_{r*r*}(r\rightarrow\infty)+\left(\omega^{2}+\frac{2i\omega s}{r}\right)Y(r\rightarrow\infty)&\approx 0,\\ Y(r*\rightarrow\infty)&\approx r^{\mp s}e^{\pm i\omega r*}.\end{split} \tag{48}\]
Now we can evaluate the asymptotic behavior of \(f_{\pm}(r*\rightarrow\infty)\):
\[f_{\pm}(r\rightarrow\infty)=r^{-1\pm(s+2iM\omega)}e^{\pm i\alpha\tanh^{-1}\left(\frac{r-M}{r_{+}-M}\right)}. \tag{49}\]
In the IEF coordinates,
\[R_{+}(r*\rightarrow\infty)=\frac{r^{+s}}{r^{1\pm s}}e^{\pm i\omega r*}e^{-i\pi}e^{\pm 2iM\omega\ln r}. \tag{50}\]
In the OEF coordinates,
\[R_{-}(r*\rightarrow\infty)=\frac{r^{-s}}{r^{1\pm s}}e^{\pm i\omega r*}e^{-i\pi}e^{\pm 2iM\omega\ln r}. \tag{51}\]

### Inhomogeneous Radial Equation

When constructing the metric in the outgoing radiation gauge, we only need to study the inhomogeneous radial equation for \(\psi_{4}\), for reasons that will become obvious in the next section. The source term on the right-hand side of the Teukolsky equation for \(\psi_{4}\) is given by
\[T_{-2}=8\pi\Sigma S_{-2}^{\mu\nu}T_{\mu\nu}, \tag{52}\]
where \(T_{ab}\), the energy-momentum tensor in the tetrad basis, is given as
\[T_{ab}=T_{\alpha\beta}e_{a}^{\alpha}e_{b}^{\beta}. \tag{53}\]
The decoupling operator for the linearized Einstein field equations for \(\psi_{4}\), as provided in [60], is
\[\begin{split} S_{-2}^{\alpha\beta}=&\{\left(\Delta+3\gamma-\gamma^{*}+4\mu+\mu^{*}\right)\left[\left(\delta^{*}-2\tau^{*}+2\alpha^{*}\right)e_{4}^{\alpha}e_{4}^{\beta}\right.\\ &\left.-\left(\Delta+2\gamma-2\gamma^{*}+\mu^{*}\right)e_{2}^{\alpha}e_{4}^{\beta}\right]\\ &+\left(\delta^{*}-\tau^{*}+\beta^{*}+3\alpha+4\pi\right)\left[\left(\Delta+2\gamma+2\mu^{*}\right)e_{2}^{\alpha}e_{4}^{\beta}\right.\\ &\left.-\left(\delta^{*}-\tau^{*}+2\beta^{*}+2\alpha\right)e_{2}^{\alpha}e_{2}^{\beta}\right]\}.\end{split} \tag{54}\]
\(\mathcal{O}_{-2}\) is the second-order linear radial differential operator representing the radial equation in the IEF and OEF coordinates, respectively. A Green's function can be defined for these operators (in a way similar to [52]):
\[{}_{\pm}\mathcal{O}_{-2}(r)G_{\pm}\left(r,r^{\prime}\right)=\delta\left(r-r^{\prime}\right). \tag{55}\]
Then \(G_{\pm}\left(r,r^{\prime}\right)\) can be written using the homogeneous solutions:
\[G_{\pm lm}\left(r,r^{\prime}\right)=\left\{\begin{array}{ll}c_{lm}^{\pm}\left(r^{\prime}\right)R_{\pm}^{-}(r)R_{\pm}^{+}(r^{\prime})&r_{+}<r<r^{\prime},\\ c_{lm}^{\pm}(r^{\prime})R_{\pm}^{-}(r^{\prime})R_{\pm}^{+}(r)&r^{\prime}<r<\infty.\end{array}\right\} \tag{56}\]
The superscripts \(\{+,-\}\) in \(R(r)\) indicate that the quantity satisfies the boundary conditions at infinity and at the outer horizon, respectively. The coefficient \(c_{lm}^{\pm}\) is defined below, where \(W[R_{\pm}^{+}(r^{\prime}),R_{\pm}^{-}(r^{\prime})]\) is the Wronskian of the radial equation:
\[c_{lm}^{\pm}(r^{\prime})=\frac{1}{\triangle(r^{\prime})W[R_{\pm}^{+}(r^{\prime}),R_{\pm}^{-}(r^{\prime})]}. \tag{57}\]
As shown in [52], the full Green's function can then be generated using the completeness of the spin-weighted spheroidal harmonics:
\[\mathbf{G\left(x,x^{\prime}\right)}=\sum_{lm}G_{\pm lm}\left(r,r^{\prime}\right){}_{2}S_{\ell m}(\theta)\,{}_{2}S_{\ell m}\left(\theta^{\prime}\right)e^{im\left(\phi-\phi^{\prime}\right)}. \tag{58}\]
Finally, we can write \(\psi_{4}\) using the Green's function as
\[\psi_{4}=\int\mathbf{G\left(x,x^{\prime}\right)}\left[8\pi\Sigma^{\prime}T_{-2}\left(x^{\prime}\right)\right]d^{3}\vec{r}. \tag{59}\]
Now we can use the adjoint operator of \(S_{-2}^{\mu\nu}\) to simplify this expression:
\[\left[\mathbf{G\left(x,x^{\prime}\right)}\,\Sigma\right]S_{-2}^{\mu\nu}T_{\mu\nu}=T_{\mu\nu}S_{-2}^{\mu\nu\dagger}\left[\mathbf{G\left(x,x^{\prime}\right)}\,\Sigma\right]+\partial^{i}k_{i}. \tag{60}\]
At this point, we can utilize the fact that the energy-momentum tensor of a point particle can always be written as a tensor multiplied by a Dirac delta function.
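Before specializing to this point-particle source, it may help to see how the mode Green's function of Eqs. (56)-(57) is assembled in practice. The following schematic sketch is ours, not the paper's: the two homogeneous solutions for a single \((l,m,\omega)\) mode, regular at the outer horizon and at infinity respectively, and their radial derivatives, are assumed to be supplied as callables (e.g., by the kind of numerical integration sketched earlier).

```python
import numpy as np

def greens_function(r, r_src, R_minus, R_plus, dR_minus, dR_plus, M, a):
    """Radial Green's function G(r, r') of Eqs. (56)-(57) for one mode.

    R_minus / R_plus: homogeneous solutions obeying the horizon / infinity
    boundary conditions; dR_minus / dR_plus: their radial derivatives.
    """
    Delta = r_src**2 - 2*M*r_src + a**2
    # Wronskian W[R^+, R^-] = R^+ (R^-)' - R^- (R^+)', evaluated at r'.
    W = R_plus(r_src)*dR_minus(r_src) - R_minus(r_src)*dR_plus(r_src)
    c = 1.0/(Delta*W)                       # Eq. (57)
    r_less = np.minimum(r, r_src)           # horizon-regular branch here
    r_greater = np.maximum(r, r_src)        # infinity-regular branch here
    return c*R_minus(r_less)*R_plus(r_greater)
```

The min/max split implements the two branches of Eq. (56), so the same call works on either side of the source radius.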
As we are interested in a perturber moving in an equatorial circular orbit around the Kerr black hole, the energy-momentum tensor in the coordinate basis is
\[\begin{split}& T^{\mu\nu}=\frac{E}{\gamma^{2}}U^{\mu}U^{\nu}\delta^{3}(\vec{r}-\vec{r}_{p}(t))\equiv\mathcal{F}^{\mu\nu}\delta^{3}(\vec{r}-\vec{r}_{p}(t)),\\ &\delta^{3}(\vec{r}-\vec{r}_{p}(t))=\frac{1}{R^{2}}\delta(r-R)\delta(\cos(\theta))\delta(\phi-\Omega t),\end{split} \tag{61}\]
where \(\Omega\) is the angular frequency of the particle, while \(\vec{r}_{p}\) represents the position vector of the perturber. Then the expression for \(\psi_{4}\) is
\[\begin{split}\psi_{4}=&\int\mathcal{F}^{\mu\nu}(x^{\prime\mu})\delta^{3}(\vec{r^{\prime}}-\vec{r}_{p}(t))S_{-2}^{\mu\nu\dagger}\left[\mathbf{G\left(x,x^{\prime}\right)}\,\Sigma\right]d^{3}\vec{r^{\prime}}\\ &+\int\partial^{i}k_{i}\,d^{3}\vec{r^{\prime}}.\end{split} \tag{62}\]
Since \(S_{-2}^{\mu\nu\dagger}\) is a second-order linear differential operator, \(k^{i}\) is a function of the particle's Dirac delta and its derivatives. Thus, the contribution from the second integral is zero. Furthermore, as \(\mathbf{G\left(x,x^{\prime}\right)}\) is a factorized function of \(x\) and \(x^{\prime}\), we can write
\[\begin{split}&\mathbf{G\left(x,x^{\prime}\right)}=\sum_{lm}\mathcal{G}_{lm}(x)\tilde{\mathcal{G}}_{lm}(x^{\prime}),\\ &\mathcal{G}_{lm}(x)={}_{2}\mathcal{R}_{lm}(r)\,{}_{2}S_{lm}(\theta)e^{im\phi},\\ &\tilde{\mathcal{G}}_{lm}(x^{\prime})={}_{2}\tilde{\mathcal{R}}_{lm}(r^{\prime})\,{}_{2}S_{lm}(\theta^{\prime})e^{im\phi^{\prime}},\end{split} \tag{63}\]
where \(\mathcal{R}_{lm}(r)\) and \(\tilde{\mathcal{R}}_{lm}(r^{\prime})\) are defined as
\[\begin{split}\mathcal{R}_{lm}(r)&=\left\{\begin{array}{ll}R_{\pm}^{-}(r)&r_{+}<r<r^{\prime},\\ R_{\pm}^{+}(r)&r^{\prime}<r<\infty,\end{array}\right.\\ \tilde{\mathcal{R}}_{lm}(r^{\prime})&=c_{lm}^{\pm}(r^{\prime})\left\{\begin{array}{ll}R_{\pm}^{+}(r^{\prime})&r_{+}<r<r^{\prime},\\ R_{\pm}^{-}(r^{\prime})&r^{\prime}<r<\infty.\end{array}\right.\end{split} \tag{64}\]
Since we are in Fourier space for \((t,\phi)\), \(S_{-2}^{\mu\nu\dagger}\) has no derivatives in either of these coordinates (they are replaced by their eigenvalues \((\omega,m)\), respectively). We can then safely extract the part of \(\tilde{\mathcal{G}}_{lm}(x^{\prime})\) which depends on \(\phi\) after taking into account the action of \(\delta(\phi-\Omega t)\) on it. Finally, \(\psi_{4}\) can be written as
\[\begin{split}&\psi_{4}=\sum_{lm}\mathcal{C}_{lm}\,{}_{-2}\mathcal{R}_{lm}(r)\,{}_{-2}S_{lm}(\theta)e^{im\phi-i\Omega t},\\ &\mathcal{C}_{lm}=\{\mathcal{F}^{\mu\nu}S_{-2}^{\mu\nu\dagger}[{}_{-2}\tilde{\mathcal{R}}_{lm}(r^{\prime})\Sigma(r^{\prime},\theta^{\prime})\,{}_{-2}S_{lm}(\theta^{\prime})]\}_{\vec{r}=\vec{r}_{p}(t)}.\end{split} \tag{65}\]
The adjoint operator \(S_{-2}^{\mu\nu\dagger}\) is given as
\[\begin{split} S_{-2}^{\mu\nu\dagger}=& e_{4}^{\alpha}e_{4}^{\beta}(\delta^{*}+\tau^{*}-3\alpha+\beta^{*}+\pi)(\Delta-4\gamma-3\mu)\\ -& e_{2}^{\alpha}e_{4}^{\beta}[(\Delta+\mu-3\gamma+\gamma^{*})(\Delta-4\gamma-3\mu)\\ &-(\Delta+2\mu-2\mu^{*}-3\gamma-\gamma^{*})\left(\delta^{*}+\pi-4\alpha-4\tau\right)]\\ -& e_{2}^{\alpha}e_{2}^{\beta}\left(\delta^{*}+\pi-3\alpha-\beta^{*}\right)\left(\delta^{*}+\pi-4\alpha-4\tau\right).\end{split} \tag{66}\]

## IV Applying the CCKW procedure

The CCKW procedure is dedicated to constructing the metric from the Hertz potential. To arrive at this final goal, it is crucial to algebrize the equation connecting the Weyl scalars.
In this way, the source terms of the Teukolsky equation become manifest in the perturbation metric [61].

### Hertz Potential-Weyl Scalars Equations

In the CCKW procedure, the source-free Teukolsky equation for any \(\psi_{i}\) is a defining equation for a Hertz-like potential labeled \(\Psi_{H}\). Accordingly, each \(\psi_{i}\) can generate a Hertz potential. Wald proved that, by applying linear PDE operators to this \(\Psi_{H}\), all the \(\psi\)'s can be obtained. If the conjugate source-free Teukolsky equation for \(\psi_{4}\) is chosen to define the corresponding conjugate Hertz potential \(\Psi_{H}^{*}\), then we have
\[\begin{split}&\mathcal{O}_{-2}^{*}\Psi_{H}^{*}=0,\\ &\mathcal{O}_{-2}^{*}\equiv\left[(\delta+3\alpha^{*}+\beta-\tau)\left(\delta^{*}+4\beta^{*}+3\tau^{*}\right)\right.\\ &\qquad\left.-\left(\Delta-\gamma+3\gamma^{*}+\mu\right)\left(D+4\epsilon^{*}+3\rho^{*}\right)+3\psi_{2}^{*}\right].\end{split} \tag{67}\]
Then both \(\psi_{0}\) and \(\psi_{4}\) are provided, respectively, as
\[\begin{split}\psi_{0}=&\frac{1}{2}\left[\left(D-3\epsilon+\epsilon^{*}-\rho^{*}\right)\left(D-2\epsilon+2\epsilon^{*}-\rho^{*}\right)\right.\\ &\left.\left(D-\epsilon+3\epsilon^{*}-\rho^{*}\right)\left(D+4\epsilon^{*}+3\rho^{*}\right)\right]\Psi_{H}^{*},\end{split} \tag{68}\]
\[\begin{split}\psi_{4}=&\frac{1}{2}\left[\left(\delta^{*}+3\alpha+\beta^{*}-\tau^{*}\right)\left(\delta^{*}+2\alpha+2\beta^{*}-\tau^{*}\right)\right.\\ &\left.\left(\delta^{*}+\alpha+3\beta^{*}-\tau^{*}\right)\left(\delta^{*}+4\beta^{*}+3\tau^{*}\right)\right]\Psi_{H}^{*}\\ &+3\psi_{2}\left[\tau\left(\delta^{*}+4\alpha\right)-\rho(\Delta+4\gamma)-\mu(D+4\epsilon)\right.\\ &\left.+\pi(\delta+4\beta)+2\psi_{2}\right]\Psi_{H}.\end{split} \tag{69}\]
These equations are the ingoing radiation gauge (IRG) equations provided by Wald in [60]; they relate the gravitational Hertz potential \(\Psi_{H}\) to the Weyl scalars \(\psi_{i}\) in the NP formalism. The tetrad legs are aligned along the repeated null directions of the Weyl tensor. The equation connecting \(\Psi_{H}\) to \(\psi_{4}\) is the same angular equation that appears in the BL coordinates,
\[(r-ia\cos\theta)^{4}\psi_{4}=\frac{1}{8}\left[\tilde{L}^{4}\Psi_{H}^{*}-12M\partial_{t}\Psi_{H}\right], \tag{70}\]
where \(\tilde{L}^{4}\) is given by
\[\begin{split}&\tilde{L}^{4}=L_{1}L_{0}L_{-1}L_{-2},\\ & L_{n}\equiv-\partial_{\theta}+a\omega\sin\theta-\frac{m}{\sin\theta}+n\cot\theta.\end{split} \tag{71}\]
The equation connecting \(\Psi_{H}\) to \(\psi_{0}\) still maintains its radial nature but takes a different form, as shown below. In the IEF coordinates,
\[\begin{split}&\left\{\frac{1}{2}D^{4}+\triangle^{\prime}D^{3}+\left[6\triangle+2\left(a^{2}-m^{2}\right)\right]D^{2}\right.\\ &\left.+\left[6\triangle^{\prime}\triangle+4\triangle^{\prime}\left(a^{2}-m^{2}\right)\right]D+12\triangle^{2}\right\}\Psi_{H}=\psi_{0},\\ & D=(\triangle+4Mr)\partial_{t}+\triangle\partial_{r}+2a\partial_{\phi}.\end{split} \tag{72}\]
In the OEF coordinates,
\[\begin{split}&\frac{1}{2}D^{4}\Psi_{H}=\psi_{0},\\ & D=\partial_{t}-\partial_{r}.\end{split} \tag{73}\]
Since the angular equation is form-invariant under these transformations, it is useful to choose the method used in [61] to algebrize the fourth-order angular ODE.

### CCKW

Since the Hertz-angular equation is already form-invariant, its algebraization is very similar to [61]. We can follow the same steps to algebrize the Hertz-angular equation using the Teukolsky-Starobinsky identities.
At this point, we can use the identity equivalent to equation (59) in [35],
\[L_{1}L_{0}L_{-1}L_{-2}S_{-2}=D\,S_{+2}, \tag{74}\]
where \(D\) is defined by
\[\begin{split} D^{2}&=\lambda_{CH}^{2}\left(\lambda_{CH}+2\right)^{2}+8a\omega(m-a\omega)\lambda_{CH}\left(5\lambda_{CH}+6\right)\\ &\quad+48a^{2}\omega^{2}\left[2\lambda_{CH}+3(m-a\omega)^{2}\right],\end{split} \tag{75}\]
with \(\lambda_{CH}=\lambda+s+2\). Also, we can write the Hertz potential as
\[\Psi_{H}^{\pm}=\sum_{lm\omega}H_{lm\omega}\,{}_{-2}\tilde{R}_{lm\omega}e^{i(m\phi-\omega t)}\,{}_{-2}S_{lm\omega}(\theta), \tag{76}\]
given that \({}_{-2}S_{lm\omega}^{*}(\theta)=(-1)^{m}\,{}_{2}S_{lm\omega}(\theta)\). Then the angular equation can be written as
\[\begin{split}&\sum_{lm\omega}\{8(r-ia\cos\theta)^{4}\mathcal{C}_{lm\omega}\,{}_{-2}\mathcal{R}_{lm\omega}+12iM\omega\,{}_{-2}\tilde{R}_{lm\omega}H_{lm\omega}\\ &-(-1)^{m}D\,{}_{-2}\tilde{R}_{l-m-\omega}^{*}H_{l-m-\omega}^{*}\}e^{i(m\phi-\omega t)}\,{}_{-2}S_{lm\omega}(\theta)=0.\end{split} \tag{77}\]
Then we arrive at the relation
\[\begin{split} 8(r-ia\cos\theta)^{4}\mathcal{C}_{lm\omega}\,{}_{-2}\mathcal{R}_{lm\omega}=&-12iM\omega\,{}_{-2}\tilde{R}_{lm\omega}H_{lm\omega}\\ &+(-1)^{m}D\,{}_{-2}\tilde{R}^{*}_{l-m-\omega}H^{*}_{l-m-\omega}.\end{split} \tag{78}\]
We can take the complex conjugate of this equation, solve for \(H_{lm\omega}\,{}_{-2}\tilde{R}_{lm\omega}\), and finally write \(\Psi^{\pm}_{H}\) as
\[\begin{split}&\Psi^{\pm}_{H}=\sum_{lm}[\mathcal{A}_{lm}\,{}_{2}\mathcal{R}_{lm}+\mathcal{B}_{lm}\,{}_{2}\mathcal{R}^{*}_{lm}]e^{i(m\phi-\omega t)}\,{}_{-2}S_{lm}(\theta),\\ &\mathcal{A}_{lm}=\frac{-96imM\omega(r-ia\cos\theta)^{4}\mathcal{C}_{lm}}{D^{2}+144M^{2}m^{2}\omega^{2}},\\ &\mathcal{B}_{lm}=(-1)^{m}\frac{8D(r+ia\cos\theta)^{4}\mathcal{C}^{*}_{lm}}{D^{2}+144M^{2}m^{2}\omega^{2}}.\end{split} \tag{79}\]
We can use the relation \(R^{*}_{lm\omega}=R_{l-m-\omega}\) to rewrite the expression for \(\Psi^{\pm}_{H}\) as
\[\begin{split}&\Psi^{\pm}_{H}=\sum_{lm\omega}\,{}_{2}\mathcal{S}_{lm\omega}\,{}_{2}\mathcal{R}_{lm\omega}e^{i(m\phi-\omega t)},\\ &{}_{2}\mathcal{S}_{lm\omega}=\mathcal{A}_{lm\omega}\,{}_{2}S_{lm\omega}(\theta)+\mathcal{B}_{lm\omega}\,{}_{2}S_{l-m-\omega}(\theta).\end{split} \tag{80}\]

### Metric Reconstruction

In the outgoing radiation gauge, the metric perturbation can be constructed from the Hertz potential \(\Psi^{\pm}_{H}\), following the CCKW procedure, with the relation
\[h^{\mu\nu}=S^{\mu\nu}_{+2}\Psi^{\pm}_{H}+c.c. \tag{81}\]
We can use the radial and angular ODEs as well as the Fourier decomposition to write
\[\begin{split} h^{\mu\nu}=\sum_{lm\omega}\{&\alpha^{\mu\nu}_{lm\omega}\,{}_{-2}\mathcal{R}_{lm\omega}\,{}_{-2}\mathcal{S}_{lm\omega}+\gamma^{\mu\nu}_{lm\omega}\,{}_{-2}\mathcal{R}^{\prime}_{lm}\,{}_{-2}\mathcal{S}_{lm\omega}\\ &+\beta^{\mu\nu}_{lm\omega}\,{}_{-2}\mathcal{R}_{lm}\,{}_{-2}\mathcal{S}^{\prime}_{lm\omega}\}+c.c.\end{split} \tag{82}\]
Each of \(\alpha^{\mu\nu}_{lm\omega}\), \(\beta^{\mu\nu}_{lm\omega}\) and \(\gamma^{\mu\nu}_{lm\omega}\) is a function of the variables \((r,\theta)\) and the parameters \((\omega,m)\). These functions have no singular points away from the horizon. The metric suffers from a discontinuity at \(r=R\), as expected. We see that the perturbation of the metric is written in terms of the radial function and its derivative, which has an additional singular point. Thus, the metric expansion needs to be treated carefully, taking this additional singularity into consideration.
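For orientation, the location of this extra singular point, \(r=\beta/\xi\), can be computed directly from the parameter definitions of Eq. (33). Below is a small sketch of ours; the parameter values are purely illustrative, \(\kappa\) denotes the same constant as in Eqs. (29) and (33), and the second return value maps the point from the rescaled coordinate of Eq. (32) back to the physical radius:

```python
import numpy as np

def extra_singularity(M, a, s, m, omega, kappa):
    """Extra regular singular point r = beta/xi of Eq. (34), from Eq. (33)."""
    rt = np.sqrt(M**2 - a**2)              # sqrt(M^2 - a^2), non-extremal case
    rp, rm = M + rt, M - rt                # outer and inner horizons
    xi = 4*rt*omega*(-1j + 4*M*omega)
    beta = (omega*(2*a*m + 2j*M*(s - 1) + a**2*omega)
            - 2*M*omega*(-1j + 4*M*omega)
            + 2*rt*omega*(-1j + 4*M*omega) + kappa)
    x = beta/xi                            # rescaled coordinate of Eq. (32)
    return x, rm + (rp - rm)*x             # rescaled and physical locations

print(extra_singularity(M=1.0, a=0.7, s=-2, m=2, omega=0.5, kappa=4.0))
```

The point is generically complex, and it is its distance from \(\{0,1\}\) and from the physical expansion region that controls the convergence radius discussed above.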
### Completion

Although \(\psi_{0}\) and \(\psi_{4}\) contain most of the information about the gravitational perturbation, there are still missing parts due to the perturbation of the background itself. The regular parts of these perturbations come from the perturbation of the mass \(M\) and angular momentum \(J=aM\) of the black hole. Accordingly, the full metric perturbation \({}^{Full}h^{\mu\nu}\) can be written in the form
\[{}^{Full}h^{\mu\nu}=h^{\mu\nu}+c_{M}h^{\mu\nu(\delta M)}+c_{J}h^{\mu\nu(\delta J)}. \tag{83}\]
Thus, we need to compute these parts to have the full regular metric perturbation. The perturbations due to the mass, \(h^{(\delta M)}_{\mu\nu}\), and angular momentum, \(h^{(\delta J)}_{\mu\nu}\), are given respectively by [51]:
\[\begin{split} h^{(\delta M)}_{\mu\nu}&=\left.\frac{\partial g_{\mu\nu}\left(x^{\mu};M,J\right)}{\partial M}\right|_{J\to 0},\\ h^{(\delta J)}_{\mu\nu}&=\left.\frac{\partial g_{\mu\nu}\left(x^{\mu};M,J\right)}{\partial J}\right|_{J\to 0}.\end{split} \tag{84}\]
In the IEF coordinates,
\[h^{(\delta M)}_{\mu\nu}=\left(\begin{array}{cccc}-\frac{2r}{\Sigma}&-\frac{2r}{\Sigma}&0&0\\ -\frac{2r}{\Sigma}&-\frac{2r}{\Sigma}&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right), \tag{85}\]
\[h^{(\delta J)}_{\mu\nu}=\left(\begin{array}{cccc}0&0&0&\frac{2Mr\sin^{2}(\theta)}{\Sigma}\\ 0&0&0&0\\ 0&0&0&0\\ \frac{2Mr\sin^{2}(\theta)}{\Sigma}&0&0&0\end{array}\right). \tag{86}\]
In the OEF coordinates,
\[h^{(\delta M)}_{\mu\nu}=\left(\begin{array}{cccc}-\frac{2r}{\Sigma}&\frac{2r}{\Sigma}&0&0\\ \frac{2r}{\Sigma}&-\frac{2r}{\Sigma}&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right), \tag{87}\]
\[h^{(\delta J)}_{\mu\nu}=\left(\begin{array}{cccc}0&0&0&\frac{2Mr\sin^{2}(\theta)}{\Sigma}\\ 0&0&0&0\\ 0&0&0&0\\ \frac{2Mr\sin^{2}(\theta)}{\Sigma}&0&0&0\end{array}\right). \tag{88}\]
The above expressions give the completion pieces of the metric perturbation in the IEF and OEF coordinates, up to the undetermined coefficients \(c_{M}\) and \(c_{J}\).
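Since these pieces are purely algebraic, assembling the completion terms of Eq. (83) is a direct transcription; the following minimal sketch of ours builds the IEF expressions of Eqs. (85)-(86), assuming the coordinate ordering \((\tilde{t},r,\theta,\tilde{\phi})\) and taking the \(\delta J\) matrix in its symmetrized form:

```python
import numpy as np

def completion_ief(r, theta, M, a, c_M, c_J):
    """Completion terms c_M*h^(dM) + c_J*h^(dJ) of Eq. (83), IEF coordinates."""
    Sigma = r**2 + a**2*np.cos(theta)**2
    h_dM = np.zeros((4, 4))
    h_dM[:2, :2] = -2*r/Sigma           # Eq. (85): (t,t), (t,r), (r,t), (r,r)
    h_dJ = np.zeros((4, 4))
    h_dJ[0, 3] = h_dJ[3, 0] = 2*M*r*np.sin(theta)**2/Sigma   # Eq. (86)
    return c_M*h_dM + c_J*h_dJ

print(completion_ief(r=5.0, theta=np.pi/2, M=1.0, a=0.7, c_M=1.0, c_J=1.0))
```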
## V Conclusion and Discussion

In this work, we studied perturbations of the Kerr metric due to a circularly orbiting perturber in different spacetime foliations: the Boyer-Lindquist and the outgoing/ingoing Eddington-Finkelstein coordinates. This problem may have applications in many realistic astrophysical situations. The reason for utilizing different foliations was to contrast the charts and tetrads that are regular and irregular at the horizon. Though the Teukolsky equation has the same singularity structure, the asymptotic behavior of the equations at the horizon is different. We showed that, for Kerr black hole perturbations in charts regular near the horizon, the radial part of the Weyl scalars naturally obeys the physical boundary conditions at the horizon and in the asymptotically flat regions. This removes the need for imposing any regularization conditions. Consequently, the freedom of a null rotation is still present, which might be used as a gauge freedom. Using the CCKW procedure, we explicitly constructed the Kerr metric perturbation due to the existence of a perturber of energy \(E\) rotating around the black hole in a circular orbit. We effectively expanded the metric in all the Weyl-scalar perturbation modes with spacetime-dependent coefficients. In our construction of the metric, we used the Green's function method as well as a Hertz-Weyl equation algebraization technique identical to the ones provided in [52]. However, we solved for \(\psi_{4}\) in a different manner, as illustrated through equations (60)-(65). Moreover, we completed the metric by fixing the trivial physical perturbations due to the mass and angular momentum of the black hole itself, in a way similar to the work found in [46]. We did not determine the two coefficients \(c_{M}\) and \(c_{J}\), which (if needed) could be evaluated using the procedure introduced in [51] by utilizing gauge-invariant quantities. We also ignored the divergent contributions from the C-metric acceleration and the NUT charge on physical grounds.

The radial equations in the IEF/OEF as well as the BL coordinates are confluent Heun equations. Consequently, if we are only interested in obtaining the Teukolsky equation, there exists a radial transformation that can transform the equation directly, based on the nature of the Heun family of ODEs. We are not reporting this transformation here, yet we believe it is a straightforward though tedious exercise following the procedure given in [41]. The first derivative of the radial function satisfies an equation with an additional regular singular point whose location depends on the spacetime parameters, \((M,a)\), and the perturbation mode parameters, \((m,\omega)\). To extend the investigation of the perturbations beyond the horizon in the presented formalism, an explicit solution regular at the outer horizon might be needed. This can perhaps be achieved by a singular series expansion of the perturbed radial part of the equations, as well as its derivative. The presence of the derivative of the radial function in the metric expansion is crucial for the radius of convergence of the expansion. Also, the expansion itself will be undefined at \(r=R\), where the perturber orbit is located. With the procedure outlined here, we report an explicit form of the metric construction as an expansion in the radial functions, their derivatives, and the angular functions.

###### Acknowledgements.

We wish to thank Professor Gino Biondini for the useful discussions about the mathematical tools used in this paper. We are also grateful to Omar Elserif for helping us with proofreading. D.S. is partially supported by the US National Science Foundation, under Grant No. PHY-2014021.
2302.14757
Audio Retrieval for Multimodal Design Documents: A New Dataset and Algorithms
We consider and propose a new problem of retrieving audio files relevant to multimodal design document inputs comprising both textual elements and visual imagery, e.g., birthday/greeting cards. In addition to enhancing user experience, integrating audio that matches the theme/style of these inputs also helps improve the accessibility of these documents (e.g., visually impaired people can listen to the audio instead). While recent work in audio retrieval exists, these methods and datasets are targeted explicitly towards natural images. However, our problem considers multimodal design documents (created by users using creative software) substantially different from a naturally clicked photograph. To this end, our first contribution is collecting and curating a new large-scale dataset called Melodic-Design (or MELON), comprising design documents representing various styles, themes, templates, illustrations, etc., paired with music audio. Given our paired image-text-audio dataset, our next contribution is a novel multimodal cross-attention audio retrieval (MMCAR) algorithm that enables training neural networks to learn a common shared feature space across image, text, and audio dimensions. We use these learned features to demonstrate that our method outperforms existing state-of-the-art methods and produce a new reference benchmark for the research community on our new dataset.
Prachi Singh, Srikrishna Karanam, Sumit Shekhar
2023-02-28T16:59:13Z
http://arxiv.org/abs/2302.14757v1
# Audio Retrieval for Multimodal Design Documents: A New Dataset and Algorithms ###### Abstract We consider and propose a new problem of retrieving audio files relevant to multimodal design document inputs comprising both textual elements and visual imagery, e.g., birthday/greeting cards. In addition to enhancing user experience, integrating audio that matches the theme/style of these inputs also helps improve the accessibility of these documents (e.g., visually impaired people can listen to the audio instead). While recent work in audio retrieval exists, these methods and datasets are targeted explicitly towards natural images. However, our problem considers multimodal design documents (created by users using creative software) substantially different from a naturally clicked photograph. To this end, our first contribution is collecting and curating a new large-scale dataset called Melodic-Design (or MELON), comprising design documents representing various styles, themes, templates, illustrations, etc., paired with music audio. Given our paired image-text-audio dataset, our next contribution is a novel multimodal cross-attention audio retrieval (MMCAR) algorithm that enables training neural networks to learn a common shared feature space across image, text, and audio dimensions. We use these learned features to demonstrate that our method outperforms existing state-of-the-art methods and produce a new reference benchmark for the research community on our new dataset. Prachi Singh\({}^{1,2}\), Srikrishna Karanam\({}^{1}\), Sumit Shekhar\({}^{1}\) \({}^{1}\)Adobe Research, Bangalore, India \({}^{2}\)Indian Institute of Science, Bangalore, India [email protected] Index Terms: Music retrieval, multimodal processing, cross attention. ## 1 Introduction With the increasing proliferation of on-demand web/mobile-based graphic design software1, 2, 3, designing creative documents has become very easy for any occasion, e.g., greeting cards, event invitations/flyers, social media infographics etc. In most cases, such design documents tend to be multimodal, i.e., they comprise some visual imagery aspects and some textual elements (see Fig 1 (right)). For such documents, adding an additional modality in the form of relevant audio/music files will not only enhance the consumption experience of users but also improve document accessibility for visually impaired users. To this end, our first contribution is the consideration and proposal of a new problem involving the retrieval of relevant audio files given a multimodal design document. While much work in the past [1, 2] has focused on audio retrieval for natural images, there has not been any work in the context of the kind of design documents referred to above, and this paper takes a step towards bridging this gap in the literature (see Fig 1). Footnote 1: [https://www.adobe.com/express](https://www.adobe.com/express) Footnote 2: [https://www.canva.com](https://www.canva.com) Footnote 3: [https://www.sketch.com](https://www.sketch.com) As the problem is unexplored, the existing datasets [3, 2] for audio retrieval contain only natural images and are not suitable for our proposed problem.
To this end, our second contribution is the collection and curation of a new paired design-audio dataset that comprises multimodal design documents, scraped from publicly available data from Adobe Stock 4, paired with relevant audio files collected from the MTG-Jamendo [4] repository. With \(\approx 500k\) design documents paired with \(\approx 7.5k\) audio files, this is a first-of-its-kind dataset that we believe will help advance research in multimodal design understanding. Footnote 4: [https://stock.adobe.com/](https://stock.adobe.com/) Finally, our third contribution is a novel multimodal cross-attention algorithm that enables training neural networks to learn a shared representation among the image, text, and audio modalities present in our problem setting. In particular, given paired design-audio samples, we extract individual modality features and learn per-pair as well as overall weights to learn a unified design-audio embedding. With extensive experiments on our proposed new dataset, we demonstrate that our algorithm substantially outperforms the existing state-of-the-art audio retrieval methods. ## 2 Related Work As noted in Section 1, works on multimodal audio retrieval are mainly focused on natural images. In particular, in Image2Song [3], the Shuttersong dataset was used to map images and song lyrics to the same feature space, which was then used for downstream tasks like retrieval. Figure 1: Natural images (left) vs. design documents (right). Using the same dataset, Liang et al. [1] proposed a method to jointly learn a feature space by fusing visual and acoustic features. Similarly, even datasets like VGGSound [2] and MUGEN [5], while having a video modality, also focus on either naturally occurring human actions or videos generated by game engines. In contrast, our contribution is unique in proposing a new dataset solely focused on multimodal creative design documents like greeting cards, infographics etc., that are commonly created using creative design software. Our large-scale dataset comprising hundreds of thousands of design documents paired with audio files provides a challenging testbed for advancing retrieval research. ## 3 Melodic Design (MELON) - A New Dataset As discussed in Section 1, given a multimodal design document like the ones shown in Fig 1, our problem is one of retrieving a short list of audio files that _go well_ with the various elements of the input. For example, the first row/second column in Fig 1 shows an adventure-themed design document containing both images and text. Different elements include the background image/color, text fields (e.g., "Mountains"), and decorative elements and shapes. Note that each of these elements forms a _layer_ in the design document (e.g., the background image is the background layer, and the textual greeting is the foreground layer), giving a multi-layered multimodal design document. As noted in Section 1, and also from Figure 1, existing datasets focus solely on natural images, whereas our problem entails design documents. To bridge this clear gap in the literature, we collect and curate a new dataset comprising pairs of multimodal design documents and corresponding audio files, and we call our dataset "MELOdic desigN" (MELON). ### Collecting Raw Dataset Samples We use the publicly available MTG-Jamendo [4] database, which encompasses a variety of mood/theme categories, instruments, and genres, as our source of audio files.
We use various time-frequency features like intensity, timbre, pitch, tempo, and rhythm [8] to identify the mood of audio. For example, the pitch varies from very high to very low as we move from the "happy" to the "sad" mood. Similarly, the intensity and tempo of the "upbeat" mood are very high, whereas those of "calm" are very low (see Fig. 2). For mapping/associating audio to design documents below, we use music files corresponding to 50 mood categories in MTG-Jamendo. Since MTG-Jamendo also has audio files labelled with multiple mood categories, we only retain those data samples labelled with only one mood for simplicity. We use publicly available data from Adobe Stock as our source for collecting multimodal design documents comprising image and text content. To scrape images, we built a software utility that can query Stock with any mood category along with data types as part of the input. For instance, one such query would involve fetching illustrations, vectors, templates, and background images for the adventure mood. By restricting the search-page limit to 10, we obtain about \(10,000\) images across all the above document types for every mood category. Note that Adobe Stock also provides image metadata which contains manually generated captions describing the image elements in detail. ### Establishing Correspondence & Dataset Statistics In our dataset, images and text are already paired since each downloaded image comes with a ground-truth caption. We use the common mood categories across the MTG music dataset and our proposed design document dataset to form image-caption-audio pairs. Specifically, given a mood category, we first extract the CLIP [9] features for an image-text sample. For each audio file tagged with the same mood, we extract Wav2CLIP [10] embeddings and compute cosine similarities between the image-audio embeddings (denoted \(s(i,a)\)) and text-audio embeddings (denoted \(s(t,a)\)). We use the weighted sum \(\lambda_{1}s(i,a)+\lambda_{2}s(t,a)\) to retain the audio files corresponding to the top-N similarity scores. We repeat this process for all images in each mood category to curate our audio-design dataset.
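To make the matching step concrete, the following is a minimal sketch of the pairing procedure just described, operating on precomputed CLIP and Wav2CLIP embeddings; the function name, the weights \(\lambda_{1},\lambda_{2}\) and the value of N are illustrative placeholders, since the paper does not report the values used.

```python
import torch
import torch.nn.functional as F

def pair_designs_with_audio(img_emb, txt_emb, audio_emb,
                            lam1=0.5, lam2=0.5, top_n=10):
    """Rank audio files for each design document by a weighted sum of
    image-audio and text-audio cosine similarities.

    img_emb:   (D, 512) CLIP image embeddings, one per design document
    txt_emb:   (D, 512) CLIP text embeddings of the matching captions
    audio_emb: (A, 512) Wav2CLIP embeddings of candidate audio files
    Returns:   (D, top_n) indices of the retained audio files per design
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    aud = F.normalize(audio_emb, dim=-1)
    s_ia = img @ aud.T          # cosine similarities s(i, a), (D, A)
    s_ta = txt @ aud.T          # cosine similarities s(t, a), (D, A)
    scores = lam1 * s_ia + lam2 * s_ta
    return scores.topk(top_n, dim=-1).indices
```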
Our proposed MELON dataset consists of 488,510 images and corresponding captions, and 7,737 music audio files belonging to 50 moods/themes. Each mood category consists of \(\approx 10k\) images. A distribution plot of the audio samples per category as well as per-mood word clouds to demonstrate the diversity and variability of our dataset are provided in the supplementary material5. In Table 1, we quantitatively compare the proposed MELON dataset with existing datasets. One can note that while existing datasets are focused on natural images and videos, our proposed dataset is unique in the sense that it contains creative illustrations, vectors, templates, and background designs with complete descriptions of the image content as part of the caption. With \(\approx 500k\) images, our dataset will help the community build robust models for both music retrieval (MR) and classification (C) tasks. \begin{table} \begin{tabular}{l|l l l l l} \hline Dataset & Visual & Modalities & \#Images & \#Audio & Tasks \\ \cline{2-6} VGGSound [2] & Action vid. & I+A & 199k & 199k & C \\ Audio set [6] & Human vid. & I+A & 2.1m & 2.1m & R + G \\ MUGEN [5] & Game vid. & I+A+T & 233K & 233K & R+G \\ Shuttersong [3] & Images & I+A+T & 17k & 17k & MR + C \\ IMEMNet [7] & Images & I+A & 25k & 1.8k & MR \\ \hline **MELON** [Proposed] & Various & I+A+T & 488k & 7.7k & MR + C \\ \end{tabular} \end{table} Table 1: Melodic-Design vs. other datasets. I, A, and T correspond to image, audio, and text respectively. “Various” for Melodic-Design covers illustrations, vectors, templates, and background designs. C, R, G, MR represent classification, retrieval, generation, and music retrieval respectively. Figure 2: Comparison of time-frequency chroma features of audio of happy, sad, upbeat and calm moods. Colour (dark to light) represents the energy content (low to high) in each time-frequency bin. ## 4 Multi-modal cross-attention audio retrieval (MMCAR) Here, we describe our proposed algorithm for retrieving audio files given an input design document. Our key algorithmic novelty is a multi-modal cross-attention module that operates on feature vectors from all three input modalities (image, text, and audio) and learns a common representation space for the downstream retrieval task. Figure 3 visually summarizes our proposed algorithm. During training, given input triplets from our MELON dataset, we first use per-modality embedding extractors to compute feature vectors \(\mathbf{i}\), \(\mathbf{t}\), and \(\mathbf{a}\) for the image, text, and audio modalities respectively. For \(\mathbf{i}\) and \(\mathbf{t}\), we use the CLIP [9] model to obtain 512-dimensional embeddings each. For \(\mathbf{a}\), we train a Resnet-18 model (for audio classification) on the publicly available VGGSound dataset. Given the \(\mathbf{i}\), \(\mathbf{t}\), and \(\mathbf{a}\) embeddings, we propose a multi-modal cross-attention operation to learn a unified multi-modal design-audio embedding. Given the three feature vectors, we perform pairwise cross-attention pooling taking any two modalities \(\mathbf{x},\mathbf{y}\in\{\mathbf{i},\mathbf{t},\mathbf{a}\}\) such that \(\mathbf{x}\in\mathcal{R}^{d}\) is the query and \(\mathbf{y}\in\mathcal{R}^{d}\) is the key, resulting in an output \(\hat{\mathbf{x}}\). Similarly, by interchanging \(\mathbf{x}\) and \(\mathbf{y}\), we compute the output \(\hat{\mathbf{y}}\). These two outputs are then used to compute a common embedding \(\mathbf{u}_{xy}\) for this particular pair of \(x\) and \(y\). We repeat this for all the possible pairs (\((x=i,y=t),(x=i,y=a),(x=t,y=a)\)), and use the corresponding outputs to obtain the proposed unified embedding \(\mathbf{u}_{\text{all}}\) as: \[\begin{split}\mathbf{C}_{xy}&=\mathbf{x}\mathbf{y}^{T}\in\mathcal{R}^{d\times d}\quad\text{and}\quad\mathbf{C}_{yx}=\mathbf{y}\mathbf{x}^{T}\\ \mathbf{S}_{x}&=\sigma(\mathbf{C}_{xy}*\mathbf{W}+\mathbf{B})\in\mathcal{R}^{d\times d}\\ \hat{\mathbf{x}}&=\mathrm{diag}(\mathbf{S}_{x}\mathbf{C}_{xy}^{T})\end{split}\tag{1}\] \[\begin{split}\mathbf{S}_{y}&=\sigma(\mathbf{C}_{yx}*\mathbf{W}+\mathbf{B})\in\mathcal{R}^{d\times d}\\ \hat{\mathbf{y}}&=\mathrm{diag}(\mathbf{S}_{y}\mathbf{C}_{yx}^{T})\\ \mathbf{u}_{xy}&=\hat{\mathbf{x}}\oplus\hat{\mathbf{y}}\in\mathcal{R}^{2d\times 1}\end{split}\tag{2}\] \[\mathbf{u}_{all}=\mathbf{u}_{it}\oplus\mathbf{u}_{ia}\oplus\mathbf{u}_{ta}\in\mathcal{R}^{6d\times 1}\tag{3}\] This unified embedding \(\mathbf{u}_{\text{all}}\) is then passed to a fully connected neural network unit which generates, with a sigmoid operation, a scalar score \(\hat{z}\) in the range \([0,1]\).
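For concreteness, here is a minimal, unbatched PyTorch sketch of Eqs. (1)-(3) and the scoring head. Reading the \(*\) in Eqs. (1)-(2) as an element-wise product with learned \(\mathbf{W},\mathbf{B}\), sharing those parameters across the three modality pairs, and the random initialization are our assumptions, as the paper does not fix these details.

```python
import torch
import torch.nn as nn

class PairCrossAttention(nn.Module):
    """Pairwise cross-attention of Eqs. (1)-(2)."""
    def __init__(self, d=512):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d, d) * 0.02)  # learned W, (d, d)
        self.B = nn.Parameter(torch.zeros(d, d))         # learned B, (d, d)

    def attend(self, x, y):
        C = torch.outer(x, y)                    # C_xy = x y^T, (d, d)
        S = torch.sigmoid(C * self.W + self.B)   # S_x, (d, d)
        return torch.diagonal(S @ C.T)           # diag(S_x C_xy^T), (d,)

    def forward(self, x, y):
        # u_xy = x_hat (+) y_hat, a (2d,) vector
        return torch.cat([self.attend(x, y), self.attend(y, x)])

class MMCAR(nn.Module):
    """Unified embedding of Eq. (3) followed by the scoring head."""
    def __init__(self, d=512):
        super().__init__()
        self.cross = PairCrossAttention(d)
        self.head = nn.Sequential(nn.Linear(6 * d, 1), nn.Sigmoid())

    def forward(self, i, t, a):
        u_all = torch.cat([self.cross(i, t),
                           self.cross(i, a),
                           self.cross(t, a)])    # (6d,)
        return self.head(u_all).squeeze(-1)      # scalar score in [0, 1]
```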
This score is compared with the ground-truth score \(z=1\) (if the input is a correct pair) and \(z=0\) otherwise, resulting in the binary cross-entropy training objective \[L=-\frac{1}{B}\sum_{i=1}^{B}\left[z_{i}\log(\hat{z}_{i})+(1-z_{i})\log(1-\hat{z}_{i})\right]\tag{4}\] where \(B\) is the batch size. During inference, given a design document input and a database of \(n\) audio samples from which to retrieve relevant files, our model computes the similarity scores \(\hat{z}_{1},\hat{z}_{2},...,\hat{z}_{n}\) for the input image-text pair with all the \(n\) audio samples. Given these scores, we pick the audio files corresponding to the top-\(k\) highest scores as the retrieval results (see Fig 3 (right)). ## 5 Experiments and results Since our proposed problem, data, and algorithm are centered around audio retrieval given image-text design documents, the closest baselines in the literature include JTAV [1] and Wav2CLIP [10]. While JTAV operates on an image, its caption, and the textual lyrics of an audio to learn features, Wav2CLIP uses the CLIP [9] image encoder and an audio autoencoder to map audio and image features close together. To benchmark the performance of these algorithms and compare them to our proposed method on our new dataset, we use rank-based evaluation metrics proposed in prior work [1]. In particular, we use the _Med r_\(\in[1,M]\) metric that represents the median rank of the ground-truth retrieved audio, where \(M\) is the maximum number of classes considered. A lower value of _Med r_ indicates better performance. We also use recall@k (\(k=\{1,5,10,15,20\}\)), the fraction of ground-truth audios retrieved in the top-k ranked items across all test cases; higher values indicate better performance.
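As a concrete reference, a small numpy sketch of these two metrics is given below (the array shapes and names are illustrative, not from the paper):

```python
import numpy as np

def median_rank_and_recall(scores, gt_index, ks=(1, 5, 10, 15, 20)):
    """scores: (num_cases, M) similarity scores against the M candidate
    moods; gt_index: (num_cases,) ground-truth mood per test case."""
    order = np.argsort(-scores, axis=1)              # best-scored mood first
    # 1-based rank of the ground-truth mood in each ranking.
    ranks = np.argmax(order == gt_index[:, None], axis=1) + 1
    med_r = np.median(ranks)                         # Med r
    recall = {k: float(np.mean(ranks <= k)) for k in ks}  # recall@k
    return med_r, recall
```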
We then compute the recall@k and _Med r_ metrics using the reference mood label. In Table 3, we compare the performance of our proposed MMCAR algorithm with JTAV and Wav2CLIP baselines on both val and test sets. One can note that our proposed MMCAR gives the lowest \(Med\ r\) of \(3.9\) and the highest recall accuracy at all ranks, e.g., \(79\%\) accuracy at k=5 on the test set compared to JTAV's \(30\%\) and Wav2CLIP's \(39\%\), accounting for more than \(100\%\) relative improvements. For an even more fair comparison, we also implemented our MMCAR with Wav2CLIP's features (noted MMCAR* in table). While MMCAR* leads to performance degradation when compared to our original, end-to-end-learned MMCAR model, it is still substantially better than the baseline Wav2CLIP approach. This provides evidence for our multimodal cross-attention module's discriminative capabilities in the shared image-text-audio feature space. To provide additional evidence, we show t-SNE plots of the learned design document embeddings in Figure 5 where one can see a clearer clustering, compared to baseline approaches, of the features according to the mood using the proposed MMCAR approach. In Figure 4, we compare MMCAR's confusion matrix with the baseline ones for a random selection of 14 moods, where one can note while baseline predictions are biased towards a few specific moods, the proposed method has a close-to-diagonal matrix as expected. ## 6 Summary We considered and proposed a new problem of retrieving relevant audio files given multimodal design documents as input. In the absence of any relevant datasets in the literature, we built and presented a first-of-its-kind multimodal design-audio dataset called MELON comprising hundreds of thousands of design files with mapped audio files. We then proposed a multimodal cross attention algorithm that enables training neural networks to learn a joint image-text-audio feature space for design documents and used it to retrieve relevant audios given a certain design input at test time. We benchmarked our algorithm against the existing state of the art on our new dataset and hope that this will spur further research in this area. ## 7 Acknowledgements The authors would like to thank Dr. Sriram Ganapathy of LEAP Lab, Indian Institute of Science, Bangalore, for his valuable input and help in offering the required resources to run the experiments. Figure 4: Confusion matrix of proposed MMCAR vs. baseline JTAV and Wav2CLIP. Figure 5: t-SNE plots of baselines vs. proposed approach. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{\(Med\ r\)} & \multicolumn{6}{c}{Recall@k} \\ \cline{3-7} & & k=1 & k=5 & k=10 & k=15 & k=20 \\ \hline \multicolumn{7}{c}{Val} \\ \hline JTAV [1] & 17.1 & 0.12 & 0.30 & 0.35 & 0.46 & 0.62 \\ Wav2CLIP [10] & 9.2 & 0.12 & 0.37 & 0.66 & 0.83 & 0.92 \\ **MMCAR (Iours)** & **3.8** & **0.42** & **0.80** & **0.92** & **0.96** & **0.98** \\ MMCAR* & 7.0 & 0.23 & 0.59 & 0.77 & 0.87 & 0.93 \\ \hline \hline \multicolumn{7}{c}{Test} \\ \hline JTAV [1] & 16.9 & 0.12 & 0.30 & 0.35 & 0.45 & 0.62 \\ Wav2CLIP [10] & 9.2 & 0.11 & 0.39 & 0.67 & 0.83 & 0.91 \\ **MMCAR [Ours]** & **3.9** & **0.42** & **0.79** & **0.92** & **0.96** & **0.98** \\ MMCAR* & 7.2 & 0.22 & 0.56 & 0.76 & 0.89 & 0.93 \\ \hline \end{tabular} \end{table} Table 3: Proposed MMCAR vs. baselines.
2303.18244
Speeding up Madgraph5 aMC@NLO through CPU vectorization and GPU offloading: towards a first alpha release
The matrix element (ME) calculation in any Monte Carlo physics event generator is an ideal fit for implementing data parallelism with lockstep processing on GPUs and vector CPUs. For complex physics processes where the ME calculation is the computational bottleneck of event generation workflows, this can lead to large overall speedups by efficiently exploiting these hardware architectures, which are now largely underutilized in HEP. In this paper, we present the status of our work on the reengineering of the Madgraph5_aMC@NLO event generator at the time of the ACAT2022 conference. The progress achieved since our previous publication in the ICHEP2022 proceedings is discussed, for our implementations of the ME calculations in vectorized C++, in CUDA and in the SYCL framework, as well as in their integration into the existing MadEvent framework. The outlook towards a first alpha release of the software supporting QCD LO processes usable by the LHC experiments is also discussed.
Andrea Valassi, Taylor Childers, Laurence Field, Stephan Hageböck, Walter Hopkins, Olivier Mattelaer, Nathan Nichols, Stefan Roiser, David Smith, Jorgen Teig, Carl Vuosalo, Zenny Wettersten
2023-03-31T17:58:23Z
http://arxiv.org/abs/2303.18244v2
# Speeding up Madgraph5_aMC@NLO through CPU vectorization and GPU offloading: towards a first alpha release ###### Abstract The matrix element (ME) calculation in any Monte Carlo physics event generator is an ideal fit for implementing data parallelism with lockstep processing on GPUs and vector CPUs. For complex physics processes where the ME calculation is the computational bottleneck of event generation workflows, this can lead to large overall speedups by efficiently exploiting these hardware architectures, which are now largely underutilized in HEP. In this paper, we present the status of our work on the reengineering of the Madgraph5_aMC@NLO event generator at the time of the ACAT2022 conference. The progress achieved since our previous publication in the ICHEP2022 proceedings [1] is discussed, for our implementations of the ME calculations in vectorized C++, in CUDA and in the SYCL framework, as well as in their integration into the existing MadEvent framework. The outlook towards a first alpha release of the software supporting QCD LO processes usable by the LHC experiments is also discussed. ## 1 Introduction Computing architectures designed for data parallelism, such as CPUs with vector registers and GPUs, are now ubiquitous in the computing resources used for the data processing of High Energy Physics (HEP) experiments, such as the High Performance Computing (HPC) centers available to the Large Hadron Collider (LHC) experiments and the sites of the Worldwide LHC Computing Grid (WLCG). The full compute power of GPUs and vector CPUs, however, is often underexploited in HEP processing, partly because the software is old and was designed before these architectures became mainstream, but also because many HEP workflows involve a lot of stochastic branching and are therefore intrinsically difficult to port to data parallel paradigms, one notable example being detector simulation. Monte Carlo (MC) matrix element generators, conversely, are an ideal fit to exploit these architectures. This is because the calculation of scattering amplitudes and matrix elements (MEs), which is the computational bottleneck of these programs for complex physics processes, involves the repeated execution of the same functions on different data items (the various "events" randomly generated by MC sampling), and it is possible to achieve a perfect lockstep processing in its data parallel execution. **Version 1.0 (31 March 2023)** Our work on the reengineering of the Madgraph5_aMC@NLO (MG5aMC) event generator [2] follows precisely this approach. As described in our previous proceedings of the vCHEP2021 [3] and ICHEP2022 [1] conferences, our new implementation of the ME calculation in CUDA and vectorized C++ achieves lockstep processing with 100% branch efficiency on NVidia GPUs and the maximum theoretically possible SIMD speedups (x8 and x16 in double and single floating point precision for AVX512/zmm) on vector CPUs. In this paper, we mainly document the results presented at the ACAT2022 conference (October 2022), where we had reported the progress achieved in the few months since ICHEP2022 (July 2022).
This includes in particular some performance tests of the vectorized C++ implementation using all cores of a CPU rather than a single CPU core, some performance improvements for the serial component of the overall workflow, the implementation of a new "mixed" precision mode where both single and double floating point precision are used for different parts of the ME calculation, and the full integration into the existing MadEvent framework of the ME calculation implemented using SYCL. We also briefly mention a few new results achieved since ACAT2022 at the time of writing (March 2023), which will be described in more detail in future talks and papers. ## 2 Speeding up the serial component of the MadEvent framework As we previously described in our ICHEP2022 proceedings [1], our strategy for delivering to the LHC experiments a software application that they can run to generate samples of events, with well-known user interfaces and identical physics output but at a fraction of current computational costs, is based on injecting one of our new data-parallel implementations (in CUDA/C++ or SYCL) of the ME calculation into the existing MadEvent framework, replacing only the previous scalar Fortran implementation of the same ME calculation. The "outer shell" of the MadEvent framework, which is also implemented in Fortran, takes care of all tasks other than the ME calculation, which we will collectively refer to as the "non-ME serial component" of MadEvent: this includes, amongst other things, the generation of pseudo-random numbers, their mapping to particle momenta using a well defined sampling strategy (based on the MadEvent single-diagram enhancement multichannel algorithm [4]), the merging of multi-jet final states (for instance using the so-called "MLM" scheme [5, 6]), the execution of the hit-or-miss unweighting algorithm, the calculation of cross sections and the I/O intensive writing of LHE event data files. While all these tasks only account for a few percent of the overall wall-clock time when the Fortran serial MEs are used, the situation changes dramatically when the much faster (one to three orders of magnitude) CUDA/C++ or SYCL data-parallel MEs based on CPU vectorization or GPUs are used, as the MadEvent non-ME serial component quickly becomes the bottleneck. In the results presented at ICHEP2022 for the \(gg\!\rightarrow\!t\bar{t}gg\) process, for instance, we had reported that generating 90k weighted events took 58.3 seconds overall (5.2s in the MadEvent non-ME serial component and 53.1s in the ME calculation, see Table 2 in Ref. [1]) using Fortran MEs, but only 6.1 seconds overall (5.7s non-ME and 0.36s MEs) using double-precision CUDA MEs. In other words, the factor \(\sim\)200 speedup in the ME calculation only led to an overall speedup by a factor \(\sim\)10: this is the limit predicted by Amdahl's law [7] since the serial non-ME component was originally 5.2s/58.3s, i.e. approximately 10% of the overall processing time. 
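As a back-of-the-envelope check (our rewriting of Amdahl's argument, not a formula from the paper): writing \(t_{\rm TOT}=t_{\rm Mad}+t_{\rm MEs}\) for the total time and \(S_{\rm MEs}\) for the ME speedup,
\[S_{\rm TOT}=\frac{t_{\rm Mad}+t_{\rm MEs}}{t_{\rm Mad}+t_{\rm MEs}/S_{\rm MEs}}\;\leq\;\frac{t_{\rm Mad}+t_{\rm MEs}}{t_{\rm Mad}}=\frac{58.3}{5.2}\approx 11,\]
so even an infinitely fast ME calculation could not have pushed the overall ICHEP2022 speedup much beyond the observed factor \(\sim\)10.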
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \cline{3-6} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{madevent} & \multicolumn{2}{c|}{standalone} \\ \hline \multicolumn{2}{|c|}{CUDA grid size} & \multicolumn{4}{c|}{8192} & \multicolumn{2}{c|}{524288} \\ \hline \(gg\!\rightarrow\!t\bar{t}gg\) & MEs & \(t_{\rm TOT}=t_{\rm Mad}+t_{\rm MEs}\) & \(N_{\rm events}/t_{\rm TOT}\) & \multicolumn{2}{c|}{\(N_{\rm events}/t_{\rm MEs}\)} \\ & precision & [sec] & [events/sec] & \multicolumn{2}{c|}{[MEs/sec]} \\ \hline Fortran & double & \(55.4=2.4\) + 53.0 & 1.63E3 (=1.0) & 1.70E3 (=1.0) & — & — \\ \hline CUDA & double & \(2.9=2.6\) + 0.35 & 3.06E4 (x18.8) & 2.60E5 (x152) & 2.62E5 & 4.21E5 (x247) \\ \hline CUDA & float & \(2.8=2.6\) + 0.24 & 3.24E4 (x19.9) & 3.83E5 (x225) & 3.96E5 & 8.77E5 (x516) \\ \hline \end{tabular} \end{table} Table 1: Processing times and throughputs for 90112 \(gg\!\rightarrow\!t\bar{t}gg\) weighted events. One core of a CERN VM (Intel Silver 4216 CPUs, one NVidia V100 GPU), cuda11.7 and gcc11.2 builds. See Ref. [1] for further details, e.g. on the difference between the madevent and standalone columns. Our new ACAT2022 results for the same process are given in Table 1: the generation workflow in the madevent executable now takes 55.4 seconds overall (2.4s non-ME and 53.0s MEs) using Fortran MEs, but only 2.9 seconds overall (2.6s non-ME and 0.35s MEs) using double-precision CUDA MEs, i.e. a factor two faster than in the ICHEP2022 results. The difference between the two sets of results is only in the MadEvent non-ME serial component, which is now a factor two faster, while the speed of the CUDA ME calculation is essentially unchanged. The overall speedup from Fortran to CUDA is now \(\sim\)20, as predicted by Amdahl's law since the serial component was originally 2.4s/55.4s, i.e. approximately 5% of the overall processing time. To explain this speed-up, we recall [1] that the original MadEvent framework, which was looping through individual events and executing the full processing chain (random sampling of momenta, computing MEs, unweighting, multi-jet merging etc.) one event at a time, had to be modified to allow the data-parallel calculation of MEs on a large batch of events at the same time: this naturally led to the introduction of large Fortran arrays to keep all relevant properties of all the events in that batch. In this particular case, the speedup of the serial non-ME component from 5.2s to 2.4s was obtained by rationalizing the handling of MLM multi-jet merging, and in particular by moving most of its processing before the ME calculation, which made it possible to completely get rid of some very large Fortran arrays that had been introduced in the initial transformation of MadEvent from a single-event to a multi-event processing framework.
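The single-event to multi-event transformation can be illustrated with a toy example (plain numpy with a made-up stand-in for the ME formula, not MG5aMC code): the same arithmetic is applied either one event at a time or on a whole batch at once, the latter being the data layout that enables SIMD and GPU lockstep processing.

```python
import numpy as np

def me_single(p):
    # p: (4,) four-momentum-like array for one event (toy formula only).
    return np.abs(np.sin(p[0]) * np.cos(p[1]) + p[2] * p[3])

def me_batch(P):
    # P: (n_events, 4); the same arithmetic applied to all events in
    # lockstep, with no per-event branching.
    return np.abs(np.sin(P[:, 0]) * np.cos(P[:, 1]) + P[:, 2] * P[:, 3])

P = np.random.rand(8192, 4)                      # a batch of toy events
assert np.allclose(me_batch(P), np.array([me_single(p) for p in P]))
```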
Speeding up the MadEvent serial non-ME component is especially important when offloading the ME calculation to a GPU, but it remains relevant when MEs are computed on vector CPUs. For instance, our new results for generating 80k \(gg\!\rightarrow\!t\bar{t}gg\) events on an Intel Gold 6148 CPU, which are given in Table 2, show that the overall workflow now takes 6.1 seconds (1.8s non-ME and 4.3s MEs) using our "512z" vectorization level (AVX512 with zmm registers [1]), while at ICHEP2022 we had reported that the same workflow on the same machine took 7.1 seconds (2.5s non-ME and 4.5s MEs, see Table 1 in Ref. [1]). Again, the difference between the two sets of results mainly comes from the MadEvent serial non-ME component, but the effect of Amdahl's law is less pronounced for C++ than for CUDA, as the ME calculation is still the bottleneck. While this speed-up in MadEvent is already an important achievement, we think that this is just the first step and that there is still much potential for further performance improvements. Further rationalizations of the use of large Fortran arrays may still be possible. In addition, we are investigating ways to speed up the MadEvent serial non-ME component by parallelizing it at least in part. One idea, for instance, is to offload to the GPU (or vectorize on the CPU) some parts of the computation, such as the mapping from random numbers to momenta in the sampling algorithm, or the unweighting process. Another possible approach, which represents a truly heterogeneous processing scenario, would consist in running several copies of the madevent application in parallel on different CPU threads, while sharing the GPU amongst them for the ME calculation. \begin{table} \begin{tabular}{|l|l|c|c|c|c|} \cline{3-6} \multicolumn{1}{c|}{} & & \multicolumn{2}{c|}{madevent} & standalone \\ \hline \multirow{2}{*}{\(gg\!\rightarrow\!t\bar{t}gg\)} & MEs & \(t_{\rm TOT}=t_{\rm Mad}+t_{\rm MEs}\) & \(N_{\rm events}/t_{\rm TOT}\) & \(N_{\rm events}/t_{\rm MEs}\) \\ & precision & [sec] & [events/sec] & [MEs/sec] \\ \hline Fortran(scalar) & double & 37.3 = 1.7 + 35.6 & 2.20E3 (=1.0) & 2.30E3 (=1.0) & — \\ \hline C++/none(scalar) & double & 37.8 = 1.7 + 36.0 & 2.17E3 (x1.0) & 2.28E3 (x1.0) & 2.37E3 \\ C++/sse4(128-bit) & double & 19.4 = 1.7 + 17.8 & 4.22E3 (x1.9) & 4.62E3 (x2.0) & 4.75E3 \\ C++/avx2(256-bit) & double & 9.5 = 1.7 + 7.8 & 8.63E3 (x3.9) & 1.05E4 (x4.6) & 1.09E4 \\ C++/512y(256-bit) & double & 8.9 = 1.8 + 7.1 & 9.29E3 (x4.2) & 1.16E4 (x5.0) & 1.20E4 \\ C++/512z(512-bit) & double & 6.1 = 1.8 + 4.3 & 1.35E4 (x6.1) & 1.91E4 (x8.3) & 2.06E4 \\ \hline C++/none(scalar) & float & 36.6 = 1.8 + 34.9 & 2.24E3 (x1.0) & 2.35E3 (x1.0) & 2.45E3 \\ C++/sse4(128-bit) & float & 10.6 = 1.7 + 8.9 & 7.76E3 (x3.6) & 9.28E3 (x4.1) & 9.21E3 \\ C++/avx2(256-bit) & float & 5.7 = 1.8 + 3.9 & 1.44E4 (x6.6) & 2.09E4 (x9.1) & 2.13E4 \\ C++/512y(256-bit) & float & 5.3 = 1.8 + 3.6 & 1.54E4 (x7.0) & 2.30E4 (x10.0) & 2.43E4 \\ C++/512z(512-bit) & float & 3.9 = 1.8 + 2.1 & 2.10E4 (x9.6) & 3.92E4 (x17.1) & 3.77E4 \\ \hline \end{tabular} \end{table} Table 2: Processing times and throughputs for 81952 \(gg\!\rightarrow\!t\bar{t}gg\) weighted events. One core of Juwels Cluster login node jwlogin07 (Intel Gold 6148 CPUs), gcc11.2 builds. See Ref. [1] for further details, e.g. on the five different vectorization levels (none, sse4, avx2, 512y, 512z). In addition to speeding up the MadEvent non-ME component by parallelizing it amongst different CPU cores, another advantage of this approach is that it could allow a decrease in the RAM footprint of each madevent process on the CPU (which is problematic as discussed in Ref. [1]), as it should be possible to achieve the same overall occupancy of the GPU while decreasing the number of events computed in parallel by a single madevent process, i.e. its CUDA grid size. The results of a preliminary test relevant to this approach are displayed in Fig. 1, which shows the variation of the combined ME throughput achievable from a single NVidia V100 GPU when this is shared by up to 8 processes running in parallel on different CPU threads.
The notable effect that we were hoping to see, and which is indeed achieved, is that the throughput curve moves to the left as the number of CPU processes increases, while still reaching the same combined throughput plateau at the end: this means that the maximum GPU throughput may be reached by running many CPU applications with smaller CUDA grid sizes, rather than a single application with a very large grid size. Another positive result, which however we were not anticipating and will deserve more in-depth analysis, is the fact that the maximum combined GPU throughput actually increases by almost 50% when launching kernels from different CPU threads. It should be stressed that this plot, which was obtained using the infrastructure developed for the HEP-SCORE benchmarking project [8], refers to the "standalone" application [1] where the ME calculation is not yet integrated in the full MadEvent workflow: in the future, we plan to repeat similar studies using the full MadEvent workflows, which would represent a more realistic test of a production-like heterogeneous scenario. Figure 1: Total combined throughput for the \(gg\!\rightarrow\!t\bar{t}gg\) process using 1, 2, 4 or 8 copies of our standalone application (see Ref. [1]), as a function of the CUDA grid size (number of blocks per grid times number of threads per block, where the latter is fixed to 256). Figure 2: Total combined throughput for the \(gg\!\rightarrow\!t\bar{t}gg\) process as a function of the number of copies of our standalone application, in our usual five C++ vectorization scenarios. The y-axis represents the ratio of the achieved throughput to a reference with no vectorization and a single CPU process. For reference, the range of values of the absolute throughputs is also shown. ## 3 Further performance tests and improvements in the ME calculation In parallel to our efforts to understand and speed up the MadEvent serial non-ME component, we have also continued to pursue further improvements and analyses of the ME calculations. To start with, based on the same benchmarking infrastructure that we used to produce Fig. 1 for the CUDA back-end, we analysed the performance of our vectorized C++ back-end when several CPU cores are used. This differs from the results that we presented in our previous papers as well as in Table 2 above, which all refer to a single CPU core. The results of this test are given in Fig. 2. One effect that is immediately visible is that the AVX512/zmm throughputs (purple line) continue to be significantly faster than the AVX512/ymm (red) and AVX2 (green) throughputs even when many cores are used, but not by a factor two. This may be due to a clock slowdown, but we have not verified it. With respect to the non-vectorized throughput on a single core, the overall speedup of AVX512/zmm with 32 processes (the number of physical cores in this Intel Gold 6130 CPU) using single floating point precision is approximately 300, compared to a theoretical maximum of 512 (32 times 16), which seems quite satisfactory. Another advance in the CUDA/C++ back-end has been the addition of a "mixed" floating precision mode, where Feynman diagrams are computed in double precision, while the "color algebra" part of the ME calculation is done in single precision.
The rationale for this approach is that floats provide approximately a factor two speedup over doubles both in vectorized C++ (because twice as many floats as doubles fit into the same vector register) and in CUDA (because typical NVidia data center cards have twice as many FLOPs for FP32 as for FP64), but single precision does not provide enough numerical precision for the Feynman diagram part of the ME calculation. The idea was to test whether single precision could at least be used for the "color algebra": our tests confirmed that the same cross sections could be obtained within \(\sim\)\(10^{-5}\) in this case, which seems enough. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \cline{3-6} \multicolumn{1}{c}{} & & \multicolumn{3}{|c|}{madevent} & standalone \\ \hline \(gg\!\rightarrow\!t\bar{t}ggg\) & MEs & \(t_{\rm TOT}=t_{\rm Mad}+t_{\rm MEs}\) & \(N_{\rm events}/t_{\rm TOT}\) & \(N_{\rm events}/t_{\rm MEs}\) \\ & precision & [sec] & [events/sec] & [MEs/sec] \\ \hline Fortran(scalar) & double & 813.2 = 3.7 + 809.6 & 1.01E2 (=1.0) & 1.01E2 (=1.0) & — \\ \hline C++/none(scalar) & double & 986.0 = 4.3 + 981.7 & 8.31E1 (x0.8) & 8.35E1 (x0.8) & 9.82E1 \\ C++/sse4(128-bit) & double & 514.7 = 4.2 + 510.5 & 1.59E2 (x1.6) & 1.61E2 (x1.6) & 1.95E2 \\ C++/avx2(256-bit) & double & 231.6 = 4.0 + 227.6 & 3.54E2 (x3.5) & 3.60E2 (x3.6) & 4.41E2 \\ C++/512y(256-bit) & double & 208.6 = 3.9 + 204.8 & 3.93E2 (x3.9) & 4.00E2 (x4.0) & 4.95E2 \\ C++/512z(512-bit) & double & 124.6 = 4.0 + 120.6 & 6.58E2 (x6.5) & 6.79E2 (x6.7) & 8.65E2 \\ \hline C++/none(scalar) & float & 936.1 = 4.3 + 931.8 & 8.75E1 (x0.9) & 8.79E1 (x0.9) & 1.02E2 \\ C++/sse4(128-bit) & float & 228.9 = 3.9 + 225.0 & 3.58E2 (x3.6) & 3.64E2 (x3.6) & 4.30E2 \\ C++/avx2(256-bit) & float & 114.1 = 3.8 + 110.4 & 7.18E2 (x7.2) & 7.43E2 (x7.4) & 9.06E2 \\ C++/512y(256-bit) & float & 104.5 = 3.8 + 100.7 & 7.84E2 (x7.9) & 8.14E2 (x8.1) & 1.00E3 \\ C++/512z(512-bit) & float & 61.8 = 3.8 + 58.0 & 1.33E3 (x13.3) & 1.41E3 (x14.1) & 1.77E3 \\ \hline C++/none(scalar) & mixed & 986.0 = 4.3 + 981.6 & 8.31E1 (x0.8) & 8.35E1 (x0.8) & 9.98E1 \\ C++/sse4(128-bit) & mixed & 500.4 = 3.9 + 496.5 & 1.64E2 (x1.6) & 1.65E2 (x1.6) & 2.00E2 \\ C++/avx2(256-bit) & mixed & 220.5 = 3.8 + 216.7 & 3.72E2 (x3.7) & 3.78E2 (x3.8) & 4.55E2 \\ C++/512y(256-bit) & mixed & 195.6 = 3.7 + 191.8 & 4.19E2 (x4.2) & 4.27E2 (x4.3) & 5.21E2 \\ C++/512z(512-bit) & mixed & 118.5 = 3.8 + 114.7 & 6.92E2 (x6.9) & 7.15E2 (x7.2) & 8.97E2 \\ \hline \end{tabular} \end{table} Table 4: Processing times and throughputs for 81952 \(gg\!\rightarrow\!t\bar{t}ggg\) weighted events. One core of Juwels Cluster login node jwlogin07 (Intel Gold 6148 CPUs), gcc11.2 builds.
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \cline{3-6} \multicolumn{1}{c}{} & & \multicolumn{3}{|c|}{madevent} & standalone \\ \hline \multicolumn{2}{|c|}{CUDA grid size} & \multicolumn{3}{|c|}{8192} & \multicolumn{3}{|c|}{16384} \\ \hline \(gg\!\rightarrow\!t\bar{t}ggg\) & MEs & \(t_{\rm TOT}=t_{\rm Mad}+t_{\rm MEs}\) & \(N_{\rm events}/t_{\rm TOT}\) & \multicolumn{3}{|c|}{\(N_{\rm events}/t_{\rm MEs}\)} \\ & precision & [sec] & [events/sec] & \multicolumn{3}{|c|}{[MEs/sec]} \\ \hline Fortran & double & 1228.2 = 5.0 + 1223.2 & 7.34E1 (=1.0) & 7.37E1 (=1.0) & — & — \\ \hline CUDA & double & 19.6 = 7.4 + 12.1 & 4.61E3 (x63) & 7.44E3 (x100) & 9.10E3 & 9.51E3 (x129) \\ \hline CUDA & float & 11.7 = 6.2 + 5.4 & 7.73E3 (x105) & 1.66E4 (x224) & 1.68E4 & 2.41E4 (x326) \\ \hline CUDA & mixed & 16.5 = 7.0 + 9.6 & 5.45E3 (x74) & 9.43E3 (x128) & 1.10E4 & 1.19E4 (x161) \\ \hline \end{tabular} \end{table} Table 3: Processing times and throughputs for 90112 \(gg\!\rightarrow\!t\bar{t}ggg\) weighted events. One core of a CERN VM (Intel Silver 4216 CPUs, one NVidia V100 GPU), cuda11.7 and gcc11.2 builds. Our throughput results for the \(gg\!\rightarrow\!t\bar{t}ggg\) process are shown in Table 3 for CUDA and Table 4 for vectorised C++. While encouraging, these results are still preliminary and we plan to pursue further tests of this approach. ## 4 SYCL-based developments and C++ compiler studies While all tables and plots presented so far in this paper refer to our original CUDA/C++ implementation, significant progress has also been achieved on various fronts in our parallel implementations using performance portability frameworks. Most recently, this work has focused on the SYCL implementation, while the developments using Kokkos have slowed down and those based on Alpaka have stopped. As noted in Ref. [1], the main interest of these APIs is that a single code base, with a few back-end-specific customizations, may be executed on many architectures, including GPUs from different vendors such as NVidia, AMD and Intel. This is shown in Fig. 3, which compares the performances of our CUDA, SYCL and Kokkos implementations on different systems; compared to previous results [1], this ACAT2022 plot is interesting because it also includes results on Intel XE-HPC, which is an early implementation of the Aurora GPU. A notable achievement reported at ACAT2022 is that the SYCL implementation of the ME calculation is now also fully integrated into MadEvent, which means for instance that we are able to produce cross-sections and LHE event data files by offloading the ME calculation to AMD or Intel GPUs, rather than using the Fortran CPU implementation. A more recent development, which started well after ACAT2022, is that a vectorized SYCL implementation for CPU has also been prototyped. Preliminary tests indicate that this achieves a promising performance, with throughputs which sometimes exceed those of the gcc builds of the CUDA/C++ implementation: while this is not yet understood and will require further studies, it is likely that this may be due at least in part to the fact that the SYCL implementation is built using the clang-based icx Intel compiler.
In fact, as shown in Fig. 4, which presents a recent [9] performance comparison between many builds of the CUDA/C++ implementation using different C++ compilers, we have observed that the performance of icx builds is almost the same as that of clang builds, which can be significantly better than that of gcc builds in some cases (more than a factor 2 faster with AVX512/zmm vectorization and aggressive inlining); these results are however preliminary and will need more in-depth analysis. It is also interesting to note that, while our CUDA/C++ implementation of vectorization is based on gcc and clang compiler vector extensions, our SYCL version uses the sycl::vec type, which is itself implemented as a wrapper over clang vector extensions: in other words, compiler vector extensions are ultimately used for CPU vectorization in both of our CUDA/C++ and SYCL implementations. Figure 3: Comparison of the CUDA, Kokkos and SYCL ME engines for \(gg\!\rightarrow\!t\bar{t}gg\) on many GPUs, using the standalone application (with optimal GPU grid sizes at the throughput plateau). “Xe-HP SDV” is a Software Development Vehicle for functional testing only, currently used at Argonne and at other customer sites to prepare their code for future Intel data centre GPUs. “XE-HPC” is an early implementation of the Aurora GPU. The throughput achieved on a full Xeon 8180 CPU using SYCL and Kokkos multi-threading is also shown for reference. ## 5 Outlook: towards a first alpha release Finally, the most important progress we achieved since ACAT2022 is that we completed the implementation of the event-by-event random choice of leading colors and helicities in LHE files. This was the last missing piece before we could provide in the CUDA/C++ MadEvent framework the full set of features needed by the LHC experiments for unweighted event generation. This functionality is now essentially complete, but we are still performing some final tests, also to understand its impact on performance; in particular, this feature introduces a minor level of stochastic branching in the ME workflow, degrading lockstep processing both on GPUs and on vector CPUs (it is possible that this effect is already visible in Fig. 4, which was prepared using this more recent code base). We are now working towards repackaging our work to provide a first alpha release for the experiments, which we plan to achieve during Q2 2023. ## Acknowledgements This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility under contract DE-AC02-06CH11357, and of the Joint Laboratory for System Evaluation (JLSE) at Argonne National Laboratory. We also gratefully acknowledge the use of computing resources at CINECA under ISCRA-C project MG5A100 and at the Julich Supercomputing Centre at Forschungszentrum Julich under PRACE-DEV-2022D01-022.
2309.05528
On the detection of Out-Of-Distribution samples in Multiple Instance Learning
The deployment of machine learning solutions in real-world scenarios often involves addressing the challenge of out-of-distribution (OOD) detection. While significant efforts have been devoted to OOD detection in classical supervised settings, the context of weakly supervised learning, particularly the Multiple Instance Learning (MIL) framework, remains under-explored. In this study, we tackle this challenge by adapting post-hoc OOD detection methods to the MIL setting while introducing a novel benchmark specifically designed to assess OOD detection performance in weakly supervised scenarios. Across extensive experiments based on diverse public datasets, KNN emerges as the best-performing method overall. However, it exhibits significant shortcomings on some datasets, emphasizing the complexity of this under-explored and challenging topic. Our findings shed light on the complex nature of OOD detection under the MIL framework, emphasizing the importance of developing novel, robust, and reliable methods that can generalize effectively in a weakly supervised context. The code for the paper is available here: https://github.com/loic-lb/OOD_MIL.
Loïc Le Bescond, Maria Vakalopoulou, Stergios Christodoulidis, Fabrice André, Hugues Talbot
2023-09-11T15:12:05Z
http://arxiv.org/abs/2309.05528v2
# On the detection of Out-Of-Distribution samples in Multiple Instance Learning ###### Abstract The deployment of machine learning solutions in real-world scenarios often involves addressing the challenge of out-of-distribution (OOD) detection. While significant efforts have been devoted to OOD detection in classical supervised settings, the context of weakly supervised learning, particularly the Multiple Instance Learning (MIL) framework, remains under-explored. In this study, we tackle this challenge by adapting post-hoc OOD detection methods to the MIL setting while introducing a novel benchmark specifically designed to assess OOD detection performance in weakly supervised scenarios. Across extensive experiments based on diverse public datasets, KNN emerges as the best-performing method overall. However, it exhibits significant shortcomings on some datasets, emphasizing the complexity of this under-explored and challenging topic. Our findings shed light on the complex nature of OOD detection under the MIL framework, emphasizing the importance of developing novel, robust, and reliable methods that can generalize effectively in a weakly supervised context. The code for the paper is available here: [https://github.com/loic-lb/OOD_MIL](https://github.com/loic-lb/OOD_MIL). ## 1 Introduction The rapid development of effective machine learning algorithms has facilitated their widespread application across diverse domains, including critical applications such as medical diagnosis [19, 21]. Nevertheless, a crucial concern, emphasized by Hendrycks and Gimpel [7], pertains to the challenges faced by machine learning classifiers when deployed in real-world scenarios where the distribution of test and training data differs. Such discrepancies can result in dramatic situations where the model provides inaccurate outputs due to variations in the input arising from different sample collection or preparation protocols. These disparities stem from the assumption made by most machine learning models that all inputs will be drawn from the same distribution used during the training process, known as the in-distribution (ID). Consequently, the uncertainty estimation and detection of out-of-distribution (OOD) samples become imperative for the successful application of these algorithms. We distinguish two types of shifts between ID and OOD: the semantic shift, where there is no class overlap between the two distributions, and the covariate shift, where the classes can overlap, but the style of the input differs. In the context of OOD detection, emphasis is typically placed on the first type [28], while the second type is more closely associated with domain generalization [11]. The problem of OOD detection has been extensively addressed in various research works, demonstrating promising performance across different ID datasets and OOD conditions. As described in [29], these methods can be categorized into three main groups: post-hoc inference methods, which employ pretrained models without further training; methods that require retraining the model without the use of OOD examples; and methods that necessitate new training with additional OOD data. Among the post-hoc methods, some authors proposed to compute the confidence score directly by considering maximum softmax and logits values derived from the outputs of the penultimate layer of the network [6, 7], or employed energy measures based on the last layer logits [16].
Some methods built upon these ideas to improve the confidence score's selectivity [14, 23], while others focused on intermediate features constructing distance-based measures [13, 24]. Methods that require retraining often involve modifications to the core architecture, as illustrated by G-ODIN [9], or the introduction of an additional loss function using OOD data during training [8]. Figure 1: Example of bag sample created from CIFAR10 [12]. The positive target class is dogs, and the negative classes are planes and cars. The bag is labeled as positive as it contains two instances of dogs. However, such approaches may hinder the performance of the base classifier and be too specific to the OOD set considered. Therefore, we have opted to focus on post-hoc methods for OOD detection. These methods are particularly appealing with their ease of use, compatibility with various machine learning frameworks, and competitive performance compared to other approaches on multiple benchmarks. In this study, we propose to adapt post-hoc OOD methods to the Multiple Instance Learning (MIL) framework. Multiple Instance Learning (MIL) has emerged as a dominant approach for weakly supervised image classification. Initially introduced by Dietterich et al. [5] while exploring drug activity prediction, the MIL framework tackles the challenge of classifying a set of input images where individual labels are unknown, with only access to a shared single label for the set. This set of images is commonly referred to as a "bag," and the individual images within it are referred to as "instances." An example of an input bag is illustrated in Fig. 1. Since its introduction, MIL has found successful applications in diverse medical domains, in particular digital pathology [17, 25]. Uncertainty estimation has already been explored to improve Multiple Instance Learning (MIL) approaches. For instance, in [4], uncertainty estimation is leveraged to identify local artifacts in the input image and enhance MIL generalization. Similarly, [22] employs Gaussian processes to model the uncertainty of parameters in AttentionMIL [10] to boost the model's performance. Although these approaches have the potential to provide uncertainty estimates for predictions, they do not directly address the problem of OOD detection. To the best of our knowledge, this is the first attempt to establish a benchmark for OOD detection in the context of MIL. We assert that the assumptions and methodologies underlying traditional post-hoc OOD detection approaches may not be directly applicable to MIL models. The weak supervision context in MIL, where instances are grouped into bags and labeled at the bag level, introduces perturbations into the feature representations and outputs. Consequently, the quality of the embeddings and predictions in the MIL setting may not be as reliable as in the traditional supervised setting. In particular, the main contributions of this work can be summarised as: * We present the first study that evaluates different post-hoc OOD methods in the MIL setting, discussing and comparing their performances. * We use common datasets and organize them in a MIL setting, focusing mainly on semantic shift. All our code and datasets will be made publicly available to help other teams to focus on this challenging and under-explored topic. Our extensive benchmarks highlight the need for more specialized methods for OOD in the MIL settings.
## 2 Methods ### Multiple Instance Learning Consider a classical binary supervised classification problem where the objective is to predict a binary label, denoted as \(Y\in\{0,1\}\), based on an input \(X\). Within the framework of Multiple Instance Learning (MIL), the variable \(X\) represents a collection of instances denoted as \(X=\{x_{1},...,x_{n}\}\), where the individual instance labels \(\{y_{1},...,y_{n}\}\) are unknown. Following the initial formulation by Dietterich et al. [5], we assume that \(Y\) is positive (\(Y=1\)) if at least one of the instances \(x_{i}\) has a positive label (\(y_{i}=1\)); otherwise \(Y\) is considered negative (\(Y=0\)). MIL models commonly consist of three main components: an instance embedder \(f\), mapping each input \(x_{i}\in\mathbb{R}^{D}\) to a lower-dimensional vector representation \(h_{i}\in\mathbb{R}^{M}\); a permutation-invariant pooling operator \(\theta\), combining all the instance representations extracted from \(X\) into a single representation \(h\in\mathbb{R}^{M}\); and a classifier \(g\) that generates the final classification score based on the pooled representation. For the image classification task, the instance embedder \(f\) typically consists of a CNN architecture, while the classifier \(g\) is a simple linear layer. As for the pooling operator \(\theta\), we adopt the gated attention mechanism proposed by Ilse et al. [10]. Let \(H=\{h_{1},...,h_{n}\}\) denote the collection of the representations extracted from \(X\) using the embedder \(f\). The gated attention pooling mechanism is defined as follows: \[h=\sum_{i=1}^{n}a_{i}h_{i} \tag{1}\] where \[a_{i}=\frac{\exp\{\mathbf{w}^{T}(\tanh(\mathbf{V}h_{i}^{T})\odot\operatorname{sigm}(\mathbf{U}h_{i}^{T}))\}}{\sum\limits_{j=1}^{n}\exp\{\mathbf{w}^{T}(\tanh(\mathbf{V}h_{j}^{T})\odot\operatorname{sigm}(\mathbf{U}h_{j}^{T}))\}} \tag{2}\] with \(\mathbf{w}\in\mathbb{R}^{L\times 1}\), \(\mathbf{U}\in\mathbb{R}^{L\times M}\), \(\mathbf{V}\in\mathbb{R}^{L\times M}\) trainable parameters, sigm the non-linear sigmoid activation function and \(\odot\) the element-wise (Hadamard) product.
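For concreteness, a minimal PyTorch sketch of the pooling operator of Eqs. (1)-(2) is given below; the sizes \(M\) and \(L\) are placeholders to be set per experiment.

```python
import torch
import torch.nn as nn

class GatedAttentionPooling(nn.Module):
    """Gated attention pooling of Eqs. (1)-(2) (Ilse et al.)."""
    def __init__(self, M=500, L=128):
        super().__init__()
        self.V = nn.Linear(M, L, bias=False)   # tanh branch, V in R^{L x M}
        self.U = nn.Linear(M, L, bias=False)   # sigmoid gate, U in R^{L x M}
        self.w = nn.Linear(L, 1, bias=False)   # w in R^{L x 1}

    def forward(self, H):
        # H: (n, M) instance embeddings h_1, ..., h_n of one bag.
        scores = self.w(torch.tanh(self.V(H)) * torch.sigmoid(self.U(H)))
        a = torch.softmax(scores, dim=0)       # attention weights a_i, (n, 1)
        return (a * H).sum(dim=0)              # pooled bag representation h
```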
In our approach, we propose to associate the penultimate layer \(f_{c}\) to the classifier \(g\) of the MIL framework to compute the confidence score directly at the bag level, in contrast to previous works that pool the uncertainty measured over the instances [15]. We also consider the pooled representation \(h\) as the feature vector of interest for the KNN method, computing the distance as follows: \[C(\bar{h}^{*},k)=||\bar{h}^{*}-\bar{h}_{(k)}||_{2} \tag{4}\] where \(\bar{h}^{*}\) represents the normalized pooled representation of the test sample \(X^{*}\), and \(\bar{h}_{(k)}\) is the normalized pooled representation of the \(k\)-th nearest neighbor.

### Generation of OOD dataset for MIL

No standardized benchmark is currently available for addressing the problem of OOD detection under the weakly supervised setting. Taking inspiration from the experiment proposed by Ilse et al. [10], we designed our own binary weakly supervised task using different common and public databases. In this setting, an input \(X\), referred to as a bag, consists of a variable number of instances randomly sampled from the original database \(\mathcal{D}\). Firstly, we define a positive target class based on the original database. If a bag contains at least one instance belonging to the positive target class, the bag is labeled as positive; otherwise, it is labeled as negative. For the negative instances, we established a set of negative classes, encompassing all classes except the positive target class. This allows us to control the difficulty of the classification problem by adjusting the number of negative classes (a minimal sketch of this procedure is given at the end of the experimental setup below).

## 3 Experiments

### Datasets

We conducted evaluations on several datasets to assess the performance of our proposed approach. Specifically, we employed MNIST as the in-distribution (ID) dataset, with Fashion-MNIST [27] and KMNIST [3] as out-of-distribution datasets. Additionally, we explored both CIFAR10 [12] and PCAM [1, 26] as alternative in-distribution datasets. PCAM is a representative dataset for a real-world application scenario, which contains patches of lymph node tissues with both healthy and metastatic tissue. For these two datasets, SVHN [18], Textures [2] and places365 [30] served as out-of-distribution (OOD) datasets. To generate the training and validation datasets, we created a balanced set of 20,000 bags for training and 4,000 bags for validation for each ID dataset. The bags were of variable length, following a normal distribution \(N\sim\mathcal{N}(10,2)\), and were composed of images uniformly sampled from the corresponding ID dataset's training set. In our experiments, we focused on the digit "5" for the MNIST dataset and dogs for CIFAR10 as the positive target classes. The negative instances included all other digits for the MNIST dataset and images of planes and cars for CIFAR10. For PCAM, we selected the patches containing metastatic tissue as the positive target class and the other patches as negative instances. The number of positive instances in positive bags ranged from 1% to 40% of the bag length, sampled uniformly. For the test and OOD datasets, we generated a balanced set of 400 bags under the same conditions. OOD bags contain samples extracted only from the corresponding OOD database.

### Experimental Setup

All our models were implemented using PyTorch 2.0 [20]. The training was conducted with a learning rate of \(5\times 10^{-5}\) and a weight decay of \(10^{-5}\) with a batch size of \(1\).
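For reference, the bag-generation procedure of Sections 2.3 and 3.1 can be sketched as follows. This is a hedged, minimal sketch: the helper name and the representation of the source dataset as `(image, label)` pairs are assumptions of ours, not the exact implementation used in our experiments.

```python
import random

def make_bag(dataset, positive_class, negative_classes, mean_len=10, std_len=2):
    """Sample one bag: positive iff it contains at least one target-class instance."""
    pos_pool = [x for x, y in dataset if y == positive_class]
    neg_pool = [x for x, y in dataset if y in negative_classes]
    n = max(1, round(random.gauss(mean_len, std_len)))     # bag length ~ N(10, 2)
    is_positive = random.random() < 0.5                    # keep the bag set balanced
    if is_positive:
        k = max(1, round(n * random.uniform(0.01, 0.40)))  # 1%-40% positive instances
        instances = random.sample(pos_pool, k) + random.choices(neg_pool, k=n - k)
    else:
        instances = random.choices(neg_pool, k=n)
    random.shuffle(instances)
    return instances, int(is_positive)
```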
For the MNIST-based MIL dataset, we employed the tile embedder proposed by Ilse et al. [10], which consists of 2 convolutional blocks with maxpooling followed by a linear layer. Regarding the CIFAR10-based MIL and PCAM-based MIL datasets, we adopted a similar approach to many previous state-of-the-art MIL methods [17, 25], and replaced the first convolution blocks with a ResNet50 model pre-trained on ImageNet, which remained frozen during the training process. A linear layer was used as the classifier in all experiments, and the gated attention mechanism was identical as well. To enhance the model's generalization capabilities, random augmentations such as rotations, horizontal flips, and vertical flips were applied to the instances composing the training bags. During inference on the OOD datasets, the instances were resized to the same dimension as the samples in the training datasets. For DICE and ODIN, we considered the embeddings of non-augmented instances, as we found superior performance compared to utilizing the exact augmented instances employed during training. We set the hyperparameters for the OOD methods to the same values as reported in the experiments of the corresponding papers. Performance evaluation was conducted using the common OOD detection metrics, including AUCROC (Area Under the Receiver Operating Characteristic curve) and FPR@95% TPR (False Positive Rate at 95% True Positive Rate). These metrics were used to assess the model's ability to detect instances from OOD datasets effectively.

### Results

Table 1 presents the OOD detection performances of the different methods for a model trained in the context of Multiple Instance Learning. In the MNIST ID experiments, DICE and KNN display the best performance, closely followed by ODIN in the former case and MSP in the latter, for the Fashion-MNIST and KMNIST OOD datasets, respectively. However, the performance of DICE and KNN diminishes considerably when evaluated on the other OOD dataset. In contrast, the other methods demonstrate more consistent results. This would indicate that methods relying on the training data may necessitate specific tuning and exhibit limitations in terms of generalization within the context of Multiple Instance Learning. Regarding the FPR@95 results, they are prohibitively high in most cases, with the exception of DICE for Fashion-MNIST OOD. In the case of the CIFAR10 ID and PCAM ID datasets, KNN consistently demonstrates superiority over the other methods. KNN outperforms the benchmark methods across all OOD datasets, except for places365 OOD in the case of CIFAR10 ID, where EBO performs the best. As the second-best performing method, DICE performs well with PCAM ID, but its performance lags behind on CIFAR10 ID, where EBO and MSP show better results. In all experiments, except for places365 OOD with CIFAR10 ID, methods relying on classifier outputs and their enhancements consistently exhibit lower performance. FPR@95 remains high in all experiments except for KNN when evaluated on PCAM ID. These results suggest that KNN appears to be the most reliable method, demonstrating strong performance across most experiments. However, its performance on the places365 OOD dataset with CIFAR10 ID and on Fashion-MNIST OOD is notably low, and the improvement is small for KMNIST OOD compared to the other methods, making it challenging to confirm this advantage. FPR@95 also remains prohibitively high in the case of MNIST and CIFAR10 ID.
As a general trend, it is worth noting that methods relying on the pooled representation rather than classifier activations appear to perform better when methods based on the final activation outputs perform poorly, and vice versa. Additionally, in the CIFAR10/PCAM experiments, the instance embedder \(f\) was fixed, whereas, in the case of MNIST, it was trained alongside the rest of the network. This observation suggests that the approach used to create embeddings for each instance, and consequently the pooled representation, should guide the decision on whether to rely more on intermediate features or classifier outputs for OOD detection.

\begin{table} \begin{tabular}{|c|c|c|c c c c c c|} \hline _ID dataset_ & OOD dataset & Metric & \multicolumn{6}{c|}{Method} \\ \cline{3-9} & & & MSP [7] & MLS [6] & EBO [16] & ODIN [14] & DICE [23] & KNN [24] \\ \hline \multirow{4}{*}{_MNIST_} & Fashion-MNIST & AUC\(\uparrow\) & 92.33 & 91.83 & 91.77 & 94.13 & **99.05** & 62.33 \\ & & FPR@95\(\downarrow\) & 65.75 & 67.75 & 69.50 & 46.50 & **02.25** & 90.00 \\ \cline{2-9} & KMNIST & AUC\(\uparrow\) & 84.74 & 84.54 & 84.49 & 83.38 & 64.42 & **86.54** \\ & & FPR@95\(\downarrow\) & 64.25 & 64.75 & 65.00 & 69.25 & 90.50 & **50.00** \\ \hline \multirow{6}{*}{_CIFAR10_} & places365 & AUC\(\uparrow\) & 70.13 & 70.98 & **71.01** & 54.75 & 48.37 & 64.67 \\ & & FPR@95\(\downarrow\) & 78.25 & 78.25 & **75.50** & 94.50 & 99.50 & 95.00 \\ \cline{2-9} & SVHN & AUC\(\uparrow\) & 45.65 & 48.46 & 48.52 & 40.37 & 41.81 & **94.63** \\ & & FPR@95\(\downarrow\) & 97.50 & 96.75 & 96.00 & 99.50 & 99.75 & **37.25** \\ \cline{2-9} & Textures & AUC\(\uparrow\) & 49.51 & 51.51 & 51.59 & 41.33 & 36.65 & **91.51** \\ & & FPR@95\(\downarrow\) & 91.00 & 89.75 & 87.50 & 97.00 & 99.25 & **45.50** \\ \hline \multirow{6}{*}{_PCAM_} & places365 & AUC\(\uparrow\) & 30.27 & 34.37 & 35.38 & 51.08 & 78.23 & **92.62** \\ & & FPR@95\(\downarrow\) & 100.00 & 100.00 & 100.00 & 100.00 & 71.25 & **37.00** \\ \cline{2-9} & SVHN & AUC\(\uparrow\) & 16.48 & 18.51 & 18.86 & 50.55 & 68.54 & **99.04** \\ & & FPR@95\(\downarrow\) & 100.00 & 100.00 & 100.00 & 100.00 & 89.50 & **01.75** \\ \cline{2-9} & Textures & AUC\(\uparrow\) & 37.09 & 42.43 & 44.43 & 57.21 & 56.02 & **98.23** \\ & & FPR@95\(\downarrow\) & 97.50 & 95.75 & 94.50 & 98.00 & 93.00 & **06.25** \\ \hline \end{tabular} \end{table} Table 1: Results of the different OOD detection methods for a MIL model trained on a bag version of the MNIST, CIFAR10 [12] and PCAM [1, 26] datasets. The accuracy of the model for the in-distribution (ID) task is 94.75, 90.50 and 78.25 for MNIST, CIFAR10 and PCAM, respectively. **Bold** denotes best performance, and underline denotes the second best for each metric.

## 4 Conclusion

In this study, we present the first benchmark for out-of-distribution (OOD) detection in the context of Multiple Instance Learning (MIL). Through extensive experimentation on various datasets, we have found that methods based on intermediate features, such as KNN, demonstrate strong performance in the context of Multiple Instance Learning. However, the performance is not consistent in every scenario, depending on the specific dataset characteristics and the configurations of MIL models. This lack of robustness of current OOD detection methods points out the need for innovative techniques that can take into consideration characteristics of MIL models, such as the pooling operator, which represents an interesting avenue for future research.
## Acknowledgments This work was partially supported by the ANR project Hagnodice ANR-21-CE45-0007 and the PRISM project funded by France 2030 under grant number ANR-18-IBHU-0002.
2307.01209
Multi-Dialectal Representation Learning of Sinitic Phonology
Machine learning techniques have shown their competence for representing and reasoning in symbolic systems such as language and phonology. In Sinitic Historical Phonology, notable tasks that could benefit from machine learning include the comparison of dialects and reconstruction of proto-languages systems. Motivated by this, this paper provides an approach for obtaining multi-dialectal representations of Sinitic syllables, by constructing a knowledge graph from structured phonological data, then applying the BoxE technique from knowledge base learning. We applied unsupervised clustering techniques to the obtained representations to observe that the representations capture phonemic contrast from the input dialects. Furthermore, we trained classifiers to perform inference of unobserved Middle Chinese labels, showing the representations' potential for indicating archaic, proto-language features. The representations can be used for performing completion of fragmented Sinitic phonological knowledge bases, estimating divergences between different characters, or aiding the exploration and reconstruction of archaic features.
Zhibai Jia
2023-06-30T02:37:25Z
http://arxiv.org/abs/2307.01209v1
# Multi-Dialectal Representation Learning of Sinitic Phonology ###### Abstract Machine learning techniques have shown their competence for representing and reasoning in symbolic systems such as language and phonology. In Sinitic Historical Phonology, notable tasks that could benefit from machine learning include the comparison of dialects and reconstruction of proto-language systems. Motivated by this, this paper provides an approach for obtaining multi-dialectal representations of Sinitic syllables, by constructing a knowledge graph from structured phonological data, then applying the BoxE technique from knowledge base learning. We applied unsupervised clustering techniques to the obtained representations to observe that the representations capture phonemic contrast from the input dialects. Furthermore, we trained classifiers to perform inference of unobserved Middle Chinese labels, showing the representations' potential for indicating archaic, proto-language features. The representations can be used for performing completion of fragmented Sinitic phonological knowledge bases, estimating divergences between different characters, or aiding the exploration and reconstruction of archaic features. ## 1 Introduction The evolution of languages in the Sinitic family created intricate correspondences and divergences in its dense dialect clusters. Investigating the dynamics of this evolution, through comparison and proto-language reconstruction, is an essential task in Sinitic historical phonology. However, it may be costly for researchers to manually probe through the groups in search of phonological hints. Hence, it is desirable to accelerate the process with modern algorithms, specifically, representation learning. Graph-based machine learning Makarov et al. (2021) has gained increasing attention in recent years, due to its versatility with data with flexible structures. Especially, missing link prediction algorithms for knowledge graphs Wang et al. (2021); Zhu et al. (2022) can uncover a latent structure in noisy and incomplete knowledge. In the case of learning phonological representations, using graph-based learning can allow for more comprehensive integration of multi-dialectal evidence. Thus, we propose applying graph-based techniques for multi-dialectal representation learning. We construct a knowledge graph from the multi-dialectal phonological data, by abstracting unique phonetic components and individual characters into two kinds of nodes. Then, we connect them with edges specific to the dialect type wherein the character is associated with the given component. On the constructed knowledge graph, we train the BoxE algorithm Abboud et al. (2020), a Box Embedding Model for knowledge base completion. Finally, we evaluate the obtained representations with unsupervised and supervised clustering, as well as MLP probes based on Middle-Chinese-derived labels, to show this tool's value for Sinitic phonological investigation. ## 2 Background on Sinitic Languages The analysis of Sinitic languages faces a few specific challenges due to unique phonological characteristics. These characteristics define crucial details of our design. In Sinitic languages, morphemes are primarily monosyllabic. Hence, Chinese writing binds one syllable to each of its glyphs, known as characters. A syllable in Sinitic can be decomposed into an initial, a final and a tone (Shen, 2020). Initials refer to the consonant-like sounds at the beginning of a syllable, which include both stops (e.g.
/s-/, /j-/). These initials could be combined with various finals to form syllables. Finals refer to the vowel-like sounds at the end of a syllable, which included both simple vowels (e.g. /-a/, /-i/, /-u/), complex vowels (e.g. /-ai/, /-ao/, /-ei/), and vowels combined with consonant codas (e.g. /-m/, /-n/, /-ŋ/, /-p/, /-t/, /-k/). Tones refer to the pitch patterns associated with syllables in Chinese. Tones could distinguish between words that were otherwise homophonous, and they were an important part of the Chinese phonological system. Due to the early conception of the Chinese writing system, syllables from different Sinitic languages can usually be aligned to each other through a written form. As this alignment is typically implemented in databases of raw Sinitic data, the difficulty of cognate identification is drastically reduced, facilitating analysis. However, the simple syllable structure introduces large amounts of homophones, words sharing the same pronunciation, into Sinitic languages. This hinders the use of the comparative method in reconstructing a Sinitic proto-language. The existence of a suprasegmental tone feature also complicates a historical analysis of Sinitic languages. Two factors that motivate the use of a graph-based method include the uniform structure of Sinitic syllables and their intimate relationship with characters. The intuitive syllable decomposition and the glyph-based alignment inspire viewing the components contextualized in various dialects as different "observations" of a single character. Theoretically, these observations are derivable from the reading of the character in the proto-language. ## 3 Related Work The practice of computationally-aided proto-language reconstruction, often associated with cognate identification, has been extensively considered in the past two decades (Nerbonne et al., 2007). Examples include (Steiner et al., 2011), which draws insights from bio-informatics and the classical comparative workflow, and (List et al., 2017), which compared many methods for cognate identification. A relevant insight from the latter paper is that language-specific methods often outperform language-general ones, especially for languages like Sinitic. An epitome of neural methods for proto-language reconstruction would be (Meloni et al., 2021), in which Latin is reconstructed from Romance descendant languages with an encoder-decoder structure. Though, our approach differs from their study in many crucial aspects. In Meloni et al. 2021, the reconstruction is supervised, with the proto-language Latin provided at training time. But our method targets not only documented proto-languages like Middle Chinese, but also unknown, intermediate varieties in the development from ancient Sinitic to modern dialects, which requires an unsupervised approach. Additionally, in terms of techniques, their use of GRU and attention-based transducers contrasts with our emphasis on a graph-based method. Considering the representation learning of Sinitic, we found abundant literature on the topics of speech recognition (Ma et al., 2022), segmentation and synthesis, which often yield representations of certain phonological relevance as a by-product. Though, these studies focus heavily on a few major languages, specifically Mandarin or Cantonese, and, since they rarely claim motivation from historical phonology, seldom take a multi-lingual or multi-dialectal approach.
While speech representation learning often serves the aforementioned purposes, the proposals of using neural networks to model phonetics and phonology from either symbolic abstractions or acoustic data, in order to examine theories in these fields, are relevant to this study. Unsupervised binary stochastic autoencoders were explored in (Shan and Elsner, 2019). GANs (Generative Adversarial Networks) were used in (Begus, 2020). These proposals modeled perception and categorization, in relation to language acquisition. Most interestingly, representation learning has been applied for discovering phonemic tone contours in tonal languages (Li et al., 2020), of which a great portion are Sinitic languages. However, these proposals again rarely address issues from historical phonology. Finally, it should be noted that the concept of transforming porous data in a regular, matrix-like form to a loose, graph-like form for flexibility in processing, while essential to the designs of this paper, is not novel in the literature. Rather, it originates with the GRAPE framework in (You et al., 2020). Notably, when the data in question concerns Chinese historical phonology, it coincides with Johann-Mattis List's proposals for introducing network methods into computational linguistics and Chinese historical phonology. Generally, this line of work should be considered most relevant to our study (List, 2018; List et al., 2014; List, 2015). List (2018) approaches issues spanning character formation, Middle Chinese annotation, as well as Old Chinese reconstruction with network methods. List et al. (2014); List (2015) examine dialect evolution with display graphs, with a focus on the complex word-borrowing dynamics between the dialect families. He calls for colleagues to lend more attention to data-driven, quantitative methods. Our proposal answers List's call by bringing together knowledge graphs with Chinese historical phonology. Furthermore, the utilization of SOTA representation learning extends beyond the scope of the aforementioned work.

Figure 1: Highlighting key characteristics of Sinitic relevant to our approach. Characters are the central identity in the multi-dialectal representations. The orthographic alignment of sub-syllable components forms the structure of data used in this study.

## 4 Method

The graph-based method for representing dialect data has the benefit of making the model more flexible, robust, and efficient at using porous, incomplete data. This is particularly important since investigations into dialects are often uncoordinated, resulting in a large amount of partial character entries, where only some columns have pronunciations while others are missing. It could be argued that we can use missing data imputation to alleviate the issue, and continue processing the dialect data in a matrix form, perhaps with feed-forward neural networks or denoising autoencoders (Vincent et al., 2008). However, traditional missing-data imputation techniques may create fictitious syllables that violate the phonotactics of that dialect when imputing initials or finals according to the mode of a type. Conditioning the initials or finals on each other will cause higher-order dependencies that are hard to solve. Therefore, by keeping the spaces untouched and using paired comparisons, the graph formalism circumvents the problem. This formulation may also allow for auxiliary input features, such as basic phonological knowledge about the nature of phonemic contrast, to be injected into the model.
On this graph, we learn the embeddings with the BoxE algorithm, to be discussed below.

### Construction of a Multi-Dialectal Knowledge Graph

We expressed the data with a knowledge graph and trained the representations through an auxiliary task of completing the multi-dialectal knowledge graph. With a graph-based technique, the representations can be more robust to noisy and porous data. Additionally, the method will be more flexible, allowing for auxiliary input features to be injected. We construct a graph by leveraging the characters, as well as individual initials, finals and tones from various dialects, as nodes (see Figure 2). For instance, the fact of character C having an initial I in dialect D is modeled with an edge from C to I. The edge has a type specific to the dialect D and the category of the component, which is an initial. This edge type can be denoted as "D-initial". As demonstrated in Fig. 2, C could be character No. 1, when I is /t/ and the edge is "Changsha_initial". After constructing the graph, character-level and component-level representations are trained simultaneously. The knowledge graph algorithm attempts to model the nodes' features as well as a prediction function so that, when given a character node and a type of link, the corresponding pronunciation node can be predicted with maximum likelihood. In this process, the model implicitly generates hypotheses about character pronunciations missing or unseen in training, as well as historical relationships between the syllables.

Figure 2: Partial Illustration of the Phonology Knowledge Graph. The numerals represent the indices representing the Chinese characters and the glyphs for what they represent. /33/ is a tone in Chao's notation. The other nodes are segments represented in the International Phonetic Alphabet. The text labels for the edges demonstrate how edges are categorized according to both dialect and phone type. Note that it is bi-partite by nature, as edges can only occur between "phonemic" nodes and "character" nodes, colored blue and black in the figure (this is not provided explicitly to the model).

If there are \(M\) characters with readings from \(N\) dialects involved in an experiment, the upper bound for the number of edge types will be \(3N\). Assuming that \(F_{1}+F_{2}+F_{3}\) unique initials, finals and tones could be found within the aggregated phonological systems of the \(N\) dialects, the upper bound for the number of nodes is \(M+F_{1}+F_{2}+F_{3}\). The graph size scales sub-linearly with the number of dialects, since as more dialects are considered, their phonemic inventories will start to overlap and saturate. Following convention in knowledge base research, the graph is presented in triples of head-relation-tail format.

### The Box Embedding Model

In pilot tests, we considered various algorithms from the field of graph representation learning and knowledge base completion for application. In the process, it was revealed that few algorithms are inherently suitable, as there are many subtle requirements in this context: 1. Models designed for knowledge graphs are more suited to this application than general graph learning algorithms, since the graph to be processed is heterogeneous, besides carrying edge type as information. 2. The model must have strong capacity for modeling multiple unique relations between the same two nodes. It is very common for one character to have the same initial across different dialects.
This rules out many translation-based models that, when given different relations, always predict different tail nodes. Prominent examples of such models include TransE Bordes et al. (2013) and RotatE Sun et al. (2019). 3. If the model uses inverse triples as an augmentation technique, then the model should also be expressive in many-to-one and one-to-many relations, because one initial or final will be mapped to numerous characters. 4. Of the applicable algorithms, interpretability should be prioritized, since we hope to extract interpretable phonological knowledge from the obtained representations. This casts doubt on another large family of knowledge graph models, namely the bi-linear models, epitomized by RESCAL (Nickel et al.) and DistMult (Yang et al., 2015). After consideration, we chose BoxE for its expressiveness and tolerance of many-to-one relationships, due to its box embedding design. Empirically, we also demonstrate that BoxE is relatively optimal for the phonological task through comparison with RotatE Sun et al. (2019) and ComplEx Trouillon et al. (2016) in Table 4. Here is a brief description of the BoxE algorithm. It is a translational model that embeds each node with two vectors: \(e_{i}\), which represents the position vector, and \(b_{i}\in\mathbf{R}^{d}\), which represents the translational bump. These vectors are obtained after incorporating triples into the model. Additionally, each edge type is defined with two hyper-rectangles \(r^{(1)}\) and \(r^{(2)}\in\mathbf{R}^{d}\). To satisfy the relation \(R\) between entities \(E_{1}\) and \(E_{2}\), there must be \(e_{1}+b_{2}\in r^{(1)}\) and \(e_{2}+b_{1}\in r^{(2)}\). Intuitively, this means that \(E_{1}\) and \(E_{2}\) "bump" each other in the hyperspace \(\mathbf{R}^{d}\) by some distance. If the new vectors fall within the bounds of the associated boxes, then the proposition is considered probable. To facilitate gradient descent, the boxes have relaxed borders. It is worth noting that BoxE is also capable of hyper-graph learning, as it accepts higher-arity relations as input, though we did not exploit this feature for this study. Our training objective was to maximize the score or probability of given relations. To elaborate, this means maximizing the chance of predicting masked initials/finals/tones of some character in some dialect with the unmasked components associated with that character, from both within and without the dialect. This is analogous to the comparative method in historical phonology, as the model implicitly reconstructs a latent "proto-language", from which the descendant languages can be deduced (or, "decoded") with maximum likelihood. ## 5 Data and Experimental Setup We use pronunciation data from four varieties of Xiang Chinese, Changsha Chen et al. (2015), Shuangfeng Sun et al. (2019), Guanyang Wenshi Chen et al. (2019), and Quanzhou Xiancheng Sun et al. (2019), spoken primarily in Hunan Province, provided by CCR (Huang et al., 2011) and retrieved with the Comparative analysis toolset for Chinese dialects (Huang, 2021). We also obtain labels of Middle Chinese readings from the same source. In this work, Middle Chinese refers to the phonological system recorded in the dictionary Qieyun, from the year 601 AD. It was supplemented in the Song Dynasty into the dictionary Guangyun, from which this study draws data. Middle Chinese is literary and may not reflect the colloquial speech of China in any time or place.
However, most phonological systems of modern Sinitic languages (with the notable exception of the Min languages) can be derived from the Qieyun system. Thus we treat it as a useful proto-language model for most Sinitic languages. We operate on symbolic abstractions instead of raw acoustic data, as all the data have been transcribed into IPA in the database. One row of data corresponds to readings of one Chinese character. Internally, each character is mapped to a unique identifier, which is the character's serial number in Guangyun. For every variety of Chinese, there are four columns, corresponding to the initial value, final value, tonal value and tonal type of a given character's pronunciation. The tone type argument is actually redundant, and it is assigned manually by investigators. In each dialect, there is a one-to-one correspondence between one tone value and one tone type. Between two dialects, tones arising from the same Middle Chinese tone are given the same names. Hence, the tone type feature introduces prior expert knowledge about the historical origin of tones. However, we expect the model to derive the historical tones without any diachronic expert knowledge. Hence, we discard the tone type feature, and use only the three values for this study.

### Processing of Duplicate Data

Characters in Sinitic can be polyphonic, that is, sometimes a character will be mapped to multiple readings in one dialect. This results in duplicate data in the dataset. For convenience, we drop the extra pronunciations and keep only the first line for every entry. Though, there can be ambiguity surrounding the correspondence of readings for polyphonic characters. For instance, the first reading entry for a polyphonic character in dialect A might be cognate with the second reading entry for the character in dialect B. However, our naive approach will match all the first entries to each other. Additionally, two dialects may inherit only partial readings of a polyphonic character in the proto-language. Hence, this procedure potentially introduces erroneous alignment into the model.

### Split of Training, Testing and Validating Datasets

The model was not trained with all the data, so as to examine the robustness of the model. Instead, some triples are diverted to form testing and validating datasets. Unfortunately, assignment in this context is slightly more complicated than simple stochastic choice. There is the scenario where all initial (final/tonal) information about one character is diverted from training. In this case, the model will not be able to correctly embed this character. To circumvent this issue, we mandate that at least one feature from any of the three compositional types is retained in the training set for any character. For the four Xiangyu in this case, the result is empirically a split of 80.50%:12.52%:6.98%.

### Data Statistics

The initial, final and tone counts for the four dialects are listed in Table 1. A total of 2805 characters is included, but not every character has the corresponding phonological data documented in every dialect. In the training set, there are 22300 entries.

### Model Setup

For the parametric size of the model, see Table 2. We employ the BoxE algorithm implemented in the Python library PyKeen Ali et al. (2021). We did not fine-tune the model or any model parameters, so as to demonstrate the capability of the model even in a highly suboptimal setting.
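For concreteness, the following is a minimal sketch of this setup using PyKeen's BoxE implementation. The toy triples, the split ratios, and the keyword arguments are illustrative assumptions of ours; in particular, the constrained split described above (retaining at least one component per character in training) is not reproduced here.

```python
import numpy as np
from pykeen.triples import TriplesFactory
from pykeen.pipeline import pipeline

# Head-relation-tail triples as in Sec. 4.1 (toy examples, not real survey data;
# with real data the factory would hold all 22300 entries).
triples = np.array([
    ("char_0001", "Changsha_initial", "t"),
    ("char_0001", "Changsha_tone", "33"),
    ("char_0001", "Shuangfeng_initial", "d"),
], dtype=str)

tf = TriplesFactory.from_labeled_triples(triples)
training, testing, validation = tf.split([0.8, 0.13, 0.07])  # approximates our split

result = pipeline(
    training=training, testing=testing, validation=validation,
    model="BoxE",
    model_kwargs=dict(embedding_dim=64),   # vector and hyperbox dimension (Table 2)
    optimizer="Adam",
    training_kwargs=dict(num_epochs=2000),
)
```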
\begin{table} \begin{tabular}{l c c c} & **Initials** & **Finals** & **Tones** \\ \hline Changsha & 21 & 38 & 11 \\ Shuangfeng & 28 & 35 & 11 \\ Guanyang & 28 & 42 & 5 \\ Quanzhou & 26 & 43 & 4 \\ \end{tabular} \end{table} Table 1: Data Statistics

\begin{table} \begin{tabular}{l c} \hline \hline **Parameter** & **Value** \\ \hline Vector and hyperbox dimension & 64 \\ Number of nodes & 2946 \\ Number of edge types & 12 \\ Cumulative parameter size & 378624 \\ Optimization algorithm & Adam \\ Number of epochs & 2000 \\ \hline \hline \end{tabular} \end{table} Table 2: Model Parameters

## 6 Experimental Evaluation

### Canonical Evaluation of Model

The convergence of the model, and a preview of the spatial distribution of embeddings, can be seen in Figure 3. The model quickly converges. The entity plot decomposed with PCA reveals a mass of character readings "ejecting" two groups of entities, respectively the combination of all initials and tones, and all finals, which is in accordance with the bi-partite and heterogeneous nature of this graph. Canonically, BoxE is evaluated with the hit@n metric and MRR (mean reciprocal rank) for link prediction. On the validation set, our model achieved hit@1: 51.25%, hit@5: 87.19%, hit@10: 93.76% on the "tail" batches. The head batches are not relevant because they involve "predicting characters from initials/finals", a one-to-many prediction. In Table 4, we demonstrate empirically the superiority of the BoxE algorithm over other common knowledge graph algorithms on this phonological task. A clearer visualization of the embedded points can be seen in Figure 4. Since the Guangyun is sorted by rhyme, rhyming characters (having the same final) have similar coloring on the map. The coloring is only a reflection of the point's serial number in the dataset and does not have any quantitative interpretation. Presumably, the translational bumps for characters will contain more information relevant to historical phonology, as they designate which component types to "bump into the box." Unless otherwise mentioned, all experiments are carried out on the bump embeddings and not the positions. However, empirically we find that the two kinds of embeddings are interchangeable.

### Examining Contrastive Information

In this section, unsupervised clustering is used to evaluate contrastive information in the embeddings. Based on the hypothesis that the phonological structures of the dialects are co-embedded in the latent structure of embeddings, we determined whether the high-dimensional embeddings retain information associated with the theoretic categories of the input dialects, a task similar to Tilsen et al. 2021. After applying a clustering algorithm to the embedded characters, the information yield 1 of the found categories against input categories of initials, finals and tones is computed. A higher information yield indicates that the clusters found by unsupervised clustering were more interpretable with respect to the input phonemic categories. 23 Footnote 1: Entropy subtracted by conditional entropy, or an empirical estimate of mutual information. Footnote 2: HDBSCAN sometimes refuses to classify points it is not sure of. These points are combined into one category for the aforementioned purpose. Footnote 3: Before using HDBSCAN, UMAP was first used to reduce the 64 embedding dimensions to 8 dimensions, with the neighbour parameter set to 50. This is an advised practice from the HDBSCAN documentation.
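As one possible realization of this evaluation (a sketch assuming the umap-learn, hdbscan, and scikit-learn packages; the arrays below are placeholders for the trained bump embeddings and the per-dialect phonemic labels):

```python
import numpy as np
import umap
import hdbscan
from sklearn.metrics import mutual_info_score

bumps = np.random.rand(2805, 64)                 # placeholder for trained bump embeddings
initial_labels = np.random.randint(0, 21, 2805)  # placeholder for, e.g., Changsha initials

# UMAP to 8 dimensions with 50 neighbours before HDBSCAN (Footnote 3).
reduced = umap.UMAP(n_neighbors=50, n_components=8).fit_transform(bumps)
clusters = hdbscan.HDBSCAN(min_samples=5, min_cluster_size=75).fit_predict(reduced)
clusters[clusters == -1] = clusters.max() + 1    # unclassified points as one category (Footnote 2)

# Information yield: entropy minus conditional entropy, i.e. empirical mutual information.
print(mutual_info_score(initial_labels, clusters))
```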
The clustering algorithms used for dissecting the cloud of embedded characters include HDBSCAN (McInnes and Healy, 2017; a density-based method), Affinity Propagation, K-means and Agglomerative Clustering. 4 The results can be seen in Figure 5. Footnote 4: The numerous methods were tried sequentially, as we do not know which algorithm best recovers the latent structure of representations in accordance with theoretic categories. Affinity propagation and HDBSCAN achieved the best effects on finding interpretable clusters from the datasets. Though, we find that HDBSCAN is very sensitive to its two parameters: its effect degrades when we allow for smaller clusters but demand greater confidence in the classification. Notably, HDBSCAN achieved an effect similar to affinity propagation with just 29 clusters, while the latter used 130.

Figure 3: Preliminary Visualization of Training Dynamics and Trained Embeddings.

Figure 4: UMAP (Uniform Manifold Approximation and Projection; McInnes et al., 2018) decomposed visualizations of the translational bumps (a) and position embeddings (b). The coloring reflects a point's index in the Guangyun, which is sorted according to rhyme.

The large information yields reflect that the unsupervised algorithms do tend to dissect the character set along latent lines corresponding to phonological opposition in the input dialects, as shown in a partial observation in Table 3. It appears that the distribution of finals in dialects had more influence on the latent structure than initials or tones. Simply put, the characters within each unsupervised cluster are more likely to rhyme than alliterate, though both cases occur in observation of the HDBSCAN clusters. There are limitations to this experiment though, which will be discussed below.

### Inference of Proto-language Features

In this section, we investigate the quality of our embeddings with respect to proto-language reconstruction tasks, as an important potential application of this method lies with such work. Hence, we trained classifiers in an attempt to infer labels from Middle Chinese, which likely predates proto-Xiang and is therefore an accessible surrogate for that proto-language. The features to infer are Grades, Voice, Tones, She (a coarse division of finals), Initials, and Mu (a fine division of finals). Grades are believed to be associated with medials, a component at the front of the final (amalgamated with the final in Xiangyu data). Voice is a division based on properties of the initial, in which voiced consonants, voiceless unaspirated consonants, voiceless aspirated consonants and nasal consonants are distinguished. For tones, in Middle Chinese, there were four: level, rising, departing, and entering. Of these categorical labels, there are respectively 4, 4, 4, 16, 36 and 206 unique classes. 5 Footnote 5: Canonically so, but there are a few erroneous entries in the data we used, resulting in sometimes one or two extra categories containing a few characters. They were kept. For this experiment, a train-test split of 0.67-0.33 was used instead. Since phonological evolution is quite regular and systematic, we should expect decent results without a great proportion of data used for training. Accuracies below are for the test set.
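Concretely, a probe of this kind can be set up as in the following sketch, assuming scikit-learn; the placeholder arrays stand in for the trained bump embeddings and the Middle Chinese labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import RidgeClassifier

bumps = np.random.rand(2805, 64)          # placeholder bump embeddings
mc_tones = np.random.randint(0, 4, 2805)  # placeholder Middle Chinese tone labels

X_tr, X_te, y_tr, y_te = train_test_split(bumps, mc_tones, test_size=0.33)
for clf in (MLPClassifier(max_iter=1000), RidgeClassifier()):
    print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))
```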
These values are consistently higher than a naive baseline of guessing the mode of each distribution, proving that proto-language related features were preserved in the retrieved embeddings. (See Table 5.) The MLP generally outperforms Ridge Classification on inference for these characters, with the sole exception of tones, where RC outperforms MLP by 1.1%. The best results are attained for tones and voice, showing these features to be phonologically well preserved from Middle Chinese to the Xiang languages. Interesting observations can be drawn from the confusion matrices generated with such classification. Presumably, these matrices can offer insight into what categories were blended, which oppositions were lost during the development of some language family. One such example is demonstrated in Figure 6. It can be seen that there is large confusion between the Xian, Dang and Shan Shes, and also between the Xie and Zhi Shes. 6 This could indicate that in proto-Xiang, there is confusion between these categories relative to Middle Chinese. Footnote 6: In Baxter's transcription, these Shes correspond to the finals _-am_, _-ang_, _-an_; _-ea_, _-i_, respectively (Baxter and Sagart, 2014). There are only hypothetical IPA values available for these archaic categories.

Figure 5: Information yield in percentage averaged across four dialects. For HDBSCAN, the min samples and min cluster size parameters were set to 2 and 200, 5 and 75, and 20 and 20, respectively. The other three methods were employed on the original embeddings. For K-means and agglomerative clustering, the number of clusters was specified to be 30 and 10.

## 7 Discussions

Our current setting only operates on pre-abstracted symbols and lacks incorporation of acoustic or articulatory evidence. Incorporating multi-modal data into a knowledge graph framework could enhance the quality of embeddings and enable more accurate representations of phonological features. Also, the proposed method uses shared embeddings for symbolic components across different dialects, which cannot fully capture dialect-specific variations. Investigating contextualized or dialect-specific component embeddings could improve the model's ability to capture finer-grained phonological distinctions. Finally, phonetically similar components are currently treated as independent items, which is too absolute an assumption. However, it is also possible for phonetic cues to override the correct phonological alignment in the model. In many cases, phonetic similarity does not imply diachronic homology. Two phonetically equivalent syllables from two different dialects may have different origins. Conversely, two phonetically distinct syllables from two different dialects may be cognate. The subtle balance between "phonetic" and "phonological" proximity requires further discussion. Several lines of research may benefit from robust multi-dialectal representations. In dialectology, there is a need for estimating divergence between phonological systems. That includes the divergences between their constituents, such as individual characters, phonemes and syllables. With multi-dialectal representations, this divergence can be estimated quantitatively.
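One simple way to realize such an estimate (our own illustrative choice, not a procedure prescribed above) is a cosine distance between two characters' embeddings:

```python
import numpy as np

def char_divergence(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    # 1 - cosine similarity between two characters' bump embeddings.
    return 1.0 - float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
```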
In historical phonology, the reconstruction of a proto-language demands deep scrutiny of dialect systems, a process whose efficiency can be improved by manipulating the representations. Also, they can be used for completion of the phonological knowledge base. Often, knowledge bases for Sinitic phonology are fragmented, due to imperfect surveys, heterogeneity of sources, etc. The representations can be used to infer missing pronunciations in different dialects to improve the quality of observations.

\begin{table} \begin{tabular}{l|c|c|c} \hline **Alg. (Metric \%)** & **Hit@1** & **Hit@5** & **Hit@10** \\ \hline **BoxE** & **51.25** & **87.19** & **93.76** \\ **RotatE** & 33.11 & 57.47 & 66.18 \\ **ComplEx** & 9.40 & 24.65 & 35.37 \\ \hline \end{tabular} \end{table} Table 4: An empirical demonstration of the superiority of the BoxE algorithm for the phonological investigation task among common missing link prediction methods. The models were set to the same embedding dimension. None of the models were fine-tuned or run more than a single time, hence all readings should be seen as sub-optimal.

Figure 6: Confusion matrix for She.

The graph-based method proposed in this paper benefits from phonological characteristics specific to Sinitic languages, but is also limited by these characteristics. Specifically, the process of constructing a phonological graph from words, as proposed in this study, is less natural in languages where words typically have many syllables and vary in the number of syllables contained. In these languages, the temporal interaction of syllables within a word is a new phenomenon that the graph-based method needs to adapt to. Additionally, in these languages, it will be less straightforward to tokenize the words into expressive sub-words to use as nodes in the graph. Presumably, among non-Sinitic languages, the proposed method will be most performant in other languages of the Southeast Asian Sprachbund, such as those in the Hmong-Mien or Austroasiatic families. These languages share phonological features with Sinitic languages that enable our method. On the other hand, this method will likely meet more complications outside of the local sprachbund.

## 8 Conclusion

This paper demonstrated the potential of graph-based representation learning in Chinese historical phonology. The representations are potent in many ways, e.g., facilitating the reconstruction of minor proto-languages. In the future, more sophisticated techniques such as deep learning models could be explored to further improve the quality of the obtained representations. Furthermore, the proposed method can be integrated with other linguistic resources, such as recordings, articulatory time series, or orthographic corpora, to enrich the knowledge base and improve the accuracy of reconstructions. With the development of modern, massive linguistic datasets such as Nk2028 (nk2028, 2020), CogNet (Batsuren et al., 2022) or MorphyNet (Batsuren et al., 2021), as well as improvements in large pre-trained models, we can expect foundational models that possess emergent and meta-generalizing capabilities to arise in historical phonology or morphology. This avenue of research holds great promise for advancing our understanding of the phonology and evolution of Sinitic languages, and potentially other language families as well.

## Limitations

This study stems from a novel idea for Chinese Historical Phonology Studies.
As few direct predecessors could offer guidance, there are quite a few limitations to this study that may be addressed with further work. 1. While the initial-final-tone decomposition is convenient in this context, it also limits the transferability of the proposed tool to languages outside of the Sinosphere. This calls for further exploration of more generalizable approaches to phonological representation learning. 2. Polyphonic characters were not fully utilized in the study, and their alignment per reading and tokenization into separate identifiers should be considered in future work. 3. Finally, making full use of the dataset is crucial, and the stochastic train-test split used in this study may leave out important hints. Alternative sampling strategies, such as cross-validation or bootstrapping, could enhance the robustness of the results. ## Acknowledgements We are grateful for the valuable advice and feedback we received from various peers during the course of this work. Without their contributions, this research would not have been possible.
2307.16775
Arithmetic of Hecke L-functions of quadratic extensions of totally real fields
Deep work by Shintani in the 1970's describes Hecke $L$-functions associated to narrow ray class group characters of totally real fields $F$ in terms of what are now known as Shintani zeta functions. However, for $[F:\mathbb{Q}] = n \geq 3$, Shintani's method was ineffective due to its crucial dependence on abstract fundamental domains for the action of totally positive units of $F$ on $\mathbb{R}^n_+$, so-called $\textit{Shintani sets}$. These difficulties were recently resolved in independent work of Charollois, Dasgupta, and Greenberg and Diaz y Diaz and Friedman. For those narrow ray class group characters whose conductor is an inert rational prime in a totally real field $F$ with narrow class number $1$, we obtain a natural combinatorial description of these sets, allowing us to obtain a simple description of the associated Hecke $L$-functions. As a consequence, we generalize earlier work of Girstmair, Hirzebruch, and Zagier, that offer combinatorial class number formulas for imaginary quadratic fields, to real and imaginary quadratic extensions of totally real number fields $F$ with narrow class number $1$. For CM quadratic extensions of $F$, our work may be viewed as an effective affirmative answer to Hecke's Conjecture that the relative class number has an elementary arithmetic expression in terms of the relative discriminant.
Marie-Hélène Tomé
2023-07-31T15:41:54Z
http://arxiv.org/abs/2307.16775v2
# Arithmetic of Hecke \(L\)-functions of quadratic extensions of totally real fields ###### Abstract. Deep work by Shintani in the 1970's describes Hecke \(L\)-functions associated to narrow ray class group characters of totally real fields \(F\) in terms of what are now known as Shintani zeta functions. However, for \([F:\mathbb{Q}]=n\geq 3\), Shintani's method was ineffective due to its crucial dependence on abstract fundamental domains for the action of totally positive units of \(F\) on \(\mathbb{R}^{n}_{+}\), so-called _Shintani sets_. These difficulties were recently resolved in independent work of Charollois, Dasgupta, and Greenberg and Diaz y Diaz and Friedman. For those narrow ray class group characters whose conductor is an inert rational prime in a totally real field \(F\) with narrow class number \(1\), we obtain a natural combinatorial description of these sets, allowing us to obtain a simple description of the associated Hecke \(L\)-functions. As a consequence, we generalize earlier work of Girstmair, Hirzebruch, and Zagier, that offer combinatorial class number formulas for imaginary quadratic fields, to real and imaginary quadratic extensions of totally real number fields \(F\) with narrow class number \(1\). For such extensions, our work may be viewed as an effective affirmative answer to Hecke's Conjecture that the relative class number has an elementary arithmetic expression in terms of the relative discriminant. ## 1. Introduction and statement of results The \(L\)-functions associated to number fields are important tools in analytic number theory that provide information about the algebraic properties of these fields, such as class numbers, fundamental units, and regulators. A pioneering example is the work of Dirichlet, who in the case of quadratic fields \(\mathbb{Q}(\sqrt{d})\) with fundamental discriminant \(d\), gave the following class number formula (see [8, Theorem 8.1.4]) \[h_{\mathbb{Q}(\sqrt{d})}=\begin{cases}\frac{w_{d}\sqrt{|d|}}{2\pi}\cdot L(1, \chi_{d})&\text{ if }d<0\\ \\ \frac{\sqrt{d}}{\ln(\varepsilon_{d})}\cdot L(1,\chi_{d})&\text{ if }d>0.\end{cases} \tag{1.1}\] Here \(L(s,\chi_{d})\) is the Dirichlet \(L\)-function associated to the Kronecker character \(\binom{d}{\cdot}\), \(w_{d}\) is the number of roots of unity lying in \(\mathbb{Q}(\sqrt{d})\), and \(\varepsilon_{d}\) is the fundamental unit of \(\mathbb{Q}(\sqrt{d})\). More recent class number formulas due to Girstmair [7], Hirzebruch [10], and Zagier [17] elegantly simplify this formula in terms of familiar objects from elementary number theory; namely they make surprising connections with digit expansions and continued fractions. For primes \(7\leq p\equiv 3\pmod{4}\) and \(g\) a primitive root in \(\mathbb{F}_{p}\), Girstmair showed that \[h_{\mathbb{Q}(\sqrt{-p})}=\frac{1}{g+1}\sum_{k=1}^{p-1}(-1)^{k}x_{k}, \tag{1.2}\] where \((x_{1},x_{2},\cdots,x_{p-1})\) are the digits of the periodic digit expansion of \(1/p\) in base \(g\). Under the additional assumption that \(h_{\mathbb{Q}(\sqrt{p})}=1\), Hirzebruch and Zagier proved \[h_{\mathbb{Q}(\sqrt{-p})}=\frac{1}{3}\sum_{i=1}^{2t}(-1)^{i}a_{i}, \tag{1.3}\] where \(\sqrt{p}\) has continued fraction expansion \(\sqrt{p}=[a_{0},\overline{a_{1}a_{2}\cdots a_{2t}}]\). 1 Footnote 1: It is a classical fact that these simple continued fractions have repeating digits of even period length. 
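As a quick numerical illustration of (1.2) (a sketch of ours, not part of the original development): the digits of \(1/p\) in base \(g\) are produced by long division, and the alternating sum recovers the class number.

```python
def girstmair_class_number(p: int, g: int) -> int:
    """Evaluate (1.2): h = (1/(g+1)) * sum_{k=1}^{p-1} (-1)^k x_k,
    where x_1, ..., x_{p-1} are the base-g digits of 1/p."""
    digits, r = [], 1
    for _ in range(p - 1):      # the expansion of 1/p has period p - 1 for g primitive
        r *= g
        digits.append(r // p)   # next base-g digit of 1/p
        r %= p
    return sum((-1) ** k * x for k, x in enumerate(digits, start=1)) // (g + 1)

print(girstmair_class_number(7, 3))    # 1 = h_{Q(sqrt(-7))},  with primitive root g = 3
print(girstmair_class_number(11, 2))   # 1 = h_{Q(sqrt(-11))}, with primitive root g = 2
print(girstmair_class_number(23, 5))   # 3 = h_{Q(sqrt(-23))}, with primitive root g = 5
```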
These results follow from the fact that Dirichlet \(L\)-functions can be expressed as finite linear combinations of Hurwitz zeta functions, which satisfy functional equations relating their values at \(s\) and \(1-s\) and whose values at nonpositive integers can be expressed in terms of generalized Bernoulli numbers. Combining these facts, the formulas in (1.1) become (see p. 234 of [5]) \[h_{\mathbb{Q}(\sqrt{d})}=\begin{cases}-\frac{w_{d}}{2|d|}\sum\limits_{r=1}^{|d|-1}\binom{d}{r}r&\text{ if }d<0\\ \\ -\frac{1}{2\ln(\varepsilon_{d})}\sum\limits_{r=1}^{d-1}\binom{d}{r}\ln\sin(\frac{\pi r}{d})&\text{ if }d>0.\end{cases} \tag{1.4}\] The alternating sums in (1.2) and (1.3) are combinatorial reformulations of the sums above. It is natural to ask whether class numbers can be expressed as finite alternating sums of combinatorial numbers beyond the realm of imaginary quadratic fields. We show that this is indeed the case for totally imaginary quadratic extensions of totally real fields \(F\) with narrow class number \(1\). As the discussion above suggests, it is the combinatorial structure of \(L\)-functions themselves which underlies these formulas, and so we require a generalization of the theory of Dirichlet \(L\)-functions and their decomposition as sums of Hurwitz zeta functions. To this end, we consider certain Hecke \(L\)-functions of totally real fields of narrow class number \(1\) and their combinatorial description in terms of generalizations of Hurwitz zeta functions. A significant step in this direction has already been obtained in the deep work of Shintani [13, 14]. We reformulate his work combinatorially in terms of number field invariants. We now recall the work of Shintani. Associated to a matrix \(A\in\mathbb{M}_{n\times r}(\mathbb{R}_{>0})\) and a vector \(\mathbf{x}=(x_{1},\cdots,x_{r})\in\mathbb{R}_{\geq 0}^{r}\), Shintani defined [14, SS2] the _Shintani zeta function_ \[\zeta(s,A,\mathbf{x})\coloneqq\sum_{m_{1},\dots,m_{r}=0}^{\infty}\prod_{i=1}^{n}\left(\sum_{j=1}^{r}a_{ij}(m_{j}+x_{j})\right)^{-s},\quad\text{Re}(s)>1 \tag{1.5}\] (in the notation of [2]), which coincides with the Hurwitz zeta function in the case \(n=r=1\). He described (see Lemma 2 of [14]) a decomposition of Hecke \(L\)-functions associated to totally real fields \(F\) in terms of Shintani zeta functions evaluated at a finite set of algebraic points. His decomposition depends critically on the explicit description of the fundamental domain of the group action of the totally positive units of \(\mathcal{O}_{F}\) on \(\mathbb{R}_{+}^{n}\), known as the _Shintani set_. For totally real \(F\) with \([F:\mathbb{Q}]\geq 3\), the construction of Shintani sets remained open. Therefore, Shintani's method offered a framework, albeit ineffective, for the decomposition of certain Hecke \(L\)-functions. Recent independent work by Charollois, Dasgupta, and Greenberg [4] and Diaz y Diaz and Friedman [15] filled the gap and provided explicit constructions of these Shintani sets. In view of this recent work, effective descriptions of Shintani's decomposition of Hecke \(L\)-functions can be obtained. Moreover, the relative class number formulas Shintani derived (see Theorem 2 of [13]) for totally imaginary quadratic extensions of \(F\) (i.e., where \(n=2\)) become effective for \(F\) of arbitrary degree over \(\mathbb{Q}\).
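For illustration, (1.5) can be evaluated numerically for \(\text{Re}(s)>1\) by truncating the sum; the cutoff \(M\) and the naive summation below are our own choices, with no claim of efficiency.

```python
import itertools
import numpy as np

def shintani_zeta(s: complex, A: np.ndarray, x: np.ndarray, M: int = 200) -> complex:
    """Naive truncation of (1.5); A is n x r with positive entries, x in R_{>=0}^r.
    Feasible only for very small r, since the truncated sum has M**r terms."""
    total = 0.0 + 0.0j
    for m in itertools.product(range(M), repeat=A.shape[1]):
        linear_forms = A @ (np.array(m) + x)   # the n linear forms evaluated at m + x
        total += np.prod(linear_forms ** (-s))
    return total

# n = r = 1 recovers a (truncated) Hurwitz zeta function:
# zeta(2, 1/2) = pi^2/2 ~ 4.9348, up to a truncation error of order 1/M.
print(shintani_zeta(2.0, np.array([[1.0]]), np.array([0.5])))
```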
By carefully studying the combinatorial structure endowed by these Shintani sets, we obtain combinatorial descriptions of Shintani's decomposition of certain Hecke \(L\)-functions. This allows us to obtain reformulations of analogs of (1.4), in the spirit of Girstmair, Hirzebruch, and Zagier, for all quadratic extensions of \(F\) with narrow class number \(1\), where \([F:\mathbb{Q}]\) is arbitrary. To make this precise, we now turn to the problem of describing the Shintani sets. Recall that by Dirichlet's Unit Theorem, when \(F\) is a totally real field of degree \(n\) over \(\mathbb{Q}\), the totally positive unit group \(\mathcal{O}_{F}^{\times,+}\) is a free abelian group of rank \(n-1\). Hence there exist \(n-1\) totally positive units \(\varepsilon_{1},\cdots,\varepsilon_{n-1}\) such that \(\mathcal{O}_{F}^{\times,+}=\langle\varepsilon_{1},\cdots,\varepsilon_{n-1}\rangle\). Following [2, 15], let the \(n\) real embeddings of \(F\) be given by \(\sigma_{1},\cdots,\sigma_{n}\) and let \(\iota\coloneqq F\hookrightarrow\mathbb{R}^{n}\) be given by \[x\mapsto(\sigma_{1}(x),\cdots,\sigma_{n}(x)),\quad x\in F. \tag{1.6}\] For any permutation \(\tau\in S_{n-1}\), we define \(f_{\tau,1}\coloneqq 1\), and \[f_{\tau,j}\coloneqq\prod_{i=1}^{j-1}\varepsilon_{\tau(i)},\quad 2\leq j\leq n, \tag{1.7}\] an associated matrix \[A^{\tau}\coloneqq(\sigma_{i}(f_{\tau,j}))\in\mathbb{M}_{n}(\mathbb{R}_{>0}), \tag{1.8}\] and a weight \(w_{\tau}\in\{0,\pm 1\}\) (see Section 2.3). When \(w_{\tau}\) is nonzero, the set of algebraic integers \(\mathcal{B}_{F,\tau}\coloneqq\{f_{\tau,1},f_{\tau,2},\cdots,f_{\tau,n}\}\) forms a \(\mathbb{Q}\)-basis for \(F\) and the set \(\mathcal{B}_{\iota(F),\tau}\coloneqq\{\iota(f_{\tau,1}),\iota(f_{\tau,2}), \cdots,\iota(f_{\tau,n})\}\) forms a basis for \(\mathbb{R}^{n}\). Hence the lattice \(\bigoplus_{i=1}^{n}\mathbb{Z}f_{\tau,i}\) is full rank. Let \(e_{n}\) be the \(n^{th}\) standard basis vector for \(\mathbb{R}^{n}\), and denote by \((c_{1},\cdots,c_{n})\) the coefficients of \(e_{n}\) in the basis \(\mathcal{B}_{\iota(F),\tau}\), i.e., \(e_{n}=\sum_{i=1}^{n}c_{i}\iota(f_{\tau,i})\). According to the sign of \(c_{i}\), define \(n\) intervals \[I_{\tau,i}\coloneqq\begin{cases}[0,1)&\text{if }c_{i}>0\\ (0,1)&\text{otherwise},\end{cases}\quad 1\leq i\leq n. \tag{1.9}\] For any nonzero integral ideal \(\mathfrak{f}\subset F\), the _Shintani set_\(R^{\tau}(\mathfrak{f})\) is defined by \[R^{\tau}(\mathfrak{f})\coloneqq\bigg{\{}z=\sum_{i=1}^{n}t_{z,\tau,i}f_{\tau,i} \in\mathfrak{f}^{-1}\,:\,(t_{z,\tau,1},\cdots,t_{z,\tau,n})\in I_{\tau,1} \times\cdots\times I_{\tau,n}\bigg{\}}. \tag{1.10}\] Now we turn to the problem of obtaining a combinatorial description of the Shintani sets associated to the ideals generated by inert rational primes. Throughout, we let \(F\coloneqq\mathbb{Q}(\theta_{F})\) for \(\theta_{F}\in\mathcal{O}_{F}\) be a totally real field with narrow class number \(1\). Furthermore, we let \(p\nmid[\mathcal{O}_{F}:\mathbb{Z}[\theta_{F}]]\) be a rational prime which remains inert in \(F\). Therefore, we have that \(\mathcal{O}_{F}/p\mathcal{O}_{F}\) is isomorphic to \(\mathbb{F}_{p^{n}}\) under the isomorphism \(\varphi\) (see (3.1)), and so we can fix \(\rho\) such that \(\mathbb{F}_{p^{n}}=\langle\rho\rangle\). Let \(h_{\rho}(x)\in\mathbb{Z}[x]\) be the minimal polynomial for \(\rho\) whose reduction mod \(p\) is a primitive polynomial in \(\mathbb{F}_{p^{n}}\), say \[h_{\rho}(x)=x^{n}+p_{n-1}x^{n-1}+\cdots+p_{0}. 
\tag{1.11}\] Using the coefficients of (1.11), we define a matrix \(A_{F,\rho}(z)\) and a vector \(\mathbf{v}_{F,\rho}\), whose entries lie in the rational function field \(\mathbb{Q}(z)\), by \[A_{F,\rho}(z)\coloneqq\begin{pmatrix}1&0&0&\cdots&0&2p_{0}\\ -z&1&0&\cdots&0&0&zp_{1}\\ 0&-z&1&\cdots&0&0&zp_{2}\\ 0&0&-z&\cdots&0&0&zp_{3}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&-z&1&zp_{n-2}\\ 0&0&0&\cdots&0&-z&1+zp_{n-1}\end{pmatrix}\qquad\text{and}\qquad\mathbf{v}_{F, \rho}\coloneqq\begin{pmatrix}p_{0}\\ p_{1}\\ p_{2}\\ p_{3}\\ \vdots\\ p_{n-2}\\ p_{n-1}\end{pmatrix}. \tag{1.12}\] Since \(\det(A_{F,\rho}(z))=1+zp_{n-1}+z^{2}p_{n-2}+\cdots+z^{n}p_{0}\) is a nonzero rational function (see Lemma 3.2), by Cramer's Rule, there is a unique vector of rational functions with integer coefficients, say \[\mathbf{X}_{F,\rho}\coloneqq(X_{F,\rho,1}(z),X_{F,\rho,2}(z),\cdots,X_{F,\rho, n}(z)),\] that satisfies \(A_{F,\rho}(z){\bf X}_{F,\rho}={\bf v}_{F,\rho}.\) As power series, for \(1\leq i\leq n\), we have that \[X_{F,\rho,i}(z)=\sum_{m\geq 0}x_{F,\rho,i}(m)z^{m}=\sum_{m\geq 0}x_{i}(m)z^{m}, \tag{1.13}\] where \(x_{i}(0)=-p_{i}\). Note that for notational convenience, we drop the dependence on \(F\) and \(\rho\). For each permutation \(\tau\) and each \(1\leq m<p^{n}\), the \(n\)-tuple \((x_{1}(m),\cdots,x_{n}(m))\) will be modified to obtain a finite set of \(n\)-tuples \(\hat{\bf x}_{\tau}(i,m)\coloneqq(\tilde{x}_{\tau,1}(i,m),\cdots,\tilde{x}_{ \tau,n}(i,m))\), where \(1\leq i\leq\#\left(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F})\right)\). These \(n\)-tuples are the coefficients of elements of \(R^{\tau}(p\mathcal{O}_{F})\) in the basis \(\mathcal{B}_{F,\tau}\). Using the notation above, we explicitly describe Hecke \(L\)-functions for narrow ray class group characters with finite part of their conductor given by \(p\mathcal{O}_{F}\) in terms of the combinatorial data above and Shintani zeta functions. **Theorem 1.1**.: _Assuming the notation and hypotheses above, we have that_ \[L(s,\chi_{F})=N(p\mathcal{O}_{F})^{-s}\sum_{\begin{subarray}{c}\tau\in S_{n-1 }\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}\exp\left(\frac{2\pi ik( n+m)}{d}\right)^{\#\left(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F})\right)} \zeta\left(s,A^{\tau},\hat{\bf x}_{\tau}(i,m)\right),\] _where \(\chi_{F}\) is a narrow ray class group character with conductor \(p\mathcal{O}_{F}\), \(N(p\mathcal{O}_{F})\) denotes the norm of the ideal \(p\mathcal{O}_{F}\), and \(\exp\left((2\pi ik)/d\right)\), a primitive \(d^{th}\) root of unity with \(1<d\ |\ (p^{n}-1)\), is the value of the finite part of the character \(\chi_{F}\) on the equivalence class of \(\varphi(\rho)\) in \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\)._ **Two Remarks.** (i) If \(n=1\), then \(F=\mathbb{Q}\) and every prime is inert in \(F\). Therefore, Theorem 1.1 applies for every odd prime \(p\), and gives the standard decomposition of \(L(s,\chi_{d_{p}})\) as a linear combination of Hurwitz zeta functions where \(d_{p}=(-1)^{\frac{p-1}{2}}p\). (ii) By the Chebotarev Density Theorem, for each field \(F\), Theorem 1.1 applies to a positive density of primes. Now we turn to the motivating problem of describing class numbers of imaginary quadratic extensions of totally real fields with narrow class number \(1\). **Corollary 1.2**.: _Assume the notation and hypotheses above. Let \(K=F(\sqrt{-p})\) and additionally assume that \(p\equiv 3\pmod{4}\). 
We have that_ \[h_{K}=\frac{1}{n}\cdot\frac{w_{K}}{[\mathcal{O}_{F}^{\times}: \mathcal{O}_{F}^{\times,+}][\mathcal{O}_{F}^{\times,+}:N_{K/F}\mathcal{O}_{K}^{ \times}]}\sum_{\begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\\ \times\left\{\sum_{m=1}^{p^{n}-1}\sum_{i=1}^{\#\left(\mathcal{O}_{ F}\cap R^{\tau}(p\mathcal{O}_{F})\right)}(-1)^{m}\sum_{\begin{subarray}{c}(l_{1}, \ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ \sum_{j=1}^{n}l_{j}=n\end{subarray}}\prod_{k=1}^{n}\frac{B_{l_{k}}(\tilde{x}_{ \tau,k}(i,m))}{l_{k}!}\operatorname{Tr}_{F/\mathbb{Q}}\biggl{(}\prod_{k=1}^{n} f_{\tau,k}^{l_{k}-1}\biggr{)}\right\},\] _where \(w_{K}\) is the number of roots of unity lying in \(K\), \(N_{K/F}\mathcal{O}_{K}^{\times}:=\{N_{K/F}(u)\ |\ u\in\mathcal{O}_{K}^{\times}\}\), and \(B_{n}(x)\) denotes the \(n^{th}\) Bernoulli polynomial._ **Three Remarks.** (i) When \(n=1\), the set \(\{k/p:1\leq k<p\}\) considered by Girstmair is the Shintani set \(R(p^{-1}\mathbb{Z})\) generated by a primitive root \(g\) of \(\mathbb{F}_{p}^{*}\) through the relation \(x_{1}(m+1)=gx_{1}(m)\) for \(1\leq m<p\). This relation is encoded as the rational function \(X_{F,g,1}(z)=gz/(1-gz)\). (ii) For field extensions \(K/F\) of this form, the above formula may be viewed as an effective affirmative answer to Hecke's Conjecture that \(h_{K}/h_{F}\) admits an elementary arithmetic description in terms of the relative discriminant of similar composition as (1.4) (see p. 2 of [9]). (iii) Independently of this work, the case of Corollary (1.2) when \(n=2\) was simultaneously obtained by Athaide, Cardwell, and Thompson in [1]. **Example**.: Let \(F=\mathbb{Q}(\zeta_{7}+\zeta_{7}^{-1})\) and let \(K=F(\sqrt{-3})\). The irreducible polynomial for \(-\zeta_{7}-\zeta_{7}^{-1}\) is primitive over \(\mathbb{F}_{3}\), so we may identify \(-\zeta_{7}-\zeta_{7}^{-1}\) with a generator \(\rho\) of \(\mathbb{F}_{27}\). 
One checks that the minimal polynomial for \(\rho\) is given by \[h_{\rho}(x)=x^{3}-x^{2}-2x+1.\] By applying Cramer's Rule, we find that \[X_{F,\rho,1}=\frac{1}{1-z-2z^{2}+z^{3}},\quad X_{F,\rho,2}=\frac{z-2}{1-z-2z^{ 2}+z^{3}},\quad\text{and}\quad X_{F,\rho,3}=\frac{z^{2}-2z-1}{1-z-2z^{2}+z^{3}}.\] Using SageMath, we obtain \(w_{\text{id}}=w_{(12)}=1\), \(f_{\text{id},1}=f_{(1,2),1}=1\), \(f_{\text{id},2}=(\zeta_{7}+\zeta_{7}^{-1})^{2}\), \(f_{(12),2}=(\zeta_{7}+\zeta_{7}^{-1}+1)^{2}\), \(f_{\text{id},3}=f_{(1,2),3}=(\zeta_{7}+\zeta_{7}^{-1})^{2}(\zeta_{7}+\zeta_{ 7}^{-1}+1)^{2}\), \(w_{K}=6\), \([\mathcal{O}_{F}^{\times}:\mathcal{O}_{F}^{\times,+}]=8\), and \([\mathcal{O}_{F}^{\times,+}:N_{K/F}\mathcal{O}_{K}^{\times}]=1.\) Hence, we have that the class number is given by \[h_{K}=\frac{1}{3}\cdot\frac{6}{8}\Bigg{(}\sum_{m=1}^{26}\sum_{i= 1}^{3}(-1)^{m}\sum_{\begin{subarray}{c}(l_{1},l_{2},l_{3})\in\mathbb{Z}_{\geq 0 }^{3}\\ \sum_{i=1}^{3}l_{i}=3\end{subarray}}\prod_{k=1}^{3}\frac{B_{l_{k}}(\tilde{x}_ {\text{id},k}(i,m))}{l_{k}!}\operatorname{Tr}_{F/\mathbb{Q}}\bigg{(}\prod_{k= 1}^{3}f_{\text{id},k}^{l_{k}-1}\bigg{)}\] \[+\sum_{m=1}^{26}(-1)^{m}\sum_{\begin{subarray}{c}(l_{1},l_{2},l_ {3})\in\mathbb{Z}_{\geq 0}^{3}\\ \sum_{i=1}^{3}l_{i}=3\end{subarray}}\prod_{k=1}^{3}\frac{B_{l_{k}}(\tilde{x}_ {(12),k}(1,m))}{l_{k}!}\operatorname{Tr}_{F/\mathbb{Q}}\bigg{(}\prod_{k=1}^{3} f_{(12),k}^{l_{k}-1}\bigg{)}\Bigg{)}\] \[=\frac{1}{3}\cdot\frac{6}{8}\left(\frac{1435}{4374}-\frac{5}{2916 }+\frac{325}{5832}+\cdots-\frac{5}{216}+\frac{17}{72}-\frac{1}{3}\right)= \frac{1}{3}\cdot\frac{6}{8}\left(\frac{24}{6}\right)=1.\] Expressing \(L(s,\chi_{F})\) in terms of evaluation of the Shintani zeta function over a union of Shintani sets sheds light on the combinatorics of the \(L\)-function and its higher order derivatives. The study of the Taylor expansion at \(s=0\) (or at \(s=1\) via the functional equation) of the Hecke \(L\)-function \(L(s,\chi_{F})\) associated to the narrow ray class group character \(\chi_{F}\), is a fundamental problem in number theory. This expansion holds significant information for understanding arithmetic properties by using the data intrinsic to \(F\). Hence, it is natural to investigate the higher order derivatives of Hecke \(L\)-functions associated to narrow ray class group characters, as they have the potential to encode essential arithmetic invariants. In particular, when \(\chi_{F}\) is taken to be the narrow ray class group character associated to the field extension \(K/F\) through class field theory, denoted \(\chi_{K/F}\), the Taylor series expansion at \(s=0\) of the associated \(L\)-function encodes arithmetic invariants of \(K\). When \(K/F\) is a totally imaginary extension, the constant term in the Taylor expansion gives the relative class number of \(K/F\). When only one infinite place splits in \(K\), the exponential of the first derivative of this \(L\)-function yields Stark units, which can be used to generate \(K\). For \(L\)-functions associated to totally real quadratic extensions of \(F\), the \(n^{th}\) derivative of \(L(s,\chi_{K/F})\) gives the relative class number and also the relative regulator \(R_{K}/R_{F}\) of the extension (for example, see [3, Chapter 2]). **Corollary 1.3**.: _Assume the notation and hypotheses above. Let \(K=F(\sqrt{p})\) and assume additionally that \(p\equiv 1\pmod{4}\). 
Then we have that_ \[h_{K}\cdot\frac{R_{K}}{R_{F}}=\frac{1}{n!}\sum_{k=0}^{n}(-1)^{k} \binom{n}{k}\left(\ln(N(p\mathcal{O}_{F}))\right)^{n-k}\sum_{\begin{subarray}{ c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{ \#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta^{(k)}\left(0,A^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)\right).\] **Remark**.: Shintani also proved closed formulas for the first and second derivative of the Shintani zeta function evaluated at \(s=0\) in terms of special functions. More precisely, in [14, Proposition 1], he related the first derivative at \(s=0\) to the logarithm of the Barnes multiple gamma function. However, not much is known about the values at \(s=0\) of higher order derivatives. **Example**.: Let \(F=\mathbb{Q}(\sqrt{2})\) and let \(K=F(\sqrt{5})\). The irreducible polynomial for \(2+\sqrt{2}\) is primitive over \(\mathbb{F}_{5}\), and hence we may identify \(2+\sqrt{2}\) with a generator \(\rho\) of \(\mathbb{F}_{25}\). One can verify that the minimal polynomial for \(\rho\) is given by \[h_{\rho}(x)=x^{2}-4x+2.\] Applying Cramer's Rule, we compute \[X_{F,\rho,1}=\frac{2}{1-4z+2z^{2}}\quad\text{and}\quad X_{F, \rho,2}=\frac{-4+2z}{1-4z+2z^{2}}.\] Using SageMath, we obtain \(w_{\text{id}}=1\), \(f_{\text{id},1}=1\), and \(f_{\text{id},2}=3+2\sqrt{2}\). We have that \[A^{\text{id}}=\begin{pmatrix}1&3+2\sqrt{2}\\ 1&3-2\sqrt{2}\end{pmatrix},\] and hence, the class number and relative regulator are given by \[h_{K}\cdot\frac{R_{K}}{R_{F}}=1.7501158...=\frac{1}{2}\cdot\sum_{ k=0}^{2}\sum_{m=1}^{24}\sum_{i=1}^{2}(-1)^{k+m}\binom{2}{k}\left(\ln(25) \right)^{2-k}\zeta^{(k)}\left(0,A^{\text{id}},\tilde{\mathbf{x}}_{\text{id} }(i,m)\right),\] where the values of \(\tilde{\mathbf{x}}_{\text{id}}(i,m)\) are displayed in the following tables. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \(m\) & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) & \(10\) & \(11\) & \(12\) \\ \hline & \((\frac{7}{10},\frac{1}{10})\) & \((\frac{4}{5},\frac{1}{5})\) & \((\frac{3}{10},\frac{1}{10})\) & \((\frac{3}{5},0)\) & \((\frac{3}{10},\frac{3}{10})\) & \((1,\frac{5}{5})\) & \((\frac{2}{5},\frac{1}{5})\) & \((\frac{3}{5},\frac{2}{5})\) & \((\frac{3}{5},\frac{1}{5})\) & \((\frac{1}{5},0)\) & \((\frac{1}{10},\frac{1}{10})\) & \((1,\frac{2}{5})\) \\ \hline \hline \(m\) & \(13\) & \(14\) & \(15\) & \(16\) & \(17\) & \(18\) & \(19\) & \(20\) & \(21\) & \(22\) & \(23\) & \(24\) \\ \hline & \((\frac{4}{5},\frac{2}{5})\) & \((\frac{7}{10},\frac{3}{10})\) & \((\frac{1}{5},\frac{2}{5})\) & \((\frac{2}{5},0)\) & \((\frac{1}{5},\frac{1}{5})\) & \((\frac{1}{10},\frac{3}{10})\) & \((\frac{1}{10},\frac{3}{10})\) & \((\frac{9}{10},\frac{1}{10})\) & \((\frac{9}{10},\frac{3}{10})\) & \((\frac{4}{5},0)\) & \((\frac{2}{5},\frac{2}{5})\) & \((\frac{1}{2},\frac{1}{10})\) \\ \hline \end{tabular} Table 1. Points \(\tilde{\mathbf{x}}_{\text{id}}(1,m)\) for \(1\leq m\leq 24\). Now, we describe the organization of the paper. In Sections 2.1 and 2.2, we recall basic results on Hecke \(L\)-functions and Shintani zeta functions. In Section 2.3, we recall the work of Charollois, Dasgupta, and Greenberg [4] and Diaz y Diaz and Friedman [15] which offers an effective construction of higher dimensional Shintani sets. 
Using these results, in Section 3, we prove Theorem 1.1 and in Section 4, we prove Corollaries 1.2 and 1.3 as a direct application of Theorem 1.1 to Hecke characters associated to relative quadratic extensions by class field theory. ## Acknowledgements The author is a participant in the 2023 UVA REU in Number Theory. She is grateful for the support of grants from Jane Street Capital, the National Science Foundation (DMS-2002265 and DMS- 2147273), the National Security Agency (H98230-23-1-0016), and the Templeton World Charity Foundation. She thanks Ken Ono for suggesting this problem and for his mentorship and support, Wei-Lun Tsai for his mentorship, support, and for computing the above examples, and Eleanor McSpirit and Alejandro De Las Penas Castano for their mentorship and guidance. ## 2. Hecke \(L\)-functions and Shintani zeta functions To prove Theorem 1.1, we require some basic facts concerning Hecke \(L\)-functions, Shintani zeta functions, and Shintani sets which encode the interplay of these objects as dictated by the number fields in question. ### Hecke \(L\)-Functions In this section, we follow [12, Chapter VII SS6]. Dirichlet \(L\)-functions encapsulate the behavior of rational primes in extensions of \(\mathbb{Q}\). In the more general setting of number fields, the behavior of primes is captured by Hecke \(L\)-functions which naturally generalize Dirichlet \(L\)-functions. The simplest case of a Hecke \(L\)-function is the Dedekind zeta-function associated to a number field \(F\), defined as \[\zeta_{F}(s)=\sum_{\mathfrak{a}\subset\mathcal{O}_{F}}N(\mathfrak{a})^{-s}= \prod_{\mathfrak{p}}(1-N(\mathfrak{p})^{-s})^{-1}\quad\text{Re}(s)>1,\] where \(N(\mathfrak{a})=|\mathcal{O}_{F}/\mathfrak{a}|\), the sum ranges over all nonzero ideals of \(\mathcal{O}_{F}\), and the product is taken over prime ideals \(\mathfrak{p}\) of \(\mathcal{O}_{F}\). Global information about \(F\), such as the class number and the regulator, can be obtained from the analytic properties of the Dedekind zeta-function constructed from only local information about the primes of \(\mathcal{O}_{F}\). The analytic class number formula gives the following explicit formula for the residue of \(\zeta_{F}(s)\) at \(s=1\) \[\operatorname*{Res}_{s=1}\zeta_{F}(s)=\frac{2^{r_{1}}(2\pi)^{r_{2}}h_{F}R_{F} }{\sqrt{|d_{F}|}w_{F}}, \tag{2.1}\] where \(R_{F}\) is the regulator, \(d_{F}\) is the absolute discriminant, \(w_{F}\) is the number of roots of unity in \(F\), and \(r_{1}\) is the number of real embeddings of \(F\) and \(r_{2}\) is the number of pairs of complex embeddings of \(F\). By the functional equation of the Dedekind zeta-function, (2.1) implies that the first nonzero term in the Taylor series expansion of \(\zeta_{F}(s)\) around \(s=0\) is given by (see [16, p. 2]) \[-\frac{h_{F}R_{F}}{w_{F}}s^{r_{1}+r_{2}-1}. \tag{2.2}\] In the setting of \(F=\mathbb{Q}\), for any positive integer \(m\), a Dirichlet character mod \(m\) is a character \(\chi_{m}\) of the group \((\mathbb{Z}/m\mathbb{Z})^{\times}\) extended to all \(n\in\mathbb{Z}\) by letting \(\chi(n)=0\) when \((n,m)\neq 1\), and hence \(\chi_{m}\) has period \(m\) (see e.g., [6, p. 27]). The Dirichlet \(L\)-function associated to \(\chi_{m}\) is given by \[L(s,\chi_{m})=\sum_{n=1}^{\infty}\chi_{m}(n)n^{-s}=\prod_{p}(1-\chi_{m}(p)p^{- s})^{-1},\quad\text{Re}(s)>1.\] Seeking a generalization of Dirichlet \(L\)-functions and their analytic properties, Hecke was led to the notion of what is now called a Hecke character. 
The \(L\)-functions associated to these characters can be shown to satisfy a functional equation. To make this precise, we require the following definitions. A _modulus_\(\mathfrak{m}\) is defined as the formal product of finite primes and real infinite primes of \(F\). The modulus consists of \[\mathfrak{m}_{f}=\prod_{\mathfrak{p}\mid\infty}\mathfrak{p}^{\mathfrak{m}( \mathfrak{p})},\quad\mathfrak{m}_{\infty}=\prod_{v\mid\infty}v^{\mathfrak{m}( v)},\] where \(\mathfrak{m}(\mathfrak{p})\geq 0\) and \(\mathfrak{m}(\mathfrak{p})>0\) for only finitely many finite primes of \(F\), \(\mathfrak{m}_{f}\) is an integral ideal of \(\mathcal{O}_{F}\), \(\mathfrak{m}(v)\in\{0,1\}\), and \(\mathfrak{m}_{\infty}\) is a formal product of a subset of infinite real primes of \(F\). A fractional ideal \(\mathfrak{a}\subset I_{F}\) is said to be coprime to the modulus \(\mathfrak{m}\) if no primes appearing in the decomposition of \(\mathfrak{a}\) in \(\mathcal{O}_{F}\) appear in that of \(\mathfrak{m}_{f}\). These ideals form a group, denoted \(I_{F}^{\mathfrak{m}}\). A _Hecke character_\(\mathrm{mod}\) is a character \(\chi:I_{F}^{\mathfrak{m}}\to\mathbb{S}^{1}\) such that there exists a pair of characters \[\chi_{f}:(\mathcal{O}_{F}/\mathfrak{m}_{f})^{\times}\to\mathbb{S}^{1}\quad \text{and}\quad\chi_{\infty}:\mathbb{R}^{\times}\to\mathbb{S}^{1}\] for which \[\chi((\alpha))=\chi_{f}(\alpha)\chi_{\infty}(\alpha),\] for every \(\alpha\in\mathcal{O}_{F}\) coprime to \(\mathfrak{m}\). The character \(\chi_{f}\) is a multiplicative function on \((\mathcal{O}_{F}/\mathfrak{m}_{f})^{\times}\), which extends to all invertible \(\mathcal{O}_{F}\)-ideals by setting \(\chi_{f}(\mathfrak{a})=0\) when \(\mathfrak{a}\) is not coprime to the modulus \(\mathfrak{m}\). The _Hecke \(L\)-function_ associated to a Hecke character \(\chi\) is defined as \[L(s,\chi)=\sum_{\mathfrak{a}\subset\mathcal{O}_{F}}\chi(\mathfrak{a})N( \mathfrak{a})^{-s}=\prod_{\mathfrak{p}}\big{(}1-\chi(\mathfrak{p})N(\mathfrak{ p})^{-s}\big{)}^{-1}\quad\text{Re}(s)>1, \tag{2.3}\] where the sum is taken over all nonzero ideals of \(\mathcal{O}_{F}\) and the product is taken over all prime ideals \(\mathfrak{p}\) of \(\mathcal{O}_{F}\). Throughout, we restrict to narrow ray class group characters defined as follows. Let \[P_{F}^{\mathfrak{m}}\coloneqq\{(\alpha)\subset\mathcal{O}_{F}\ |\ \alpha\equiv 1 \pmod{\mathfrak{m}}\text{ and }\sigma_{v}(\alpha)>0\text{ for all real infinite primes }v\ |\ \mathfrak{m}_{\infty}\}\,,\] where \(\sigma_{v}\) is the embedding associated to the infinite place \(v\) and the congruence \(\alpha\equiv 1\pmod{\mathfrak{m}}\) means that \(v_{\mathfrak{p}}(\alpha-1)\geq v_{\mathfrak{p}}(\mathfrak{m}_{f})\) for all primes \(\mathfrak{p}\) dividing \(\mathfrak{m}_{f}\). Then the _ray class group mod_\(\mathfrak{m}\) is the finite group \(I_{F}^{\mathfrak{m}}/P_{F}^{\mathfrak{m}}\). If \(\mathfrak{m}_{\infty}\) is the formal product of all the real infinite primes of \(F\), the ray class group mod \(\mathfrak{m}\) is called _narrow_. Let \(\mathfrak{m}\) be a modulus whose infinite part contains all real infinite places of \(F\). 
If \(\chi:I_{F}^{\mathfrak{m}}\to\mathbb{S}^{1}\) is a Hecke character for which \[\chi\left((\alpha)\right)=\chi_{f}(\alpha)N\bigg{(}\bigg{(}\frac{\alpha}{| \alpha|}\bigg{)}^{\mathbf{p}}\bigg{)},\] for all \(\alpha\in\mathcal{O}_{F}\), where \(\chi_{f}\) is a character of \((\mathcal{O}_{F}/\mathfrak{m}_{f})^{\times}\) and \(\mathbf{p}\in\mathbb{Z}^{r_{1}+2r_{2}}\) is an admissible vector (see e.g., [12, Chapter 7]), then \(\chi\) is a _narrow ray class group character_\(\mathrm{mod}\). The _conductor_ of a narrow ray class group character \(\chi\) mod \(\mathfrak{m}\) is the smallest modulus \(\mathfrak{f}\) dividing \(\mathfrak{m}\) such that \(\chi\) factors through the narrow ray class group \(I_{F}^{\mathfrak{f}}/P_{F}^{\mathfrak{f}}\). ### Shintani Zeta Function The Barnes multiple zeta function is defined as \[\zeta_{r}(s,\omega,x)\coloneqq\sum_{\Omega=m_{1}\omega_{1}+\cdots+m_{r} \omega_{r}}(x+\Omega)^{-s}, \tag{2.4}\] where \(\omega=(\omega_{1},\cdots,\omega_{r})\), \(\omega_{i}>0\) for all \(1\leq i\leq r\), \(x>0\), and \((m_{1},\cdots,m_{r})\) ranges over all \(n\)-tuples of non-negative integers (see e.g., [16, p. 3]). Barnes, using a method dating back to Riemann, proved that the Barnes multiple zeta function is holomorphic at \(s=0\) and has a meromorphic continuation to the whole complex plane with only simple poles at \(s=1,2,\cdots,r\). The special value of the Barnes multiple zeta function at \(s=0\) has the following form (see e.g., [16, p. 3]) \[\zeta_{r}\left(0,\omega,\sum_{k=1}^{r}\omega_{k}x_{k}\right)=(-1)^{r}\sum_{ \begin{subarray}{c}(l_{1},\cdots,l_{r})\in\mathbb{Z}_{\geq 0}^{n}\\ \sum_{i=1}^{r}l_{r}=r\end{subarray}}\omega_{1}^{l_{1}-1}\omega_{2}^{l_{2}-1} \cdots\omega_{r}^{l_{r}-1}\prod_{k=1}^{r}\frac{B_{l_{k}}(x_{k})}{l_{k}!}. \tag{2.5}\] Shintani generalized the definition of the Barnes multiple zeta function to a higher-dimensional zeta function taking as arguments an \(n\times r\)-matrix \(A\in\mathbb{M}_{n\times r}(\mathbb{R}_{>0})\) and a vector \(\mathbf{x}=(x_{1},\cdots,x_{r})\in\mathbb{R}_{\geq 0}^{r}\). Shintani [14, SS2] defined the Shintani zeta function \[\zeta(s,A,\mathbf{x})\coloneqq\sum_{m_{1},\ldots,m_{n}=0}^{\infty}\prod_{i=1 }^{n}\left(\sum_{j=1}^{n}a_{ij}(m_{j}+x_{j})\right)^{-s},\quad\mathrm{Re}(s)>1. \tag{2.6}\] The Shintani zeta function converges for \(\mathrm{Re}(s)>r/n\), and has a meromorphic continuation to the whole complex plane. It is holomorphic except for possible poles at \(s=1,2,\cdots,\lfloor r/n\rfloor\) and \(s=t/n\) for integers \(t\geq r\) and \(n\nmid t\) (see [16, p. 19]). When \(n=1\), the Shintani zeta function associated to \(\omega\), viewed as a \(1\times r\)-matrix, and the scalar \(x\) coincides with the Barnes multiple zeta-function. Therefore, for any row \(A_{i}=(a_{i1},a_{i2},\cdots,a_{ir})\) of the matrix \(A\), we have that (see e.g., [16, p. 
4]) \[\zeta(s,A_{i},\mathbf{x})=\zeta_{r}\left(s,(a_{i1},a_{i2},\cdots,a_{ir}),\sum_ {j=1}^{r}a_{ij}x_{j}\right).\] Shintani [14, SS2] showed \[\zeta(0,A,\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}\zeta(0,A_{i},x)=\frac{1}{n} \sum_{i=1}^{n}\zeta_{r}\left(0,(a_{i1},a_{i2},\cdots,a_{ir}),\sum_{j=1}^{r}a_ {ij}x_{j}\right).\] Using (2.5), he [13] obtained a finite formula for the Shintani zeta function evaluated at \(0\) (see e.g., [16, Theorem 2.1]) \[\zeta(0,A,\mathbf{x})=\frac{(-1)^{r}}{n}\sum_{i=1}^{n}\sum_{\begin{subarray}{ c}(l_{1},\ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ \sum_{j=1}^{r}l_{j}=r\end{subarray}}a_{i1}^{l_{1}-1}a_{i2}^{l_{2}-1}\cdots a_{ ir}^{l_{r}-1}\frac{B_{l_{1}}(x_{1})}{l_{1}!}\frac{B_{l_{2}}(x_{2})}{l_{2}!} \cdots\frac{B_{l_{r}}(x_{r})}{l_{r}!}. \tag{2.7}\] ### Effective Shintani Set In what follows, we will have need for the following notation used in and results of Diaz y Diaz and Friedman [15] and Barquero-Sanchez, Riad, and Tsai [2]. Recall that each permutation \(\tau\in S_{n-1}\) determines a set \(\{f_{\tau,1},\cdots,f_{\tau,n}\}\subset\mathcal{O}_{F}\), defined as the products of the totally positive units \(\varepsilon_{1},\cdots,\varepsilon_{n-1}\) generating \(\mathcal{O}_{F}^{\times,+}\) as follows: \(f_{\tau,1}\coloneqq 1\) and for \(2\leq j\leq n\), \(f_{\tau,j}\coloneqq\prod_{i=1}^{j-1}\varepsilon_{\tau(i)}\). For each \(1\leq j\leq n\), the vector \(\iota(f_{\tau,j})\coloneqq(\sigma_{1}(f_{\tau,j}),\cdots,\sigma_{n}(f_{\tau,j}))\in \mathbb{R}^{n}\) is defined to be the vector of real embeddings of \(f_{\tau,j}\). The \(n\times n\)-matrix with \(\iota(f_{\tau,j})\) as its \(j^{th}\) column is denoted by \(A^{\tau}\). Using the matrix \(A^{\tau}\) and the totally positive fundamental units \(\varepsilon_{1},\cdots,\varepsilon_{n}\), the weight \(w_{\tau}\) associated to \(\tau\) is defined as \[w_{\tau}\coloneqq\frac{(-1)^{n-1}\mathrm{sgn}(\tau)\cdot\mathrm{sign}(\det(A^ {\tau}))}{\mathrm{sign}(\det(\log|\sigma_{i}(\varepsilon_{j})|)_{1\leq i,j \leq n-1})}\in\{0,\pm 1\}.\] For any nonzero integral ideal \(\mathfrak{f}\subset F\), the _Shintani set_\(R^{\tau}(\mathfrak{f})\) is given by \[R^{\tau}(\mathfrak{f})=\bigg{\{}z=\sum_{i=1}^{n}t_{z,\tau,i}f_{\tau,i}\in \mathfrak{f}^{-1}\,:\,(t_{z,\tau,1},\cdots,t_{z,\tau,n})\in I_{\tau,1}\times \cdots\times I_{\tau,n}\bigg{\}}.\] For notational convenience, let \(\mathfrak{t}_{z,\tau}=(t_{z,\tau,1},\cdots,t_{z,\tau,n})\). The following result of Diaz y Diaz and Friedman will be needed in the proof of Theorem 1.1. **Corollary 2.1** (Corollary 3 of [15]).: _If \(F\) has narrow class number \(1\) and \(\chi\) is a narrow ray class group character of \(F\) having the ideal \(\mathfrak{f}\) as the finite part of its conductor, then we have that_ \[L(s,\chi)=N(\mathfrak{f})^{-s}\sum_{\begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{z\in R^{\tau}(\mathfrak{f})}\chi((z) \mathfrak{f})\zeta(s,A^{\tau},\mathfrak{t}_{z,\tau}),\] _where \(\zeta(s,A,\mathbf{x})\) is the Shintani zeta function._ The contribution from summing over each Shintani set \(R^{\tau}(\mathfrak{f})\) depends on the permutation \(\tau\) which determines \(\{f_{\tau,1},\cdots,f_{\tau,n}\}\). The value of \(w_{\tau}\) weights the contribution from the elements of \(R^{\tau}(\mathfrak{f})\). 
The weight \(w_{\tau}=0\) precisely when \(A^{\tau}\) is singular, i.e., when the vectors \(\{\iota(f_{\tau,i})\;:\;1\leq i\leq n\}\) do not form a basis of \(\mathbb{R}^{n}\), and hence the totally positive units \(\{f_{\tau,i}\;:\;1\leq i\leq n\}\) do not form a \(\mathbb{Q}\)-basis for \(F\). When this is the case, the elements of \(R^{\tau}(\mathfrak{f})\) do not contribute to the value of \(L(s,\chi)\). In what follows, we assume \(w_{\tau}\neq 0\). We denote the set \(\{f_{\tau,1},\cdots,f_{\tau,n}\}\) giving a \(\mathbb{Q}\)-basis for \(F\) by \(\mathcal{B}_{F,\tau}\) and the set \(\{\iota(f_{\tau,1}),\cdots,\iota(f_{\tau,n})\}\) giving a basis for \(\mathbb{R}^{n}\) by \(\mathcal{B}_{\iota(F),\tau}\). Since \(F\) is a \(\mathbb{Q}\)-vector space of rank \(n\) with basis \(\mathcal{B}_{F,\tau}\), every element of \(F\) can be expressed as a unique \(\mathbb{Q}\)-linear combination of these basis elements. In particular, for any nonzero integral ideal \(\mathfrak{f}\subset F\), any element \(z\) lying in the fractional ideal \(\mathfrak{f}^{-1}\) has a unique expression as a \(\mathbb{Q}\)-linear combination \[z=\sum_{i=1}^{n}t_{z,\tau,i}f_{\tau,i}.\] The coordinates \(t_{z,\tau,i}\) of \(z\in\mathfrak{f}^{-1}\) in the basis \(\mathcal{B}_{F,\tau}\) determine when \(z\) lies in \(R^{\tau}(\mathfrak{f})\). The set \(R^{\tau}(\mathfrak{f})\) can be described as the set of all \(z\in\mathfrak{f}^{-1}\) for which \(\mathfrak{t}_{z,\tau}\) lies inside the bounded region of \(\mathbb{R}^{n}\) determined by \(I_{\tau,1}\times\cdots\times I_{\tau,n}\), where these intervals are defined as in (1.9). Associated to each interval \(I_{\tau,i}\) of the form \((0,1]\) we define the following modified fractional part function \[\{x\}_{I_{\tau,i}}=\begin{cases}\{x\}&\text{if }x\not\in\mathbb{Z}\\ 1&\text{if }x\in\mathbb{Z},\end{cases}\] otherwise, if \(I_{\tau,i}=[0,1)\), then \(\{x\}_{I_{\tau,i}}\) is the usual fractional part function. It is readily seen that the coordinates of \(z\) in the basis \(\mathcal{B}_{F,\tau}\) and the coordinates of \(\iota(z)\) in the basis \(\mathcal{B}_{\iota(f_{\tau})}\) coincide. The Shintani set \(R^{\tau}(\mathfrak{f})\) may also be viewed as the subset of \(\mathfrak{f}^{-1}\) whose image under \(\iota\) falls within the fundamental parallelipiped \(P^{\tau}\) for the full rank lattice \(\bigoplus_{i=1}^{n}\mathbb{Z}\iota(f_{\tau,i})\subset\mathbb{R}^{n}\). Hence the volume of \(P^{\tau}\) is given by the determinant of \(A^{\tau}\). Moreover, \(R^{\tau}(\mathfrak{f})\) is finite because, as is the case for any fractional ideal of \(F\), \(\mathfrak{f}^{-1}\) determines a full rank lattice in \(\mathbb{R}^{n}\). The quotient \(G_{\tau}(\mathfrak{f})\coloneqq\mathfrak{f}^{-1}/\bigoplus_{i=1}^{n}\mathbb{Z} f_{\tau,i}\) is a finite abelian group. Next we turn to the problem of describing the algebraic structure of the Shintani set. Throughout, we assume \(\chi\) is a narrow ray class group character of \(F\) with finite part of its conductor given by the principal ideal \(\mathfrak{f}=(\alpha)\). 
**Proposition 2.2** (Proposition 4.1 of [2]).: \(R^{\tau}(\mathfrak{f})\) _is a complete set of coset representatives for the quotient group \(G_{\tau}(\mathfrak{f})\) under the bijection_ \[R^{\tau}(\mathfrak{f}) \to G_{\tau}(\mathfrak{f})\] \[z \mapsto z+\bigoplus_{i=1}^{n}\mathbb{Z}f_{\tau,i}.\] By virtue of this bijection, one can define a binary operation \(\oplus:R^{\tau}(\mathfrak{f})\times R^{\tau}(\mathfrak{f})\to R^{\tau}( \mathfrak{f})\) for any two elements \(z_{1},z_{2}\in R^{\tau}(\mathfrak{f})\) using the group law of \(G_{\tau}(\mathfrak{f})\). **Proposition 2.3** (Proposition 4.3 of [2]).: _The element \(z_{1}\oplus z_{2}\) is defined to be the unique coset representative of_ \[z_{1}+z_{2}+\bigoplus_{i=1}^{n}\mathbb{Z}f_{\tau,i}\] _lying in \(R^{\tau}(\mathfrak{f})\). Under the group law \(\oplus\), \(R^{\tau}(\mathfrak{f})\) is a finite abelian group._ **Remark**.: The additive identity of \(R^{\tau}(\mathfrak{f})\) is the element \(1_{R^{\tau}(\mathfrak{f})}\in\bigoplus_{i=1}^{n}\mathbb{Z}f_{\tau,i}\) with coordinate vector \(\mathbf{t}_{z,\tau}\), where \(t_{z,\tau,i}=0\) or \(1\) according to the definition of the interval \(I_{\tau,i}\) (see (1.9)). Using the additive group structure of \(R^{\tau}(\mathfrak{f})\), one can establish the following homomorphism of groups which will be needed in the proof of Theorem 1.1. **Proposition 2.4** (Proposition 4.4 of [2]).: _The map multiplication by \(\alpha\) and reduction modulo \(\mathfrak{f}\)_ \[\pi_{\alpha,\tau}\colon R^{\tau}(\mathfrak{f}) \to\mathcal{O}_{F}/\mathfrak{f}\] \[z \mapsto\alpha z+\mathfrak{f}\] _is a surjective additive group homomorphism and for each coset \(w+\mathfrak{f}\in\mathcal{O}_{F}/\mathfrak{f}\),_ \[\#\pi_{\alpha,\tau}^{-1}(w+\mathfrak{f})=\#\mathrm{ker}(\pi_{\alpha,\tau}).\] By the First Isomorphism Theorem, \(R^{\tau}(\mathfrak{f})/\mathrm{ker}(\pi_{\alpha,\tau})\cong\mathcal{O}_{F}/ \mathfrak{f}\). Let \[\pi_{\alpha,\tau}^{*}:R^{\tau}(\mathfrak{f})/\mathrm{ker}(\pi_{\alpha,\tau}) \xrightarrow{\sim}\mathcal{O}_{F}/\mathfrak{f} \tag{2.8}\] be the bijection on \(R^{\tau}(\mathfrak{f})/\mathrm{ker}(\pi_{\alpha,\tau})\) induced by \(\pi_{\alpha,\tau}\). The map \(\pi_{\alpha,\tau}^{*}\) restricted to \((\mathcal{O}_{F}/\mathfrak{f})^{\times}\) is a bijection between \(R^{\tau}(\mathfrak{f})/\mathrm{ker}(\pi_{\alpha,\tau})-\{1_{R^{\tau}(\mathfrak{ f})}+\mathrm{ker}(\pi_{\alpha,\tau})\}\) and \((\mathcal{O}_{F}/\mathfrak{f})^{\times}\). For notational convenience, let \[R^{\tau}(\mathfrak{f})^{\times}\coloneqq R^{\tau}(\mathfrak{f})/\mathrm{ker }(\pi_{\alpha,\tau})-\{1_{R^{\tau}(\mathfrak{f})}+\mathrm{ker}(\pi_{\alpha, \tau})\},\] suggestive of the structure endowed by identifying \(R^{\tau}(\mathfrak{f})^{\times}\) with \((\mathcal{O}_{F}/\mathfrak{f})^{\times}\) under \(\pi_{\alpha,\tau}^{*}\). Next we turn to an explicit characterization of the kernel of the map \(\pi_{\alpha,\tau}\). **Lemma 2.5** (Lemma 4.6 & Proposition 4.7 of [2]).: _We have that_ \[\mathrm{ker}(\pi_{\alpha,\tau})=\mathcal{O}_{F}\cap R^{\tau}(\mathfrak{f}),\] _and_ \[\#\mathrm{ker}(\pi_{\alpha,\tau})=\frac{\mathrm{vol}(P_{F}^{\tau})}{\sqrt{d_ {F}}}=\frac{\det(A^{\tau})}{\sqrt{d_{F}}},\] _where \(d_{F}\) is the discriminant of \(F\)._ Using the tools introduced in Section 2 and the results of [2] we show the following lemma characterizing the relationship between narrow ray class group characters and the Shintani set for which we have need in the proof of Theorem 1.1. 
We show that \(\chi\) is invariant under translation by elements of \(\mathrm{ker}(\pi_{\alpha,\tau})\). **Lemma 2.6**.: _Let \(\chi\) be the narrow ray class group character which has the ideal \(\mathfrak{f}\) as the finite part of its conductor. If \(w\in\ker(\pi_{\alpha,\tau})\) then for any \(z\in R^{\tau}(\mathfrak{f})\),_ \[\chi((z)\mathfrak{f})=\chi((z\oplus w)\mathfrak{f}).\] Proof.: By Lemma 2.5, \(w\in\mathcal{O}_{F}\) and hence \((w)\mathfrak{f}\subset\mathfrak{f}\). By Proposition 2.3, \[s\coloneqq z\oplus w-(z+w)\in\bigoplus_{i=1}^{n}\mathbb{Z}f_{\tau,i}\subset \mathcal{O}_{F},\] and so \(w-s\in\mathcal{O}_{F}\). Since \(\mathfrak{f}\) is the finite part of the conductor of \(\chi\), \[\chi((z\oplus w)\mathfrak{f})=\chi((z+w-s)\mathfrak{f})=\chi((z)\mathfrak{f}+ (w-s)\mathfrak{f})=\chi((z)\mathfrak{f}).\] ## 3. Proof of Theorem 1.1 In this section, we will prove Theorem 1.1. To this end, we require several lemmas. ### Some Lemmas Here we derive lemmas that will be required to prove Theorem 1.1 using the tools introduced in Section 2. We specialize to the following setting. Let \(K\) be a quadratic extension of a totally real field \(F=\mathbb{Q}(\theta_{F})\) of narrow class number \(1\), where \(\theta_{F}\) is chosen to be an algebraic integer. Further, suppose \(p\) is a rational prime which remains inert in \(F\) and \(p\nmid[\mathcal{O}_{F}:\mathbb{Z}[\theta_{F}]]\). Let \(g(x)\in\mathbb{Z}[x]\) be the minimal polynomial of \(\theta_{F}\) and denote by \(\overline{g}(x)\in\mathbb{F}_{p}[x]\) its reduction mod \(p\). Define the map \[\varphi\colon\mathbb{F}_{p}[x]/(\overline{g}) \to\mathcal{O}_{F}/p\mathcal{O}_{F} \tag{3.1}\] \[x \mapsto\theta_{F}+p\mathcal{O}_{F}.\] Let \(\chi_{F}\) be the narrow ray class group character with finite part of its conductor given by \(p\mathcal{O}_{F}\) where \(p\) is a rational prime which remains inert in \(F\). In this setting, the Shintani set \(R^{\tau}(p\mathcal{O}_{F})\) has additional rich structure. Using the notation and tools developed in Section 2, we show that Shintani set \(R^{\tau}(p\mathcal{O}_{F})\) is endowed with a cyclic structure mirroring the multiplicative structure of the multiplicative group of the finite field \(\mathbb{F}_{p^{n}}\). This combinatorial structure underlies the proof of Theorem 1.1. **Lemma 3.1**.: _Assume the notation and hypotheses above. Let \(\varphi:\mathbb{F}_{p^{n}}\to\mathcal{O}_{F}/p\mathcal{O}_{F}\) be the map defined in (3.1) and let \(\pi_{p,\tau}^{*}:R^{\tau}(p\mathcal{O}_{F})/\ker(\pi_{p,\tau})\to\mathcal{O}_{ F}/p\mathcal{O}_{F}\) be the bijection defined in (2.8). Then we have that the map_ \[\Psi\coloneqq(\pi_{p,\tau}^{*})^{-1}\circ\varphi\colon\mathbb{F}_{p^{n}}\to R ^{\tau}(p\mathcal{O}_{F})/\ker(\pi_{p,\tau}) \tag{3.2}\] _is a homomorphism of additive groups and \(\mathbb{F}_{p^{n}}^{\times}\) and \(R^{\tau}(p\mathcal{O}_{F})^{\times}\) are in bijective correspondence._ Proof.: Without loss of generality, fix \(\tau\in S_{n-1}\) such that \(w_{\tau}\neq 0\). By the discussion following Proposition 2.4, \(R^{\tau}(p\mathcal{O}_{F})^{\times}\) and \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\) are in bijective correspondence under \(\pi_{p,\tau}^{*}\). We have that \(\mathbb{Z}[\theta_{F}]/p\mathbb{Z}[\theta_{F}]\cong\mathbb{F}_{p}[x]/(\overline {g})\). 
Since \(p\nmid[\mathcal{O}_{F}:\mathbb{Z}[\theta_{F}]]\) we may apply Dedekind's criterion to \(p\) to obtain \(\mathbb{Z}[\theta_{F}]/p\mathbb{Z}[\theta_{F}]\cong\mathcal{O}_{F}/p\mathcal{O }_{F}\) and hence since \(p\mathcal{O}_{F}\) is a prime ideal, \(\varphi\) gives the following isomorphism of residue fields \[\mathbb{F}_{p}[x]/(\overline{g})\cong\mathcal{O}_{F}/p\mathcal{O}_{F}\] via \(x\mapsto\theta_{F}+p\mathcal{O}_{F}\). Since \(p\) remains inert, we have that the inertial degree \(f_{p\mathcal{O}_{F}|p}=[\mathcal{O}_{F}:p\mathcal{O}_{F}]\) equals \(n\) and hence \(\deg\overline{g}=n\) so that \[\mathbb{F}_{p^{n}}\cong\mathbb{F}_{p}(\theta_{F})=\mathbb{F}_{p}[x]/(\overline {g})\cong\mathcal{O}_{F}/p\mathcal{O}_{F}.\] Therefore, the composition \(\Psi\) gives an additive group isomorphism from \(R^{\tau}(p\mathcal{O}_{F})/\mathrm{ker}(\pi_{p,\tau})\) to \(\mathbb{F}_{p^{n}}\). Since \(1_{R^{\tau}(p\mathcal{O}_{F})}+\mathrm{ker}(\pi_{p,\tau})\) is the additive identity of \(R^{\tau}(p\mathcal{O}_{F})/\mathrm{ker}(\pi_{p,\tau})\), the map \(\Psi|_{\mathbb{F}_{p^{n}}^{\times}}=(\pi_{p,\tau}^{*})^{-1}\circ\varphi|_{ \mathbb{F}_{p^{n}}^{\times}}\) is a bijection between \(\mathbb{F}_{p^{n}}^{\times}\) and \(R^{\tau}(p\mathcal{O}_{F})^{\times}\). The multiplicative group of a finite field is cyclic, and so we may fix \(\rho\) such that \(\mathbb{F}_{p^{n}}^{\times}=\langle\rho\rangle\). Recall that we denoted by \(h_{\rho}(x)=x^{n}+p_{n-1}x^{n-1}+\cdots+p_{0}\in\mathbb{Z}[x]\) the minimal polynomial of \(\rho\) over \(\mathbb{Q}\) whose reduction mod \(p\) is the minimal polynomial for \(\rho\) over \(\mathbb{F}_{p}\). Moreover, using the coefficients of \(h_{\rho}\), we defined a matrix \(A_{F,\rho}(z)\) lying in the rational function field \(\mathbb{Q}(z)\) as \[A_{F,\rho}(z)=\begin{pmatrix}1&0&0&\cdots&0&0&zp_{0}\\ -z&1&0&\cdots&0&0&zp_{1}\\ 0&-z&1&\cdots&0&0&zp_{2}\\ 0&0&-z&\cdots&0&0&zp_{3}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&-z&1&zp_{n-2}\\ 0&0&0&\cdots&0&-z&1+zp_{n-1}\end{pmatrix}.\] **Lemma 3.2**.: _Assume the notation and hypotheses above. The determinant of \(A_{F,\rho}(z)\) is given by_ \[\det(A_{F,\rho}(z))=1+zp_{n-1}+z^{2}p_{n-2}+\cdots+z^{n}p_{0}.\] Proof.: Let \(Z^{(n)}\) be the \(n\times n\)-matrix with \(-z\) on the lower diagonal, ones on the diagonal and zeros elsewhere. We claim that \(\det(Z^{(n)})=(-1)^{n}z^{n}\). It can be checked that \(\det(Z^{(2)})=z^{2}\). Expanding \(\det(Z^{(n+1)})\) along the first row using the cofactor formula yields \[Z^{(n+1)}=\begin{vmatrix}-z&1&0&\cdots&0&0\\ 0&-z&1&\cdots&0&0\\ 0&0&-z&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&-z&1\\ 0&0&0&\cdots&0&-z\end{vmatrix}=-z\cdot Z^{(n)}+(-1)^{n+3}\begin{vmatrix}0&1& \cdots&0&0\\ 0&-z&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&-z&1\\ 0&0&\cdots&0&-z\end{vmatrix}=(-1)^{n+1}z^{n+1}\] so that by induction, \(Z^{(n)}=(-1)^{n}z^{n}\) for all \(n\geq 2\). Now let \(\deg(h_{\rho})=n\) and let \(C_{i,j}^{(n)}\) be the \((i,j)^{th}\) cofactor of \(A_{F,\rho}(z)\). Assume for the purpose of induction that for \(n\geq 3\), \(\det(A_{F,\rho}(z))=1+\sum_{i=1}^{n}z^{i}p_{n-i}\). For the case \(n=2\), \[\det(A_{F,\rho}(z))=\begin{vmatrix}1&zp_{0}\\ -z&1+zp_{1}\end{vmatrix}=1+zp_{1}+z^{2}p_{0}.\] Now assume \(h_{\rho}\) has degree \(n+1\). 
Expanding the \((n+1)\times(n+1)\) matrix \(A_{F,\rho}(z)\) along the first row using the cofactor formula, we have that \[\det(A_{F,\rho}(z)) =\begin{vmatrix}1&\cdots&0&0&zp_{1}\\ -z&\cdots&0&0&zp_{2}\\ \vdots&\ddots&\vdots&\vdots&\vdots\\ 0&\cdots&-z&1&zp_{n-1}\\ 0&\cdots&0&-z&1+zp_{n}\end{vmatrix}+(-1)^{n+2}zp_{0}Z^{(n)}\] \[=1+zp_{n}+z^{2}p_{n-1}+\cdots+z^{n}p_{1}+(-1)^{n+2}zp_{0}\cdot(-1 )^{n}z^{n}\] \[=1+zp_{n}+\cdots+z^{2}p_{n-1}+\cdots+z^{n+1}p_{0},\] since \(C_{1,1}^{(n+1)}\) is the determinant of the \(n\times n\)-matrix \(A_{F,\rho}(z)\) with the indices of the coefficients shifted by \(1\). Therefore, by induction, \(\det(A_{F,\rho})=1+\sum_{i=1}^{n}z^{i}p_{n-i}\) for \(\deg(h_{\rho})=n\geq 2\). ### Proof of Theorem 1.1 Define the map \(\Phi:\mathbb{F}_{p^{n}}\to R^{\tau}(p\mathcal{O}_{F})/\ker(\pi_{p,\tau})\) as the composition of the following sequence of maps \[\Phi\coloneqq\mathbb{F}_{p^{n}}\stackrel{{ T_{\mathbb{F}_{p^{n}}, \mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}}}{{\longrightarrow}}\mathcal{O} _{F}/p\mathcal{O}_{F}\stackrel{{ T_{F,\mathcal{B}_{\theta_{F}}, \mathcal{B}_{F,\tau}}}}{{\longrightarrow}}\mathcal{O}_{F}/p\mathcal{O}_{F} \stackrel{{(\pi_{p,\tau}^{n})^{-1}}}{{\longrightarrow}}R^{\tau}(p \mathcal{O}_{F})/\ker(\pi_{p,\tau}),\] where \(T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\) is an \(\mathbb{F}_{p}\)-vector space endomorphism taking the basis \(\mathcal{B}_{\rho}\) to the basis \(\mathcal{B}_{\theta_{F}}\) for \(\mathbb{F}_{p^{n}}\) over \(\mathbb{F}_{p}\) and \(T_{F,\mathcal{B}_{\theta_{F}},\mathcal{B}_{F,\tau}}\) is the vector space endomorphism taking the basis \(\mathcal{B}_{\theta_{F}}\) to the basis \(\mathcal{B}_{F,\tau}\) for \(F\) over \(\mathbb{Q}\). For notational convenience, we also denote the change of basis matrix in \(\mathrm{GL}_{n}(\mathbb{F}_{p})\) representing \(T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\) by \(T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\). Similarly, we use the same notation \(T_{F,\mathcal{B}_{\theta_{F}},\mathcal{B}_{F,\tau}}\) to denote the matrix in \(\mathrm{GL}_{n}(\mathbb{Q})\) representing the endomorphism \(T_{F,\mathcal{B}_{\theta_{F}},\mathcal{B}_{F,\tau}}\). Then we explicitly have that \[\Phi(\rho^{n+m})=\left(\sum_{i=1}^{n}\left\{\frac{\left(T_{F, \mathcal{B}_{\theta_{F}},\mathcal{B}_{F,\tau}}T_{\mathbb{F}_{p^{n}},\mathcal{B }_{\rho},\mathcal{B}_{\theta_{F}}}\mathbf{\overline{x}}(m)\right)_{i}}{p} \right\}_{I_{\tau,i}}f_{\tau,i}\right)+\ker(\pi_{p,\tau}),\] where \(\mathbf{\overline{x}}(m)=(\overline{x}_{1}(m),\overline{x}_{2}(m),\cdots, \overline{x}_{n}(m))\) is the minimal non-negative representative in \(\mathbb{F}_{p^{n}}\) of the vector \((x_{1}(m),\cdots,x_{n}(m))\) holding the coefficients of \(\rho^{n+m}\) in the basis \(\mathcal{B}_{\rho}\) for \(\mathbb{F}_{p^{n}}\). We will show that the \(n\)-tuple \((x_{1}(m),\cdots,x_{n}(m))\) is generated by \(n\) unique rational functions \(X_{F,\rho,1}(z),\cdots,X_{F,\rho,n}(z)\) which can be explicitly solved for using the coefficients of the minimal polynomial \(h_{\rho}(x)\) as described below. 
Since \(p\) remains inert in \(F\) and \(p\nmid[\mathcal{O}_{F}:\mathbb{Z}[\theta_{F}]]\) by Dedekind's criterion, we have \[\mathbb{Z}[\theta_{F}]/p\mathbb{Z}[\theta_{F}]\cong\mathbb{F}_{p}[x]/( \overline{q})=\mathbb{F}_{p}(\theta_{F})\cong\mathcal{O}_{F}/p\mathcal{O}_{F}.\] Therefore \(\mathcal{B}_{\theta_{F}}\) forms a power basis for \(\mathbb{F}_{p^{n}}\) as a \(\mathbb{F}_{p}\)-vector space of dimension \(n\). For a primitive element \(\rho\) of \(\mathbb{F}_{p^{n}}^{\times}\), the set \(\mathcal{B}_{\rho}\) is a power basis for \(\mathbb{F}_{p^{n}}\) over \(\mathbb{F}_{p}\). Hence there exists a change of basis matrix \(T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\in\mathrm{ GL}_{n}(\mathbb{F}_{p})\) taking an expression for \(\rho^{n+m}\), \(m>0\) in the basis \(\mathcal{B}_{\rho}\) to an expression of \(\rho^{n+m}\) in the basis \(\mathcal{B}_{\theta_{F}}\) by letting \(T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\) act on the coefficient vector of \(\rho^{n+m}\) for \(m>0\) in basis \(\mathcal{B}_{\rho}\). Let \(\varphi\) be defined as in (3.1) and let \(\pi_{p,\tau}^{*}\) be the bijection defined in (2.8). Since \([F:\mathbb{Q}]=n\), we may view \(F\) as a \(\mathbb{Q}\)-vector space of dimension \(n\). Hence there exists a change of basis matrix \(T_{F,\mathcal{B}_{\theta_{F}},\mathcal{B}_{F,\tau}}\) taking the basis \(\mathcal{B}_{\theta_{F}}\) to the basis \(\mathcal{B}_{F,\tau}\). The powers of \(\rho\) larger than \(n-1\) can be written in the basis \(\mathcal{B}_{\rho}\) using the relation given by \(h_{\rho}\), namely, \[\rho^{n}=-p_{0}-p_{1}\rho-\cdots-p_{n-1}\rho^{n-1}\] so that \[\rho^{n+1}=-p_{n-1}p_{0}+\left(p_{0}-p_{n-1}p_{1}\right)\rho+\cdots+\left(p_{n -2}-p_{n-1}p_{n-1}\right)\rho^{n-1}.\] Let \(c_{i+1}(0)=-p_{i}\) for \(i=1,\cdots,n\) and for \(m>0\), assume the coefficients are properly chosen so that \[\rho^{n+m}=c_{1}(m)+c_{2}(m)\rho+\cdots+c_{n}(m)\rho^{n-1}. \tag{3.3}\] Then we have that \[\rho^{n+m+1} =c_{1}(m)\rho+c_{2}(m)\rho^{2}+\cdots+c_{n}(m)\rho^{n} \tag{3.4}\] \[=-c_{n}(m)p_{0}+\left(c_{1}(m)-c_{n}(m)p_{1}\right)\rho+\cdots+ \left(c_{n-1}(m)-c_{n}(m)p_{n-1}\right)\rho^{n-1},\] and hence by induction, for \(1\leq i\leq n\) and \(m\geq 0\), the coefficients \(c_{i}(m)\) satisfy (3.3). Using (3.4) and the coefficients of \(h_{\rho}\), we define the following matrix \(A_{F,\rho}(z)\) and vector \(\mathbf{v}_{F,\rho}\), both lying in \(\mathbb{Q}(z)\), by \[A_{F,\rho}(z)\coloneqq\begin{pmatrix}1&0&0&\cdots&0&0&zp_{0}\\ -z&1&0&\cdots&0&0&zp_{1}\\ 0&-z&1&\cdots&0&0&zp_{2}\\ 0&0&-z&\cdots&0&0&zp_{3}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&-z&1&zp_{n-2}\\ 0&0&0&\cdots&0&-z&1+zp_{n-1}\end{pmatrix}\qquad\text{and}\qquad\mathbf{v}_{F,\rho}\coloneqq\begin{pmatrix}p_{0}\\ p_{1}\\ p_{2}\\ p_{3}\\ \vdots\\ p_{n-2}\\ p_{n-1}\end{pmatrix}.\] Since \(\det(A_{F,\rho}(z))=1+zp_{n-1}+z^{2}p_{n-2}+\cdots+z^{n}p_{0}\) is a nonzero rational function (see Lemma 3.2), by Cramer's Rule, there exists a unique vector of rational functions with integer coefficients, say \[\mathbf{X}_{F,\rho}\coloneqq(X_{F,\rho,1}(z),X_{F,\rho,2}(z),\cdots,X_{F,\rho, n}(z))\] that satisfies \(A_{F,\rho}(z)\mathbf{X}_{F,\rho}=\mathbf{v}_{F,\rho}\). For notational convenience, we drop the dependence of the coefficients on \(F\) and \(\rho\), and let \(x_{i}(m)\) be the \(m^{th}\) coefficient of \(X_{F,\rho,i}(z)\) as follows \[X_{F,\rho,i}(z)=\sum_{m\geq 0}x_{F,\rho,i}(m)z^{m}=\sum_{m\geq 0}x_{i}(m)z^{m}. 
\tag{3.5}\] By Cramer's Rule, each of the rational functions \(X_{F,\rho,1}(z),\cdots,X_{F,\rho,n}(z)\) has denominator \(1+zp_{n-1}+\cdots+z^{n}p_{0}\) of degree \(n\). Note that \(p_{0}\neq 0\) since \(h_{\rho}\) is irreducible. Setting \(z=0\), one verifies that for all \(1\leq i\leq n\), we have that \(x_{i}(0)\) equals \(c_{i}(0)\). Moreover, using this and the fact that \(\mathbf{X}_{F,\rho}\) satisfies \(A_{F,\rho}(z)\mathbf{X}_{F,\rho}=\mathbf{v}_{F,\rho}\), equating coefficients of \(z^{m}\) one can verify that for all \(1\leq i\leq n\) and all \(m\geq 0\), we have that \(x_{i}(m)\) equals \(c_{i}(m)\). Define \(\overline{\mathbf{x}}(m)\coloneqq(\overline{x}_{1}(m),\cdots,\overline{x}_{n }(m))^{T}\) to be the minimal non-negative representative in \(\mathbb{F}_{p}^{n}\) of the \(n\)-tuple of \(m^{th}\) coefficients of each \(X_{F,\rho,i}(z)\) viewed as a power series. We can perform a change of basis to express \(\rho^{n+m}\) in the basis \(\mathcal{B}_{\theta_{F}}\) for \(\mathbb{F}_{p^{n}}\) corresponding to a modified vector \(T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\overline{ \mathbf{x}}(m)\). Under the isomorphism \(\varphi\), after applying the change of basis \(T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\), we have that \[\varphi(\rho^{n+m})=(1,\theta_{F},\cdots,\theta_{F}^{n-1})\cdot T_{\mathbb{F} _{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{\theta_{F}}}\overline{\mathbf{x}}(m )+p\mathcal{O}_{F}.\] Now applying the change of basis \(T_{F,\mathcal{B}_{\theta_{F}},\mathcal{B}_{F,\tau}}\), we obtain an expression for \(\varphi(\rho^{n+m})\) in the basis \(\mathcal{B}_{F,\tau}\). Namely, we have that \[\varphi(\rho^{n+m})=(f_{\tau,1},\cdots,f_{\tau,n})\cdot T_{F, \mathcal{B}_{\theta_{F}},\mathcal{B}_{F,\tau}}T_{\mathbb{F}_{p^{n}},\mathcal{ B}_{\rho},\mathcal{B}_{\theta_{F}}}\overline{\mathbf{x}}(m)+p\mathcal{O}_{F}.\] Under the mapping \((\pi_{p,\tau}^{*})^{-1}\), \(\varphi(\rho^{n+m})\) is sent to the unique coset for \(\varphi(\rho^{n+m})/p\) in \(R^{\tau}(p\mathcal{O}_{F})/\ker(\pi_{p,\tau})\) given by \[\left(\sum_{j=1}^{n}\left\{\frac{(T_{F,\mathcal{B}_{\theta_{F}}, \mathcal{B}_{F,\tau}}T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{ \theta_{F}}}\overline{\mathbf{x}}_{F,\rho}(m))_{j}}{p}\right\}_{I_{\tau,j}}f_{ \tau,j}\right)+\ker(\pi_{p,\tau}).\] Since \(\rho\) has order \(p^{n}-1\) in \(\mathbb{F}_{p^{n}}\), each element of the set \(\{\rho^{n+m}\ :\ 1\leq m\leq p^{n}-1\}\) is distinct. Since \(\#(R^{\tau}(p\mathcal{O}_{F})/\ker(\pi_{p,\tau}))=\#(\mathbb{F}_{p^{n}})\) and \(\Phi\) is an isomorphism, we have that \[R^{\tau}(p\mathcal{O}_{F})/\ker(\pi_{p,\tau})=\Phi(0_{\mathbb{F}_{p^{n}}}) \cup\operatorname{Im}_{\Phi}(\mathbb{F}_{p^{n}}^{\times})=\{1_{R^{\tau}(p \mathcal{O}_{F})}\}\cup\{\Phi(\rho^{n+m}):1\leq m\leq p^{n}-1\}.\] Therefore the set \[C_{\tau}\coloneqq\{1_{R^{\tau}(p\mathcal{O}_{F})}\}\cup\left\{\vartheta(\rho^{n+m} )\coloneqq\sum_{i=1}^{n}\left\{\frac{\left(T_{F,\mathcal{B}_{\theta_{F}}, \mathcal{B}_{F,\tau}}T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B}_{ \theta_{F}}}\overline{\mathbf{x}}(m)\right)_{j}}{p}\right\}_{I_{\tau,j}}f_{ \tau,j}\ :\ 1\leq m\leq p^{n}-1\right\}\] forms a complete set of coset representatives for the quotient group \(R^{\tau}(p\mathcal{O}_{F})/\ker(\pi_{p,\tau})\). 
Put \[\ker(\pi_{p,\tau})=\left\{w(i)\coloneqq\sum_{j=1}^{n}w_{j}(i)f_{\tau,j}\ :\ 1\leq i\leq\#\ker(\pi_{p,\tau})\right\}.\] Then we have that \[R^{\tau}(p\mathcal{O}_{F})-\{1_{R^{\tau}(p\mathcal{O}_{F})}\}=\{\vartheta( \rho^{n+m})\oplus w(i)\ :\ 1\leq i\leq\#\ker(\pi_{p,\tau}),1\leq m\leq p^{n}-1\}.\] For \(1\leq j\leq n\), define the coefficients \[\tilde{x}_{\tau,j}(i,m)\coloneqq\left\{\left\{\frac{(T_{F,\mathcal{B}_{\theta_ {F}},\mathcal{B}_{F,\tau}}T_{\mathbb{F}_{p^{n}},\mathcal{B}_{\rho},\mathcal{B }_{\theta_{F}}}\overline{\mathbf{x}}(m))_{j}}{p}\right\}_{I_{\tau,j}}+w_{j}(i) \right\}_{I_{\tau,j}}.\] Then by definition of the group law \(\oplus\) we have that \[\vartheta(\rho^{n+m})\oplus w(i)=\tilde{x}_{\tau,1}(i,m)f_{\tau,1}+\cdots+ \tilde{x}_{\tau,n}(i,m)f_{\tau,n}.\] Hence the set of such \(n\)-tuples \((\tilde{x}_{\tau,1}(i,m),\cdots,\tilde{x}_{\tau,n}(i,m))\), one for each \(1\leq i\leq\#\ker(\pi_{p,\tau})\) and \(1\leq m\leq p^{n}-1\), is in bijective correspondence with \(R^{\tau}(p\mathcal{O}_{F})-\{1_{R^{\tau}(p\mathcal{O}_{F})}\}\) under the mapping \[(\tilde{x}_{\tau,1}(i,m),\cdots,\tilde{x}_{\tau,n}(i,m))\mapsto\tilde{x}_{\tau,1}(i,m)f_{\tau,1}+\cdots+\tilde{x}_{\tau,n}(i,m)f_{\tau,n}. \tag{3.6}\] Since \(\chi_{F}\left((1_{R^{\tau}(p\mathcal{O}_{F})})p\mathcal{O}_{F}\right)=0\) and by Lemma 2.6, the value of \(\chi_{F}\) is invariant under translation by an element of the kernel, we may neglect the coset \(1_{R^{\tau}(p\mathcal{O}_{F})}+\ker(\pi_{p,\tau})\). Moreover, by Lemma 2.6, the value of \(\chi_{F}\) along a coset of \(\ker(\pi_{p,\tau})\) is determined by the character value of a distinguished representative. Therefore, it suffices to consider the value of \(\chi_{F}\) at each element of \(C_{\tau}\). By Lemma 2.5, \(\vartheta(\rho^{n+m})\) differs from any element of the coset \(\Phi(\rho^{n+m})\) by an algebraic integer. Therefore, \[\varphi(\rho^{n+m})=(\pi_{p,\tau}^{*}\circ\Phi)(\rho^{n+m})=p\cdot\vartheta( \rho^{n+m})+p\mathcal{O}_{F}.\] The restriction of \(\varphi\) to \(\mathbb{F}_{p^{n}}^{\times}\) gives a multiplicative group isomorphism between \(\mathbb{F}_{p^{n}}^{\times}\) and \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\). Therefore, we have that \[p\cdot\vartheta(\rho^{n+m})+p\mathcal{O}_{F}=\varphi(\rho^{n+m})=(\varphi( \rho))^{n+m}=(p\cdot\vartheta(\rho)+p\mathcal{O}_{F})^{n+m}=(p\cdot\vartheta( \rho))^{n+m}+p\mathcal{O}_{F},\] and hence \[x\coloneqq p\cdot\vartheta(\rho^{n+m})-p^{n+m}\cdot\vartheta(\rho)^{n+m}\in p \mathcal{O}_{F}.\] Since \(\chi_{F}\) is a multiplicative character of with finite part of its conductor given by \(p\mathcal{O}_{F}\), \[\chi_{F}\left((\vartheta(\rho^{n+m}))p\mathcal{O}_{F}\right)=\chi_{F}\left((p \cdot\vartheta(\rho^{n+m})-x)\mathcal{O}_{F}\right)=(\chi_{F}((\vartheta(\rho ))p\mathcal{O}_{F}))^{n+m}\,.\] Since \(\chi_{F}\) is a character of the narrow ray class group mod \(p\mathcal{O}_{F}\), there exists a character \(\chi_{f}:(\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\to\mathbb{S}^{1}\) and a subset \(S\) of \(\{\sigma_{1},\cdots,\sigma_{n}\}\) so that \[\chi_{F}(x\mathcal{O}_{F})=\chi_{f}(x)\prod_{\sigma_{i}\in S}\operatorname{sgn}( \sigma_{i}(x))\] for any \(x\in\mathcal{O}_{F}\) (see e.g., [14, p. 209]). 
Since every element of the Shintani set is totally positive, for any \(x\in R^{\tau}(p\mathcal{O}_{F})\), we obtain \[\chi_{F}((px)\mathcal{O}_{F})=\chi_{f}(px),\] which yields \[\chi_{F}((p\cdot\vartheta(\rho))\mathcal{O}_{F})=\chi_{f}(p\cdot \vartheta(\rho)).\] Since \(\varphi(\rho)=p\cdot\vartheta(\rho)+p\mathcal{O}_{F}\), we have that \(p\cdot\vartheta(\rho)\) lies in a nontrivial equivalence class mod \(p\mathcal{O}_{F}\) which generates \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\), and hence has order \(p^{n}-1\). Moreover, since \(\chi_{f}\) is a multiplicative homomorphism, a generator of \((\mathcal{O}_{F}/p\mathcal{O}_{F})^{\times}\) is mapped to a primitive \(d^{th}\) root of unity of order \(d>1\) dividing \(p^{n}-1\). Therefore, \[\chi_{F}((p\cdot\vartheta(\rho)))=\exp\left(\frac{2\pi ik}{d} \right),\] where \(\exp\left(\frac{2\pi ik}{d}\right)\) is a primitive \(d^{th}\) root of unity of order \(d>1\) dividing \(p^{n}-1\). By [15, Corollary 3], \[L(s,\chi_{F})=N(p\mathcal{O}_{F})^{-s}\sum_{\begin{subarray}{c} \tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{z\in R^{\tau}(p\mathcal{O}_{F})} \chi_{F}((z)p\mathcal{O}_{F})\zeta(s,A^{\tau},\mathfrak{t}_{z,\tau}).\] Using (3.6) to run through the Shintani set and grouping by cosets of \(\ker(\pi_{p,\tau})\) in \(R^{\tau}(p\mathcal{O}_{F})\), we have that \[L(s,\chi_{F})=N(p\mathcal{O}_{F})^{-s}\sum_{\begin{subarray}{c} \tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}\sum_{i=1}^{\#\ker( \pi_{p,\tau})}\chi_{F}((\vartheta(\rho^{n+m})\oplus w_{i})p\mathcal{O}_{F}) \zeta\left(s,A^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)\right).\] By Lemma 2.6, \[L(s,\chi_{F})=N(p\mathcal{O}_{F})^{-s}\sum_{\begin{subarray}{c} \tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}\sum_{i=1}^{\#\ker( \pi_{p,\tau})}\chi_{F}((\vartheta(\rho^{n+m})p\mathcal{O}_{F})\zeta\left(s,A ^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)\right).\] By Lemma 2.5, we have that \[L(s,\chi_{F})=N(p\mathcal{O}_{F})^{-s}\sum_{\begin{subarray}{c} \tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}\chi_{F}((\vartheta(\rho )))^{n+m}\sum_{i=1}^{\#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta \left(s,A^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)\right).\] Since \(\chi_{F}((\varphi(\rho)))=\exp\left((2\pi ik)/d\right)\), we obtain \[L(s,\chi_{F})=N(p\mathcal{O}_{F})^{-s}\sum_{\begin{subarray}{c} \tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}\exp\left(\frac{2\pi ik( n+m)}{d}\right)^{\#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta \left(s,A^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)\right).\] ## 4. Proof of Corollaries In this section, we will prove Corollaries 1.2 and 1.3 using Theorem 1.1 and the tools introduced in Section 2. ### Somes Lemmas In order to prove Corollaries 1.2 and 1.3, we require two lemmas. **Lemma 4.1**.: _Let \(F\) be a totally real field with narrow class number \(1\) and let \(K=F(\sqrt{-p})\) where \(p\) is a rational prime \(p\equiv 3\mod 4\) which remains inert in \(F\). Then \(\mathfrak{D}_{K/F}=p\mathcal{O}_{F}\)._ Proof.: We show that \(\{1,\frac{1+\sqrt{-p}}{2}\}\) is an integral basis for \(K\) over \(F\). Since \(h_{F}=1\), \(\mathcal{O}_{K}\) is a free \(\mathcal{O}_{F}\)-module of rank \(2\)[11, Corollary 3 on p. 386]. Therefore, there exist \(\omega_{1},\omega_{2}\in K\) such that \(\mathcal{O}_{K}=\omega_{1}\mathcal{O}_{F}\oplus\omega_{2}\mathcal{O}_{F}\). In particular, \(\{\omega_{1},\omega_{2}\}\) is an \(F\)-basis of \(K\). 
Since \(1\in\mathcal{O}_{K}\) and \(\frac{1+\sqrt{-p}}{2}\in\mathcal{O}_{K}\), there exist \(m_{11},m_{12},m_{21},m_{22}\in\mathcal{O}_{F}\) such that \[m_{11}\omega_{1}+m_{12}\omega_{2}=1\] \[m_{21}\omega_{1}+m_{22}\omega_{2}=\frac{1+\sqrt{-p}}{2}.\] Namely, \[\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix}\begin{pmatrix}\omega_{1}\\ \omega_{2}\end{pmatrix}=\begin{pmatrix}1\\ \frac{1+\sqrt{-p}}{2}\end{pmatrix}\] where \(M=(m_{ij})\in\mathbb{M}_{2\times 2}(\mathcal{O}_{F})\). We now show that \(M\in\operatorname{GL}_{2}(\mathcal{O}_{F})\). If \(M\) is invertible in \(\mathcal{O}_{F}\), then \(M\) takes the integral basis of \(\mathcal{O}_{F}\) to another integral basis. By definition, \[\operatorname{Disc}\biggl{(}\biggl{\{}1,\frac{1+\sqrt{-p}}{2}\biggr{\}}\biggr{)} =\det\begin{pmatrix}\operatorname{Tr}_{K/F}(1)&\operatorname{Tr}_{K/F} \biggl{(}\frac{1+\sqrt{-p}}{2}\biggr{)}\\ \operatorname{Tr}_{K/F}\biggl{(}\frac{1+\sqrt{-p}}{2}\biggr{)}& \operatorname{Tr}_{K/F}\biggl{(}\biggl{(}\frac{1+\sqrt{-p}}{2}\biggr{)}^{2} \biggr{)}=-p\end{pmatrix}.\] We also have that \[\operatorname{Disc}\biggl{(}\biggl{\{}1,\frac{1+\sqrt{-p}}{2} \biggr{\}}\biggr{)} =\det\left(M\begin{pmatrix}\operatorname{Tr}_{K/F}(\omega_{1}^{2 })&\operatorname{Tr}_{K/F}(\omega_{1}\omega_{2})\\ \operatorname{Tr}_{K/F}(\omega_{2}\omega_{1})&\operatorname{Tr}_{K/F}( \omega_{2}^{2})\end{pmatrix}M^{T}\right)\] \[=\det(M)^{2}\cdot\operatorname{Disc}(\{\omega_{1},\omega_{2}\}).\] Hence we have \[-p=\det(M)^{2}\cdot\operatorname{Disc}(\{\omega_{1},\omega_{2}\}).\] Since \(p\) remains inert in \(\mathcal{O}_{F}\), \(p\) cannot divide \(\det(M)\). Since \(F\) is a UFD, cancellation then shows \(\det(M)\) must be a unit of \(\mathcal{O}_{F}\). Notice this also holds for primes which are unramified in \(F\). Therefore, \(M\in\operatorname{GL}_{2}(\mathcal{O}_{F})\) and therefore \(\{1,\frac{1+\sqrt{-p}}{2}\}\) forms a \(\mathcal{O}_{F}\)-integral basis of \(K\). Thus we have that \[\mathfrak{D}_{K/F}=\left(\det\begin{pmatrix}1&\frac{1+\sqrt{-p}}{2}\\ 1&\frac{1-\sqrt{-p}}{2}\end{pmatrix}\right)^{2}\mathcal{O}_{F}=p\mathcal{O}_{F}.\] **Lemma 4.2**.: _Let \(F\) a totally real field with \(h_{F}^{+}=1\) and let \(K=F(\sqrt{p})\) where \(p\) is a rational prime \(p\equiv 1\mod 4\) which remains inert in \(F\). Then \(\mathfrak{D}_{K/F}=p\mathcal{O}_{F}\)._ Proof.: Applying the same argument as in Lemma (4.1) to \(\{1,\sqrt{p}\}\) shows \(\{1,\sqrt{p}\}\) forms an \(\mathcal{O}_{F}\)-integral basis of \(K\), which yields \[\mathfrak{D}_{K/F}=\left(\det\begin{pmatrix}1&\sqrt{p}\\ 1&\sqrt{p}\end{pmatrix}\right)^{2}\mathcal{O}_{F}=p\mathcal{O}_{F}.\] ### Proof of Corollary 1.2 When \(\chi_{F}\) in Theorem 1.1 is taken to be the Hecke character \(\chi_{K/F}\) associated to \(K/F\) by class field theory, \(\chi_{K/F}((z)p\mathcal{O}_{F})\in\{0,\pm 1\}\) for all \(z\in F\). By the argument in the proof of Theorem 1.1, \(\chi_{K/F}((\vartheta(\rho))p\mathcal{O}_{F})\) is a primitive root of unity of order \(d>1\) dividing \(p^{n}-1\). Therefore, \(\chi_{K/F}((\vartheta(\rho))p\mathcal{O}_{F})\) equals \(-1\), and hence we have that \[\chi_{K/F}((\vartheta(\rho^{n+m}))p\mathcal{O}_{F})=(-1)^{n+m}. 
\tag{4.1}\] By Theorem 1.1 applied to \(\chi_{K/F}\), using (4.1), we have that \[L(0,\chi_{K/F})=(-1)^{n}\sum_{\tau\in S_{n-1}}w_{\tau}\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{\#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta\left(0,A^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)\right).\] Now [16, Theorem 2.1] (see 2.7) yields \[\zeta(0,A^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)) =\frac{1}{n}\sum_{j=1}^{n}\zeta_{n}\bigg{(}0,(\sigma_{j}(f_{\tau,1}),\sigma_{j}(f_{\tau,2}),\cdots,\sigma_{j}(f_{\tau,n})),\sum_{k=1}^{n}\tilde{x}_{\tau,k}(i,m)\sigma_{j}(f_{\tau,k})\bigg{)}\] \[=\frac{(-1)^{n}}{n}\sum_{j=1}^{n}\sum_{\begin{subarray}{c}(l_{1},\ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ l_{1}+\cdots+l_{n}=n\end{subarray}}\sigma_{j}(f_{\tau,1}^{l_{1}-1})\sigma_{j}(f_{\tau,2}^{l_{2}-1})\cdots\sigma_{j}(f_{\tau,n}^{l_{n}-1})\prod_{k=1}^{n}\frac{B_{l_{k}}(\tilde{x}_{\tau,k}(i,m))}{l_{k}!}\] \[=\frac{(-1)^{n}}{n}\sum_{\begin{subarray}{c}(l_{1},\ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ l_{1}+\cdots+l_{n}=n\end{subarray}}\prod_{k=1}^{n}\frac{B_{l_{k}}(\tilde{x}_{\tau,k}(i,m))}{l_{k}!}\sum_{j=1}^{n}\prod_{k=1}^{n}\sigma_{j}(f_{\tau,k}^{l_{k}-1})\] \[=\frac{(-1)^{n}}{n}\sum_{\begin{subarray}{c}(l_{1},\ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ l_{1}+\cdots+l_{n}=n\end{subarray}}\prod_{k=1}^{n}\frac{B_{l_{k}}(\tilde{x}_{\tau,k}(i,m))}{l_{k}!}\operatorname{Tr}_{F/\mathbb{Q}}\bigg{(}\prod_{k=1}^{n}f_{\tau,k}^{l_{k}-1}\bigg{)},\] where \(\zeta_{n}(s,\omega,x)\) is the Barnes multiple zeta-function. Hence we have that \[L(0,\chi_{K/F}) =(-1)^{n}\sum_{\begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{\#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta\left(0,A^{\tau},\tilde{\mathbf{x}}_{\tau}(i,m)\right)\] \[=(-1)^{n}\sum_{\begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}\frac{(-1)^{m+n}}{n}\sum_{i=1}^{\#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\sum_{\begin{subarray}{c}(l_{1},\ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ l_{1}+\cdots+l_{n}=n\end{subarray}}\prod_{k=1}^{n}\frac{B_{l_{k}}(\tilde{x}_{\tau,k}(i,m))}{l_{k}!}\operatorname{Tr}_{F/\mathbb{Q}}\left(\prod_{k=1}^{n}f_{\tau,k}^{l_{k}-1}\right)\] \[=\frac{1}{n}\sum_{\begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{\#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\sum_{\begin{subarray}{c}(l_{1},\ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ l_{1}+\cdots+l_{n}=n\end{subarray}}\prod_{k=1}^{n}\frac{B_{l_{k}}(\tilde{x}_{\tau,k}(i,m))}{l_{k}!}\operatorname{Tr}_{F/\mathbb{Q}}\left(\prod_{k=1}^{n}f_{\tau,k}^{l_{k}-1}\right).\] Next, we consider the factorization \[\zeta_{K}(s)=\zeta_{F}(s)L(s,\chi_{K/F}).\] For the Dedekind zeta function associated to any number field \(K\), the Taylor series expansion at \(s=0\) is given by \[\zeta_{K}(s)=\frac{-h_{K}R_{K}}{w_{K}}s^{r_{1}+r_{2}-1}+O(s^{r_{1}+r_{2}}).\] In our setting, \(F\) is totally real of degree \(n\) and \(K\) is totally imaginary of degree \(2n\). Since \(K\) has no real embeddings and \(F\) has no complex embeddings, \(r_{1,F}+r_{2,F}=n=r_{1,K}+r_{2,K}.\) Therefore, both \(\zeta_{K}(s)\) and \(\zeta_{F}(s)\) have zeroes of order \(n-1\) at \(s=0\).
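The Bernoulli factor above is a finite, mechanically computable sum. As an illustration only, the following minimal sympy sketch enumerates the compositions \((l_{1},\ldots,l_{n})\) of \(n\) and assembles \(\sum\prod_{k}B_{l_{k}}(x_{k})/l_{k}!\); the symbolic inputs are placeholders, not the actual Shintani data \(\tilde{\mathbf{x}}_{\tau}(i,m)\).

```python
from itertools import product
from math import factorial

import sympy as sp

def bernoulli_factor(x, n):
    """Sum over (l_1, ..., l_n) in Z_{>=0}^n with l_1 + ... + l_n = n of
    prod_k B_{l_k}(x_k) / l_k!, the factor multiplying the trace term."""
    total = sp.Integer(0)
    for ls in product(range(n + 1), repeat=n):  # brute-force enumeration
        if sum(ls) != n:
            continue
        term = sp.Integer(1)
        for l_k, x_k in zip(ls, x):
            term *= sp.bernoulli(l_k, x_k) / factorial(l_k)
        total += term
    return sp.expand(total)

# toy case n = 2 with symbolic shifts
x1, x2 = sp.symbols("x1 x2")
print(bernoulli_factor([x1, x2], 2))
```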
Using \(h_{F}=1\) and \(w_{F}=2\) (as \(F\) is totally real), we have that \[\frac{\zeta_{K}(s)}{\zeta_{F}(s)}=\frac{-\frac{h_{K}R_{K}}{w_{K}}s^{n-1}+O(s^{n})}{-\frac{R_{F}}{2}s^{n-1}+O(s^{n})}.\] Dividing through by \(s^{n-1}\) and sending \(s\to 0\), the higher order terms tend to zero, and so we have that \[h_{K}=\frac{w_{K}}{2}\cdot\frac{R_{F}}{R_{K}}\cdot L(0,\chi_{K/F}).\] Using the identity (see e.g. [13, p. 406]) \[\frac{1}{2}\frac{R_{F}}{R_{K}}=\frac{1}{[\mathcal{O}_{F}^{\times}:\mathcal{O}_{F}^{\times,+}][\mathcal{O}_{F}^{\times,+}:N_{K/F}\mathcal{O}_{K}^{\times}]},\] where \(N_{K/F}\mathcal{O}_{K}^{\times}\coloneqq\{N_{K/F}(x)|x\in\mathcal{O}_{K}^{\times}\}\), and combining the work of Shintani [13] and [4, 15] with Theorem 1.1, we obtain \[h_{K}=\frac{1}{n}\cdot\frac{w_{K}}{[\mathcal{O}_{F}^{\times}:\mathcal{O}_{F}^{\times,+}][\mathcal{O}_{F}^{\times,+}:N_{K/F}\mathcal{O}_{K}^{\times}]}\sum_{\begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\] \[\times\left\{\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{\#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\sum_{\begin{subarray}{c}(l_{1},\ldots,l_{n})\in\mathbb{Z}_{\geq 0}^{n}\\ l_{1}+\cdots+l_{n}=n\end{subarray}}\prod_{k=1}^{n}\frac{B_{l_{k}}(\tilde{x}_{\tau,k}(i,m))}{l_{k}!}\operatorname{Tr}_{F/\mathbb{Q}}\left(\prod_{k=1}^{n}f_{\tau,k}^{l_{k}-1}\right)\right\}.\] ### Proof of Corollary 1.3 Since \(K\) is totally real of degree \(2n\), we have that \(r_{1,K}+r_{2,K}=2n\) and hence the Taylor series expansion at \(s=0\) for the Dedekind zeta function associated to \(K\) is given by \[\zeta_{K}(s)=\frac{-h_{K}R_{K}}{w_{K}}s^{2n-1}+O(s^{2n}).\] Since \(F\) is also totally real and \([F:\mathbb{Q}]=n\), we have that \(r_{1,F}+r_{2,F}=n\) and hence \(\zeta_{F}(s)\) has a zero of order \(n-1\) at \(s=0\). Its Taylor series expansion at \(s=0\) is given by \[\zeta_{F}(s)=\frac{-h_{F}R_{F}}{w_{F}}s^{n-1}+O(s^{n}).\] Since both \(K\) and \(F\) are totally real, \(w_{F}=w_{K}=2\). Recall that \(h_{F}=1\). Hence we have that \[L(s,\chi_{K/F})=\frac{\zeta_{K}(s)}{\zeta_{F}(s)}=\frac{-\frac{h_{K}R_{K}}{2}s^{2n-1}+O(s^{2n})}{-\frac{R_{F}}{2}s^{n-1}+O(s^{n})}=\frac{h_{K}R_{K}}{R_{F}}s^{n}+O(s^{n+1}).\] Since \(\zeta_{K}(s)\) has a zero of order \(2n-1\) and \(\zeta_{F}(s)\) has a zero of order \(n-1\) at \(s=0\), the quotient \(L(s,\chi_{K/F})\) has a zero of order \(n\) at \(s=0\), and its \(n^{th}\) derivative at \(s=0\), divided by \(n!\), recovers the leading Taylor coefficient \(h_{K}R_{K}/R_{F}\).
Therefore, the \(n^{th}\) derivative of \(L(s,\chi_{K/F})\) evaluated at \(0\) is given by \[h_{K}\cdot\frac{R_{K}}{R_{F}}=\frac{L^{(n)}(0,\chi_{K/F})}{n!}.\] By Theorem 1.1 combined with (4.1), we have that \[L(s,\chi_{K/F})=(-1)^{n}\cdot N(p\mathcal{O}_{F})^{-s}\sum_{ \begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{ \#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta(s,A^{\tau},\tilde{ \mathbf{x}}_{\tau}(i,m)),\] and differentiating \(n\) times yields \[L^{(n)}(s,\chi_{K/F})=N(p\mathcal{O}_{F})^{-s}\sum_{k=0}^{n} (-1)^{k}\binom{n}{k}\left(\ln(N(p\mathcal{O}_{F}))\right)^{n-k}\\ \times\left\{\sum_{\begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{ \#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta^{(k)}(s,A^{\tau}, \tilde{\mathbf{x}}_{\tau}(i,m))\right\}.\] Therefore, \[h_{K}\cdot\frac{R_{K}}{R_{F}}=\frac{1}{n!}\sum_{k=0}^{n}(-1)^{k} \binom{n}{k}\left(\ln(N(p\mathcal{O}_{F}))\right)^{n-k}\sum_{ \begin{subarray}{c}\tau\in S_{n-1}\\ w_{\tau}\neq 0\end{subarray}}w_{\tau}\sum_{m=1}^{p^{n}-1}(-1)^{m}\sum_{i=1}^{ \#(\mathcal{O}_{F}\cap R^{\tau}(p\mathcal{O}_{F}))}\zeta^{(k)}\left(0,A^{\tau },\tilde{\mathbf{x}}_{\tau}(i,m)\right).\]
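The differentiation above is the Leibniz rule applied to the product \(N(p\mathcal{O}_{F})^{-s}f(s)\), with the overall factor \((-1)^{n}\) absorbed into \((-1)^{k}\) since \((-1)^{n}(-1)^{n-k}=(-1)^{k}\). As a sanity check, the identity can be verified symbolically; a minimal sympy sketch with a generic placeholder function \(f\):

```python
import sympy as sp

s, N = sp.symbols("s N", positive=True)
f = sp.Function("f")  # placeholder for the weighted sum of Shintani zetas
n = 3  # any fixed order works

# left-hand side: n-th derivative of N**(-s) * f(s)
lhs = sp.diff(N**(-s) * f(s), s, n)

# right-hand side: Leibniz expansion, matching the display above
rhs = N**(-s) * sum(
    sp.binomial(n, k) * (-sp.log(N)) ** (n - k) * sp.diff(f(s), s, k)
    for k in range(n + 1)
)

print(sp.simplify(lhs - rhs))  # prints 0
```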
2307.00117
Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control
Our goal is for robots to follow natural language instructions like "put the towel next to the microwave." But getting large amounts of labeled data, i.e. data that contains demonstrations of tasks labeled with the language instruction, is prohibitive. In contrast, obtaining policies that respond to image goals is much easier, because any autonomous trial or demonstration can be labeled in hindsight with its final state as the goal. In this work, we contribute a method that taps into joint image- and goal- conditioned policies with language using only a small amount of language data. Prior work has made progress on this using vision-language models or by jointly training language-goal-conditioned policies, but so far neither method has scaled effectively to real-world robot tasks without significant human annotation. Our method achieves robust performance in the real world by learning an embedding from the labeled data that aligns language not to the goal image, but rather to the desired change between the start and goal images that the instruction corresponds to. We then train a policy on this embedding: the policy benefits from all the unlabeled data, but the aligned embedding provides an interface for language to steer the policy. We show instruction following across a variety of manipulation tasks in different scenes, with generalization to language instructions outside of the labeled data. Videos and code for our approach can be found on our website: https://rail-berkeley.github.io/grif/ .
Vivek Myers, Andre He, Kuan Fang, Homer Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca Dragan, Sergey Levine
2023-06-30T20:09:39Z
http://arxiv.org/abs/2307.00117v2
# Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control ###### Abstract Our goal is for robots to follow natural language instructions like "put the towel next to the microwave." But getting large amounts of labeled data, i.e. data that contains demonstrations of tasks labeled with the language instruction, is prohibitive. In contrast, obtaining policies that respond to image goals is much easier, because any autonomous trial or demonstration can be labeled in hindsight with its final state as the goal. In this work, we contribute a method that taps into joint image- and goal- conditioned policies with language using only a small amount of language data. Prior work has made progress on this using vision-language models or by jointly training language-goal-conditioned policies, but so far neither method has scaled effectively to real-world robot tasks without significant human annotation. Our method achieves robust performance in the real world by learning an embedding from the labeled data that aligns language not to the goal image, but rather to the desired _change_ between the start and goal images that the instruction corresponds to. We then train a policy on this embedding: the policy benefits from all the unlabeled data, but the aligned embedding provides an _interface_ for language to steer the policy. We show instruction following across a variety of manipulation tasks in different scenes, with generalization to language instructions outside of the labeled data. Videos and code for our approach can be found on our website: [https://rail-berkeley.github.io/grif/](https://rail-berkeley.github.io/grif/). Keywords: Instruction Following, Representation Learning, Manipulation ## 1 Introduction Natural language has the potential to be an easy-to-use and powerful form of task specification in robotics. To follow language instructions, a robot must understand human intent, ground its understanding in the state and action spaces, and solve the task by interacting with the environment. Training robots to do this is challenging, especially given that language-annotated data is limited. Existing deep learning approaches require large amounts of expensive human language-annotated demonstrations and are brittle on instructions outside the training data. Visual goals (i.e., goal images), though less intuitive for humans, provide complementary benefits for task representation in policy learning. Goals benefit from being easy to ground since, as images, they can be directly compared with other states. More importantly, goal tasks provide additional supervision and enable learning from unstructured data through hindsight relabeling [1; 2; 3]. However, compared to language instructions, specifying visual goals is less practical for real-world applications, where users likely prefer to tell the robot what they want rather than having to show it. Exposing an instruction-following interface for goal-conditioned policies could combine the strengths of both goal- and language- task specification to enable generalist robots that can be easily commanded. While goal-conditioned policy learning can help digest unstructured data, non-robot vision-language data sources make it possible to connect language and visual goals for generalization to diverse instructions in the real world. To this end, we propose Goal Representations for Instruction Following (GRIF), an approach that jointly trains a language- and a goal- conditioned policy with aligned task representations.
We term task representations _aligned_ because our objective encourages learning similar representations for language instructions and state transitions that correspond to the same semantic task. GRIF learns this representation structure explicitly through a contrastive task alignment term. Since task representations across language and image goal modalities have similar semantics, this approach allows us to use robot data collected without annotations to improve the agent's performance on image-goal tasks when viewed as a goal-conditioned policy, and thus to indirectly improve language-conditioned performance in a semi-supervised manner. An overview of GRIF is shown in Figure 1. We present an approach for learning a language interface for visuomotor control without extensive language labels. With this method, we demonstrate that the semantic knowledge from a pre-trained vision-language model (CLIP [4]) can be used to improve task representations and manipulation even though such models perform poorly at task understanding out-of-the-box. Our experiments show that aligning task representations to scene changes enables improved performance at grounding and following language instructions within diverse real-world scenes. ## 2 Related Work **Robotic control with language interfaces.** Early works in language-conditioned robotic control use hand-designed parse trees or probabilistic graphical models to convert instructions into symbolic states to configure the downstream planners and controllers [5; 6; 7; 8]. To generalize beyond limited human specifications, a growing number of works have trained conditional policies end-to-end to follow natural language instructions [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Combining recent advances in large language models (LLMs) [21] with learned language-conditioned policies as a low-level API has paved the way for broad downstream applications with improved planning and generalization [22; 23; 24; 25; 26; 27]. However, most of these methods need high-capacity policy networks with massive, costly labeled demonstration data. As a result, the learned policies often generalize poorly to unseen scenarios or can only handle limited instructions in real-world scenes. Unlike past work, we learn low-level language-conditioned control from less annotated data. Figure 1: **Left: Our approach learns representations of instructions that are aligned to transitions from the initial state to the goal. When commanded with instructions, the policy \(\pi\) computes the task representation \(z\) from the instruction and predicts the action \(a\) to solve the task. Our approach is trained with a small number of labeled demonstrations and large-scale unlabeled demonstrations. Right: Our approach can solve diverse tasks and generalize to vast environment variations.** **Vision-language pre-training.** Vision-language models (VLMs) enable textual descriptions to be associated with visual scenes [4; 28]. Through contrastive learning over internet-scale data, recent large-scale VLMs such as CLIP [4] have achieved unprecedented zero-shot and few-shot generalization capabilities, with a wide range of applications. Despite these advances, applying pre-trained VLMs to robotic control is not straightforward since control requires grounding instructions in motions instead of static images. Through training from scratch or fine-tuning on human trajectories [29; 30], recent approaches learn representations for visuomotor control [31; 32].
These works use language labels to learn visual representations for control without directly using language as an interface to the policy. In CLIPort, Shridhar et al. [33] use pre-trained CLIP [4] to enable sample-efficient policy learning. Their approach selects actions from high-level skills through imitation, assuming access to predefined pick-and-place motion primitives with known camera parameters. In contrast, our approach learns to align the representation of the instruction and the representation of the transition from the initial state to the goal on labeled robot data, and uses these representations for control without assumptions about the observation and action spaces. Other approaches use VLMs to recover reward signals for reinforcement learning [34; 35; 36; 37; 3]. In contrast, our approach directly trains a language-conditioned policy through imitation learning without the need for online interactions with the environment. **Learning language-conditioned tasks by reaching goals.** Alternatively, language-conditioned policies can be constructed or learned through goal-conditioned policies [38; 39]. Lynch and Sermanet [40] propose an approach that facilitates language-conditioned imitation learning by sharing the policy network and aligning the representations of the two conditional tasks. Based on the same motivation, we propose an alternative approach which explicitly extends the alignment of VLMs to specify tasks as changes in the scene. By tuning a contrastive alignment objective, our method is able to exploit the knowledge of VLMs [4] pre-trained on broad data. This explicit alignment improves upon past approaches to connecting images and language [41; 42] by explicitly aligning tasks instead of merely jointly training on conditional tasks. In Sec. 5, we show our approach significantly improves the performance of the learned policy and enhances generalization to new instructions. ## 3 Problem Setup Our objective is to train robots to solve tasks specified by natural language from interactions with the environment. This problem can be formulated as a conditional Markov decision process (MDP) denoted by the tuple \((\mathcal{S},\mathcal{A},\rho,P,\mathcal{W},\gamma)\), with state space \(\mathcal{S}\), action space \(\mathcal{A}\), initial state probability \(\rho\), transition probability \(P\), an instruction space \(\mathcal{W}\), and discount \(\gamma\). Given the instruction \(\ell\in\mathcal{W}\), the robot takes action \(a_{t}\in\mathcal{A}\) given the state \(s_{t}\) at each time step \(t\) to achieve success. To solve such tasks, we train a language-conditioned policy \(\pi(a|s,\ell)\) on a combination of human demonstrations and autonomously collected trajectories. Since high-quality natural language annotations are expensive and time-consuming to obtain, we assume that only a small portion of the trajectories are labeled with the corresponding instructions. The robot has access to a combination of two datasets--a small-scale labeled dataset \(\mathcal{D}_{L}\) with annotated instructions and a large-scale unlabeled dataset \(\mathcal{D}_{U}\) consisting of more diverse play data collected in an open-ended manner. Our goal is to train \(\pi(a|s,\ell)\) while taking advantage of both the labeled and unlabeled datasets. We formulate \(\pi(a|s,\ell)\) as a stochastic policy that predicts the Gaussian distribution \(\mathcal{N}(\mu_{a},\Sigma_{a})\).
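Since the policy outputs a Gaussian \(\mathcal{N}(\mu_{a},\Sigma_{a})\), imitation on either task modality reduces to maximizing the log-likelihood of demonstrated actions. A minimal sketch of such a policy head and its behavioral cloning loss follows; the network sizes and the diagonal covariance are our illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """pi_theta(a | s, z): maps state features and a task embedding z
    to the mean and (diagonal) covariance of a Gaussian over actions."""

    def __init__(self, state_dim: int, task_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + task_dim, 256), nn.ReLU(),
            nn.Linear(256, 2 * action_dim),
        )

    def forward(self, state, z):
        out = self.net(torch.cat([state, z], dim=-1))
        mean, log_std = out.chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

def bc_loss(policy, state, z, action):
    # behavioral cloning: maximize log-likelihood of demonstrated actions
    dist = policy(state, z)
    return -dist.log_prob(action).sum(-1).mean()
```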
## 4 Goal Representations for Instruction Following We propose Goal Representations for Instruction Following (GRIF) to interface visuomotor policies with natural language instructions in a semi-supervised fashion (Figure 2). Although the language-conditioned policy cannot be directly trained on the unlabeled dataset \(\mathcal{D}_{U}\), we can facilitate the training through goal-conditioned tasks. Solving both types of tasks requires the policy to understand the human intent, ground it in the current observation, and predict necessary actions. Although the first steps involve understanding task specifications of different modalities (goal images and language), the remaining steps of such processes can be shared between the two settings. To this end, we decouple the language-conditioned policy \(\pi(a|s,\ell)\) into a policy network \(\pi_{\theta}(a|s,z)\) and a language-conditioned task encoder \(f_{\varphi}(\ell)\), where \(z=f_{\varphi}(\ell)\) is the representation of the task specified by the instruction \(\ell\). To solve the goal-conditioned task, we also introduce a goal-conditioned task encoder \(h_{\psi}\). The policy network \(\pi_{\theta}\) is shared between the language-conditioned and goal-conditioned tasks. This approach relies on the alignment of task representations. While most existing VLMs align text with static images, we argue that the representation of the goal-conditioned tasks should be computed from the state-goal pair \((s_{0},g)\). This is because the instruction often focuses on the changing factors from the initial state to the goal rather than directly describing the entire goal image, e.g., "_move the metal pan to the left_". Therefore, the representations of goal-conditioned tasks are computed as \(z=h_{\psi}(s_{0},g)\) and we aim to train the encoders such that for \((s_{0},g,\ell)\) sampled from the same trajectory, \(f_{\varphi}(\ell)\) and \(h_{\psi}(s_{0},g)\) should be close in the latent space, and far apart otherwise. We illustrate our high-level approach in Figure 2. ### Explicit Alignment through Contrastive Learning We propose explicitly aligning the representations of goal-conditioned and language-conditioned tasks through contrastive learning [41]. Compared to implicitly aligning the task representations through joint training of the two conditional policies, contrastive alignment requires that all relevant information for selecting actions be included in the shared task representation. This improves the transfer between the action prediction tasks for both goal and language modalities by preventing the policy from relying on features only present in one task modality in selecting actions. Using an InfoNCE objective [42], we train the two encoders \(f_{\varphi}\) and \(h_{\psi}\) to represent instructions \(\ell\) and transitions \((s_{0},g)\) according to their task semantics. More concretely, for \((s_{0},g)\) and \(\ell\) that correspond to the same task, we would like their embeddings \(z_{\ell}=f_{\varphi}(\ell)\) and \(z_{g}=h_{\psi}(s_{0},g)\) to be close in the latent space, while \(z_{\ell}\) and \(z_{g}\) corresponding to different tasks to be far apart. To compute the InfoNCE objective, we define \(\mathcal{C}(s,g,\ell)=\cos(f(\ell),h(s,g))\) with the cosine similarity \(\cos(\cdot,\cdot)\). We sample positive data \(s^{+},g^{+},\ell^{+}\sim\mathcal{D}_{L}\) by selecting the start state, end state, and language annotation of a random trajectory.
We sample negative examples \(s^{-},g^{-}\sim\mathcal{D}_{L}\) by selecting the start state and end state of a random trajectory, and sample \(\ell^{-}\sim\mathcal{D}_{L}\) by selecting the language annotation of another random trajectory. For each positive tuple, we sample \(k\) negative examples and denote them as \(\{s^{-}_{i},g^{-}_{i}\}_{i=1}^{k}\) and \(\{\ell^{-}_{j}\}_{j=1}^{k}\). Then we can define the InfoNCE objective \(\mathcal{L}_{\text{task}}\): \[\mathcal{L}_{\text{lang}\to\text{goal}} =-\log\frac{\exp(\mathcal{C}(s^{+},g^{+},\ell^{+})/\tau)}{\exp(\mathcal{C}(s^{+},g^{+},\ell^{+})/\tau)+\sum_{i=1}^{k}\exp(\mathcal{C}(s^{-}_{i},g^{-}_{i},\ell^{+})/\tau)}\] \[\mathcal{L}_{\text{goal}\to\text{lang}} =-\log\frac{\exp(\mathcal{C}(s^{+},g^{+},\ell^{+})/\tau)}{\exp(\mathcal{C}(s^{+},g^{+},\ell^{+})/\tau)+\sum_{j=1}^{k}\exp(\mathcal{C}(s^{+},g^{+},\ell^{-}_{j})/\tau)}\] \[\mathcal{L}_{\text{task}} =\mathcal{L}_{\text{lang}\to\text{goal}}+\mathcal{L}_{\text{goal}\to\text{lang}} \tag{1}\] where \(\tau\) is a temperature hyperparameter. \(\mathcal{L}_{\text{lang}\to\text{goal}}\) and \(\mathcal{L}_{\text{goal}\to\text{lang}}\) represent the log classification accuracy of our alignment in predicting goals from language and language from goals respectively. ### Weight Initialization with Vision-Language Models To handle tasks involving objects and instructions beyond those contained in the limited labeled dataset, we wish to incorporate prior knowledge from broader sources into the encoders \(f_{\varphi}\) and \(h_{\psi}\). For this purpose, we investigate practical ways to incorporate Vision-Language Models (VLMs) [4] pre-trained on massive paired images and texts into our encoders. Pre-trained VLMs demonstrate effective zero-shot and few-shot generalization capability for vision-language tasks [4; 43]. However, they are originally designed for aligning a single static image with its caption without the ability to understand the _changes_ in the environment that language tasks correspond to, and perform poorly on compositional generalization [44; 45], which is key to modeling changes in scene state. We wish to encode the change between images while still exploiting prior knowledge in pre-trained VLMs. To address this issue, we devise a mechanism to accommodate and fine-tune the CLIP [4] model for aligning the transition \((s_{0},g)\) with the instruction \(\ell\). Specifically, we duplicate and halve the weights of the first layer of the CLIP architecture so it can operate on pairs of stacked images rather than single images. Details on how we modify the pre-trained CLIP to accommodate encoding changes are presented in Appendix C.2. In practice, we find this mechanism significantly improves the generalization capability of the learned policy \(\pi_{\theta}(a|s,g)\). ### Policy Learning with Aligned Representations We train the policy jointly on the two datasets \(\mathcal{D}_{L}\) and \(\mathcal{D}_{U}\) with the aligned task representations. By sampling \((\ell,s_{t},a_{t})\) from \(\mathcal{D}_{L}\), we train the policy network \(\pi_{\theta}(a|s,z)\) to solve language-conditioned tasks with \(z=f_{\varphi}(\ell)\). And by sampling \((s_{0},g,s_{t},a_{t})\) from \(\mathcal{D}_{L}\cup\mathcal{D}_{U}\), \(\pi_{\theta}\) is trained to reach goals with \(z=h_{\psi}(s_{0},g)\). We train with behavioral cloning to maximize the likelihood of the actions \(a_{t}\). We investigate two ways to train the policy given the encoders \(f_{\varphi}\) and \(h_{\psi}\); a code-level sketch of the contrastive alignment objective in (1) is given below.
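A minimal, self-contained sketch of the symmetric objective in (1) follows. For simplicity it treats the other examples in a batch as the \(k\) negatives, which is an implementation convenience assumed here rather than a detail taken from the paper.

```python
import torch
import torch.nn.functional as F

def task_alignment_loss(z_lang, z_goal, tau=0.1):
    """Symmetric InfoNCE over a batch.

    z_lang: (B, D) embeddings f_phi(l) of instructions
    z_goal: (B, D) embeddings h_psi(s_0, g) of start/goal pairs;
            row i of each tensor comes from the same trajectory.
    """
    z_lang = F.normalize(z_lang, dim=-1)
    z_goal = F.normalize(z_goal, dim=-1)
    # logits[i, j] = cosine similarity of instruction i with transition j
    logits = z_lang @ z_goal.T / tau
    labels = torch.arange(z_lang.shape[0], device=z_lang.device)
    # L_{lang -> goal} + L_{goal -> lang}
    return F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)
```

In a sketch like this, the first-layer modification of Sec. 4.2 would amount to duplicating the weights of CLIP's input convolution along the channel axis and halving them, so that a stacked \((s_{0},g)\) pair can be consumed in place of a single image.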
The straightforward way is to jointly train the policy network \(\pi_{\theta}\) and the two encoders end-to-end. This process adapts the encoders with the policy network to encourage them to incorporate information that facilitates downstream robotic control, but can also backfire if the policy learns to rely on visual-only features that are absent in the language-conditioned setting. Alternatively, we can freeze the pre-trained weights of the two encoders and only train the shared policy network \(\pi_{\theta}\) on the two datasets. In Section 5, we evaluate and discuss the performances of both options. ## 5 Experiments Our work started with the premise of tapping into large, goal-conditioned datasets. To build a language interface for goal-conditioned policy learning, we proposed to learn explicitly aligned task representations, and to align instructions to state changes rather than static goals. Lastly, we advocated for the use of pre-trained VLMs to incorporate larger sources of vision-language knowledge. Therefore, we aim to test the following hypotheses in our experiments: **H1:** Unlabeled trajectories will benefit the language-conditioned policy on new instructions. **H2:** Explicitly aligning task representations improves upon the implicit alignment from LLfP-style joint training [40]. **H3:** The prior knowledge in pre-trained VLMs can improve learned task representations. **H4:** Aligning _transitions_ with language enables better use of VLMs compared to conventional image-language contrastive methods [37, 46]. Figure 2: **Left:** We explicitly align representations between goal-conditioned and language-conditioned tasks on the labeled dataset \(\mathcal{D}_{L}\) through contrastive learning. **Right:** Given the pre-trained task representations, we train a policy on both labeled and unlabeled datasets. Our experiments are conducted in a table-top manipulation domain. For training, we use a labeled dataset \(\mathcal{D}_{L}\) containing 7k trajectories and an unlabeled \(\mathcal{D}_{U}\) containing 47k trajectories. Our approach learns to imitate the 6 DOF continuous gripper control actions in the data at 5Hz. The evaluation scenes and unseen instructions are shown in Figure 3. Additional details about the environment, the dataset, and the breakdown of results are described in Appendices B and E. ### Comparative Results We compare the proposed GRIF with four baseline methods on a set of 15 unseen instructions from all 3 scenes and report the aggregated results in Figure 3, with GRIF attaining the best performance across all scenes. The per-task success rates can be found in Appendix E. **LCBC**[9] uses a behavioral cloning objective to train a policy conditioned on language from \(\mathcal{D}_{L}\), similar to prior methods on instruction-conditioned imitation learning. **LLfP**[40] jointly trains a goal-conditioned and a language-conditioned policy on partially labeled data, but does not learn aligned task representations. **R3M**[32] provides pre-trained state representations for robot manipulation that are predictive of language-conditioned rewards. We use this approach as a baseline by jointly training goal- and language-conditioned policies while using R3M state encodings as goal representations (i.e., \(h_{\psi}(s_{0},g)=\text{R3M}(g)\)). **BC-Z**[10] jointly trains language- and video-conditioned policies and uses an additional cosine similarity term to align video and language embeddings.
This approach does not transfer directly into our goal-conditioned setting, but we create a baseline that adapts it to our setting by jointly training goal- and language-conditioned policies while aligning task representations with a cosine distance loss. The architecture choices are standardized across all methods for fair comparisons. Unless stated otherwise, all baselines use a ResNet-18 as the goal encoder \(h_{\psi}(s_{0},g)\). In our preliminary experiments, this architecture was found to give good performance when used to train goal-conditioned policies in our setting. For the language encoder \(f_{\varphi}(\ell)\), all baselines use a pre-trained and frozen MUSE model [47], as in previous work [40, 10]. We find that language-conditioned policies must make use of unlabeled trajectories to achieve non-zero success rates when generalizing to new language instructions, in support of **H1**. LCBC does not use unlabeled data and fails to complete any tasks. R3M jointly trains goal- and language-conditioned policies, but it also fails all tasks. This is likely due to its goal encodings being frozen and unable to be implicitly aligned to language during joint training. Methods that use implicit or explicit alignment (GRIF, LLfP, BC-Z) are able to exploit unlabeled goal data to follow instructions to varying degrees of success. These comparisons suggest that the combined effect of using pre-trained CLIP to align transitions with language significantly improves language-conditioned capabilities. Our model significantly outperformed all baselines on 8 out of 15 tasks, achieving high success rates on several tasks where the baselines almost completely fail (_"place the knife in front of the microwave", "move the pan in front of the cloth", "put the knife on the purple cloth"_), while achieving similar performance to the next-best baseline on the remaining tasks. Figure 3: Comparison of success rates \(\pm\)SE between the top three methods across all trials within the three scenes. Two other baselines, LCBC and R3M (not shown), achieved 0.0 success in all evaluation tasks, although they do succeed on tasks that are heavily covered in the training data. Statistical significance is starred. The initial observation and instructions of each scene are shown. Where baselines failed, we often observed _grounding_ failures. The robot reached for incorrect objects, placed them in incorrect locations, or was easily distracted by nearby objects into performing a different task. ### Ablation Study We run a series of ablations to analyze the performance of GRIF and test the hypotheses. **No Align** ablates the effect of explicit alignment by removing the contrastive objective. We also unfreeze the task encoders so that they are implicitly aligned via joint training of the language- and goal-conditioned policies. **No CLIP** ablates the effect of using pre-trained CLIP by replacing the image and text encoders with a ResNet-18 and pre-trained MUSE language encoder. In **No Start**, the image task representations only depend on goals as \(h_{\psi}(g)\), instead of depending on transitions as \(h_{\psi}(s_{0},g)\). This is the conventional way to connect goals and language with CLIP that is often used in previous work [46; 37]. For **GRIF (Labeled)**, we exclude \(\mathcal{D}_{U}\) to study whether using unlabeled data is important for performance. **GRIF (Joint)** trains the task alignment and policy losses jointly, taking gradients through the image encoder and freezing the language encoder.
This is the end-to-end approach discussed in Section 4.3. We refer to our full approach without joint training as **GRIF (Frozen)** in the remainder of this section. As shown in Figure 4, explicit alignment, pre-trained CLIP, and transition-based task representations all play critical roles in achieving high success rates. Notably, the conventional approach of aligning static goals and instructions with CLIP (**No Start**) fails almost completely in our setting. This is in support of **H4** and confirms that transitions, and not goal images, should be aligned to language tasks. In **GRIF (Labeled)**, dropping \(\mathcal{D}_{U}\) significantly decreases success rates, further supporting **H1**. We observe that this is primarily due to a deterioration of manipulation skills rather than grounding, which is expected as grounding is mostly learned via explicit alignment on \(\mathcal{D}_{L}\). Regarding **H2** and **H3**, we observe that removing either alignment or CLIP results in a large drop in performance. We also observed that **No Align** outperforms its counterpart LLfP by using the pre-trained CLIP model (after the modification in Sec. 4.2) in the task encoder. We hypothesize that this is because CLIP has already been explicitly aligned during pre-training, and some of its knowledge is retained during joint training with the policy even without GRIF's task alignment loss. Lastly, deciding to freeze the task encoders during policy training does not appear to significantly affect our model's performance. This is likely because the contrastive learning phase already learns representations that can represent the semantic task, so there is less to gain from further implicit alignment during joint training. ### Analysis on the Learned Task Representations For additional analysis, we evaluate our model's task grounding capabilities independently of the manipulation policy and compare it with ablations. Specifically, we evaluate how well our model can connect new language instructions to correct goals in a scene. This is important to downstream policy success: if the model is able to project the language to a representation \(f_{\varphi}(\ell)\) that is close to that of the correct (but unprovided) goal \(h_{\psi}(s_{0},g)\), then the policy will likely be able to execute the task since it has been trained on a large amount of goal-conditioned data. Figure 4: Success rates of ablations with one standard error.
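Because the encoders are trained contrastively, this grounding property can be probed directly by nearest-neighbor retrieval in the shared embedding space, as done in the next subsection. A minimal sketch of such a retrieval metric (the batching and normalization details are our assumptions):

```python
import torch
import torch.nn.functional as F

def topk_retrieval_accuracy(z_lang, z_goal, k=5):
    """Fraction of instructions whose matching (s_0, g) transition is
    among the top-k most similar transitions in the batch.

    z_lang, z_goal: (B, D); row i of each encodes the same task.
    """
    sims = F.normalize(z_lang, dim=-1) @ F.normalize(z_goal, dim=-1).T
    topk = sims.topk(k, dim=-1).indices               # (B, k)
    targets = torch.arange(sims.shape[0]).unsqueeze(-1)
    return (topk == targets).any(-1).float().mean().item()
```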
The alignment accuracy is also more than 50% higher than when using non-VLM encoders (**No CLIP**), suggesting potentially large gains in grounding capability through using VLMs. We also study the effect of the number of language annotations on our model's grounding capability. Even at less than half the number of language annotations (3k), GRIF outperforms all the ablations in Figure 5, achieving a retrieval accuracy of \(73\%\). Detailed results for this ablation are presented in Appendix F, showing our approach is robust to lower amounts of language supervision. ## 6 Discussion, Limitations, and Future Work Our approach to aligning image goals and language instructions enables a robot to utilize large amounts of unlabeled trajectory data to learn goal-conditioned policies, while providing a "language interface" to these policies via aligned language-goal representations. In contrast to prior language-image alignment methods, our representations align _changes_ in state to language, which we show leads to significantly better performance than more commonly used CLIP-style image-language alignment objectives. Our experiments demonstrate that our approach can effectively leverage unlabeled robotic trajectories, with large improvements in performance over baselines and methods that only use the language-annotated data. **Limitations and future work.** Our method has a number of limitations that could be addressed in future. For instance, our method is not well-suited for tasks where instructions say more about _how_ to do the task rather than _what_ to do (e.g., _"pour the water slowly"_)--such qualitative instructions might require other types of alignment losses that more effectively consider the intermediate steps of task execution. Our approach also assumes that all language grounding comes from the portion of our dataset that is fully annotated or a pre-trained VLM. An exciting direction for future work would be to extend our alignment loss to utilize non-robot vision-language data, such as videos, to learn rich semantics from Internet-scale data. Such an approach could then use this data to improve grounding on language not in the robot dataset and enable broadly generalizable and powerful robotic policies that can follow user instructions. Figure 5: Left: Comparison of the top-5 text to image retrieval accuracy of representations learned by different ablations. Right: Examples of retrieved image pairs given instructions. ## Acknowledgements We would like to acknowledge the funding provided by AFOSR FA9550-22-1-0273, ONR N00014-20-1-2383, NSF IIS-2150826, and ONR YIP N00014-20-1-2736.
2303.17822
Order and Chaos in the $SU(2)$ Matrix Model: Ergodicity and Classical Phases
We study the classical non-linear dynamics of the $SU(2)$ Yang-Mills matrix model introduced in [1] as a low-energy approximation to two-color QCD. Restricting to the spin-0 sector of the model, we unearth an unexpected tetrahedral symmetry, which endows the dynamics with an extraordinarily rich structure. Amongst other things, we find that the spin-0 sector contains co-existing chaotic sub-sectors as well as nested chaotic basins, and displays alternation between regular and chaotic dynamics as energy is varied. The symmetries also grant us a considerable amount of analytic control which allows us to make several quantitative observations. Next, by noting that several features of the model have natural thermodynamic interpretations, we switch from our original chaos-theoretic viewpoint to a more statistical perspective. By so doing, we see that the classical spin-0 sector has a rich phase structure, arising from ergodicity breaking, which we investigate in depth. Surprisingly, we find that many of these classical phases display numerous similarities to previously discovered quantum phases of the spin-0 sector [2], and we explore these similarities in a heuristic fashion.
Chaitanya Bhatt, Vijay Nenmeli, Sachindeo Vaidya
2023-03-31T06:38:23Z
http://arxiv.org/abs/2303.17822v1
# Order and Chaos in the \(SU(2)\) Matrix Model ###### Abstract We study the classical non-linear dynamics of the \(SU(2)\) Yang-Mills matrix model introduced in [1] as a low-energy approximation to two-color QCD. Restricting to the spin-0 sector of the model, we unearth an unexpected tetrahedral symmetry, which endows the dynamics with an extraordinarily rich structure. Amongst other things, we find that the spin-0 sector contains co-existing chaotic sub-sectors as well as nested chaotic basins, and displays alternation between regular and chaotic dynamics as energy is varied. The symmetries also grant us a considerable amount of analytic control which allows us to make several quantitative observations. Next, by noting that several features of the model have natural thermodynamic interpretations, we switch from our original chaos-theoretic viewpoint to a more statistical perspective. By so doing, we see that the classical spin-0 sector has a rich phase structure, arising from ergodicity breaking, which we investigate in depth. Surprisingly, we find that many of these classical phases display numerous similarities to previously discovered _quantum_ phases of the spin-0 sector [2], and we explore these similarities in a heuristic fashion. ## 1 Introduction Quantum chromodynamics is an \(SU(3)\) non-Abelian gauge theory that plays an indispensable role in the physics of strong interactions. It is, however, subtle and complicated: not only is it nonlinear and possesses an infinite number of degrees of freedom, it also has an infinite-dimensional gauge group. Progress in understanding the theory has been made mostly in the perturbative regime, or by approximating the theory by simpler models. One such model is the \(SU(3)\) gauge matrix model (such as those studied in [3], [4], [5]) obtained as the extreme low-energy limit of the full gauge field theory on \(S^{3}\times\mathbb{R}\): it has been successful in predicting the masses of light hadrons with surprising accuracy [2]. In this work we will study an even simpler model, the \(SU(2)\) gauge matrix model and in particular, its classical dynamics. Although nonlinear, the model has a finite number of degrees of freedom: there are three rotational, three gauge and three non-compact gauge-invariant degrees of freedom. Angular momentum conservation naturally allows a decomposition of the full dynamics into non-rotating and rotating sectors. Here we will restrict our attention to the former, which we shall henceforth refer to as the 'spin-0 sector' of the matrix model. Despite this restriction, the spin-0 sector still has a six-dimensional phase space. Coupled with the dearth of quantitative methods inherent to non-linear systems, even this reduced system seems, at first glance, intractable. However, as we shall see, the discovery of a hidden tetrahedral symmetry simplifies matters enormously. In this avatar, the model is a three-dimensional isotropic oscillator perturbed by cubic and quartic non-linearities. A theorem of Weinstein [6] guarantees that integrable Hamiltonian systems continue to have periodic orbits even when perturbed by a small nonlinearity. The existence of periodic orbits and the use of group-theoretic methods for classifying them allows us to systematize our study. While Hamiltonian chaos is typically studied using the apparatus of Kolmogorov-Arnold-Moser (KAM) theory [7, 8, 9], physical insights are sometimes masked by the abstract nature of the necessary computations.
A study of periodic orbits from the point of view of their (in)stability will prove to be ideal for our setup and will help us develop a much more intuitive feel for the dynamics. Second, the periodic orbits come with symmetries of their own ([10],[13]), which allows us to further simplify our analysis. In particular, we shall see that a good fraction of the orbits live on a four dimensional submanifold of the full phase space and can be separately studied using an appropriately reduced _effective_ four dimensional system. This is not unlike the Kepler problem, where the rotational symmetry renders generic orbits planar. The effects of the extra dimensions are mostly cosmetic, so that we lose no generality by treating these orbits using the effective four-dimensional system and eventually reverting back to the full model. Lastly, the symmetries also simplify the expressions governing time evolution along certain periodic orbits, to the point where analytic solutions can be obtained, and uncover dynamics that is far more intricate than one usually encounters. It turns out that the set of _all_ trajectories over the phase space can be partitioned into classes, with each class stemming from the destabilization of a specific type of periodic orbit. Although the idea of regarding chaotic trajectories as destabilisations of periodic orbits is not new, any 'memory' of the parent orbit is usually rapidly erased in the chaotic domain, and perturbations about different periodic orbits quickly cease to be distinguishable from one another. What _is_ novel here is that such a 'memory loss' does not occur as long as the energy belongs to particular bands. As a result, for such energies, the phase space displays the peculiar feature of _multiple co-existing chaotic 'basins'_. Systems containing co-existing attractors are quite rare - the Rabinovich-Fabrikant model [14] being the prototypical example - and are normally rather artificial. It is thus extremely interesting to see this phenomenon arising naturally in the setting of a gauge matrix model. The spin-0 sector thus possesses an extraordinarily rich dynamics, worthy of a study even as a standalone non-linear system. Our eventual aim, however, is to work out how such dynamics ties in with the physics of gauge theory. Such a mapping can be carried out by identifying chaotic and regular sectors of the non-linear system with _classical phases_ of the underlying gauge-matrix model [3]. While such themes will indeed feature in our analysis, albeit in a more nuanced manner, they will only form one half of a two-stage procedure. This is because we have, in addition to our non-linear analysis, a thorough repository of the _quantum_ dynamics of the matrix model [2]. In particular, the quantum matrix model has been shown to admit quantum phases via superselection sectors; phases which, remarkably enough, bear some resemblance to the classical phases associated with certain classes of periodic orbits. The classical spin-0 sector of the full matrix model in some sense retains some 'memory' of its innate quantum nature! These links between the classical and quantum regimes can be exploited both ways: in one direction, we can use techniques from the phase study of the quantum theory to better elucidate their classical counterparts. On the other hand, our classical-quantum correspondence is not perfect: as we shall see, there are classical phases of the spin-0 sector which have no apparent quantum analogs.
It is thus natural to use well-established methods, such as the Gutzwiller trace formula [17], to attempt to search for quantum counterparts to these classical phases or, should they not exist, to understand the limits of this correspondence. These questions are by no means trivial, and will constitute the subject of a future work. In this article, we will just provide a heuristic outline of the various connections between classical and quantum phases. To explain the peculiar features of the spin-0 sector dynamics, we shall use three distinct but interlocking diagnostic tools, designed for similar but not identical purposes. Our first tool involves quantifying the growth of fluctuations about individual periodic orbits. The resulting fluctuation equations are identical in form to those describing the eigenstates of a quantum particle in a certain periodic potential. This correspondence allows us to connect well-known results of band theory to novel analogs in the study of fluctuations. As is well known from solid-state physics, the spectrum of a quantum particle in a periodic potential comprises several energy bands, separated from one another by band gaps [15]. As we will show, such features manifest on the nonlinear side of the correspondence as _alternations between regularity and chaos_ as we vary the energy. Although alternations between regularity and chaos (_intermittency_, as it is termed [16]) have been documented in the literature, such alternations are usually irregular, with no clear-cut methods for identifying regions of stability or instability. In contrast, the analytic control (which we owe to the tetrahedral symmetry) we have over our fluctuation equations allows us to make far more precise statements on the locations of transition points. Specifically, we will, for a particular class of periodic orbits, work out the _exact_ energy at which the _first_ transition from instability to stability occurs. For this same class of orbits, we will also be able to obtain asymptotically valid expressions for transition points in the high energy limit. This analysis follows from a study of the _monodromy matrix_ \(\mathcal{U}\) [18]. More precisely, it is the _spectrum_ of \(\mathcal{U}\) that proves to be a reliable indicator of orbit stability. In our case, it turns out that the symmetries of the spin-0 sector and the associated simplification in time evolution allow us to assess orbit stability using just a _single_ spectral invariant (Trace \(\mathcal{U}\)) rather than its entire spectrum. As we will later see, this reduction will additionally grant us an unusually strong analytic handle over the chaotic dynamics, and will help us derive a good number of precise quantitative results. Next, given that we are dealing with a highly non-linear system, it is natural to consider Lyapunov exponents and Poincare sections - the standard indicators of chaos. While these constructs do not normally yield analytical information, the latter is an excellent qualitative diagnostic for chaos, while the former reliably quantifies the 'degree of chaos' present. Adapted to our system, Poincare sections wonderfully bring out the numerous substructures underlying the full dynamics, particularly the phenomena of ergodicity breaking and co-existing chaotic 'basins'. Lyapunov exponents complement the visual aids provided by the Poincare sections and also serve as an excellent independent identifier for ergodicity breaking. A third set of diagnostic tools is drawn from the thermodynamics of small systems.
Statistical constructs such as temperature and entropy, while usually applied to many-body systems, can also be discussed in the context of chaotic dynamics owing to the common theme of ergodicity which underlies these constructs. We find that Gibbs entropy and temperature [19], first discussed in a non-linear dynamical context in [20], beautifully illustrate the ergodicity breaking inherent to our model. Additionally, they serve as a useful verification alongside the other diagnostic tools we have mentioned, and naturally lend weight to our interpretation of ergodicity breaking as classical phases. This article is organized as follows: In Section 2, we describe the Yang-Mills matrix model, its Lagrangian and Hamiltonian formalisms, and obtain the equations governing the spin-0 sector. In Section 3, we outline the symmetries of the spin-0 sector, investigate their topological and dynamical effects, and introduce the families of periodic orbits that they generate. Section 4 builds on this with a thorough enumeration of the structure and properties of these aforementioned families, adding an extra pair along the way. This is followed, in Section 5, by an extensive study of the stability properties of each family of orbits using monodromy matrix theory. We then pursue a traditional chaos study (Poincare sections and Lyapunov exponents) in Section 6, where we also tie these results to those of the monodromy analysis of the previous section. The thermodynamic viewpoint is pursued in Section 7, where we examine the relation between ergodicity (and its breaking) and Gibbs temperature. We also compare our observations with the results of our non-linear dynamical analysis of Section 6. Section 8 then explores the classical phase structure of the spin-0 sector, using the substantial collection of results developed in preceding sections. Section 9 presents evidence for the links between the classical phases and a host of quantum phases uncovered in a previous work of one of the authors (SV). Section 10 provides a summary of this work and indicates directions for future work. ## 2 Setting up the \(SU(2)\) Matrix Model The \(SU(2)\) matrix model contains nine degrees of freedom grouped into a single matrix variable \(M\in M_{3}(\mathbb{R})\) [1, 2]. The dynamics of the system is governed by the Lagrangian \[L_{YM}=\frac{1}{2g^{2}}\left(E_{i}^{a}E_{i}^{a}-B_{i}^{a}B_{i}^{a}\right),\quad i,a=1,2,3. \tag{2.1}\] Here \(g\) is the Yang-Mills coupling, and \(E\) and \(B\) are the chromoelectric and chromomagnetic fields respectively, defined as \[E_{i}^{a}=\dot{M}_{ia}+\epsilon_{abc}M_{0b}M_{ic},\quad B_{i}^{a}=\frac{1}{2}\epsilon_{ijk}F_{jk}^{a},\quad F_{ij}^{a}=-\epsilon_{ijk}M_{ka}+\epsilon_{abc}M_{ib}M_{jc}. \tag{2.2}\] Since the action possesses an \(SU(2)\) gauge symmetry, we may use the associated gauge freedom to fix \(M_{0a}\) to zero. Rewriting the Lagrangian (2.1) in terms of the matrix variable \(M\), we obtain \[L_{YM}=\frac{1}{2g^{2}}\text{tr}(\dot{M}^{\text{T}}\dot{M})-\frac{1}{2g^{2}}\text{tr}(M^{\text{T}}M)+\frac{3}{g^{2}}\text{det}M-\frac{1}{4g^{2}}[\text{tr}(M^{\text{T}}M)]^{2}+\frac{1}{4g^{2}}\text{tr}[(M^{\text{T}}M)^{2}]. \tag{2.3}\] This Lagrangian is invariant under a left \(O(3)\) action (physical rotations plus parity) and a right \(SO(3)\) action (gauge transformations).
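As a quick numerical consistency check (ours, not the paper's), the chromomagnetic term of (2.1), assembled from the definitions in (2.2) with \(M_{0a}=0\), reproduces the non-kinetic part of (2.3). A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
eps = np.zeros((3, 3, 3))  # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# B_i^a = -M_{ia} + (1/2) eps_{ijk} eps_{abc} M_{jb} M_{kc}, from (2.2)
B = -M + 0.5 * np.einsum("ijk,abc,jb,kc->ia", eps, eps, M, M)

# non-kinetic part of (2.3), with the overall 1/g^2 stripped:
# (1/2) tr(M^T M) - 3 det M + (1/4) [tr(M^T M)]^2 - (1/4) tr[(M^T M)^2]
t = np.trace(M.T @ M)
V = (0.5 * t - 3 * np.linalg.det(M)
     + 0.25 * t**2 - 0.25 * np.trace((M.T @ M) @ (M.T @ M)))

assert np.allclose(0.5 * np.einsum("ia,ia->", B, B), V)
```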
The left and right actions give rise to two sets of conserved charges - the physical angular momentum \(J=\dot{M}M^{T}-M\dot{M}^{T}\), arising from the left \(SO(3)\) action, and the gauge angular momentum \(\Gamma=\dot{M}^{T}M-M^{T}\dot{M}\), associated with the action of the gauge group. A new set of coordinates \((R,A,S)\), similar to the coordinates of singular value decomposition (SVD) [21, 22] will prove to be very convenient. The matrix \(M\) decomposes as \(M=RAS^{T}\) with \(R\in O(3),\;S\in SO(3)\) and \(A\) a real diagonal matrix \(\text{diag}(a_{1},a_{2},a_{3})\). Introducing the angular velocities \(\Omega\equiv R^{T}\dot{R}\) and \(\Lambda\equiv S^{T}\dot{S}\), the Lagrangian naturally separates into a kinetic term \(T\) and a potential term \(U\), and may thus be expressed as \[L_{YM} =\frac{1}{g^{2}}(T-U),\quad\text{where} \tag{2.4}\] \[T =\frac{1}{2}\text{tr}(\dot{A}^{2}-A^{2}(\Omega^{2}+\Lambda^{2})+2 \Omega A\Lambda A)\quad\text{and}\] (2.5) \[U =U(a_{1},a_{2},a_{3})=\frac{1}{2}[(a_{1}-a_{2}a_{3})^{2}+(a_{2}-a _{3}a_{1})^{2}+(a_{3}-a_{1}a_{2})^{2}]. \tag{2.6}\] The Lagrangian is independent of the 'angular' coordinates \(R\) and \(S\), and in particular, the potential \(U\) depends solely on the variables \((a_{1},a_{2},a_{3})\). With the \((R,A,S)\) coordinates, the angular momentum \(J\) and the gauge angular momentum \(\Gamma\) take the form \[J=R(\Omega A^{2}+A^{2}\Omega-2A\Lambda A)R^{T},\quad\Gamma=S(\Lambda A^{2}+A^ {2}\Lambda-2A\Omega A)S^{T}. \tag{2.7}\] For the phase space formulation, we begin by defining the canonical momenta \[p_{A}=\frac{\partial L}{\partial\dot{A}}=\frac{1}{g^{2}}\dot{A},\quad p_{ \Omega}=\frac{\partial L}{\partial\Omega}=\frac{1}{g^{2}}R^{T}JR,\quad p_{ \Lambda}=\frac{\partial L}{\partial\Lambda}=\frac{1}{g^{2}}S^{T}\Gamma S. \tag{2.8}\] In terms of the (phase space) coordinates \((R,A,S,p_{\Omega},p_{A},p_{\Lambda})\), the Hamiltonian is \[H_{YM} =\langle p_{\Omega},\Omega\rangle_{\mathfrak{so}(3)}+\langle p_{ \Lambda},\Lambda\rangle_{\mathfrak{so}(3)}+\langle p_{A},\dot{A}\rangle_{ \mathfrak{so}(3)}-L, \tag{2.9}\] \[=\frac{g^{2}}{2}\langle p_{A},p_{A}\rangle_{\mathfrak{so}(3)}+ \frac{g^{2}}{2}\langle p_{\Omega},\Omega\rangle_{\mathfrak{so}(3)}+\frac{g^{2 }}{2}\langle p_{\Lambda},\Lambda\rangle_{\mathfrak{so}(3)}+\frac{1}{g^{2}}U(A),\] (2.10) \[\text{where }\langle\xi,\eta\rangle_{\mathfrak{so}(3)} \equiv\frac{1}{2}\text{tr}(\xi^{\mathrm{T}}\eta).\] The Gauss law requires us to fix \(\Gamma=0\), i.e. \(p_{\Lambda}=0\). We will thus omit any equations involving these variables. The equations of motion (EOM) are then \[\frac{dA}{dt}=\frac{\partial H}{\partial p_{A}},\quad\frac{dp_{A} }{dt}=-\frac{\partial H}{\partial A}, \tag{2.11}\] \[\frac{dp_{\Omega}}{dt}=[p_{\Omega},\Omega],\quad\Omega=\frac{ \partial H}{\partial p_{\Omega}}. \tag{2.12}\] Since \(\Omega\) is an antisymmetric \(3\times 3\) matrix, it can be completely specified by a real triplet \((\omega_{1},\omega_{2},\omega_{3})\) via \[\Omega=\begin{bmatrix}0&-\omega_{3}&\omega_{2}\\ \omega_{3}&0&-\omega_{1}\\ -\omega_{2}&\omega_{1}&0\end{bmatrix}, \tag{2.13}\] with the triplet transforming as a three vector \(\omega\) under \(SO(3)\) rotations. 
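The identification (2.13) is the standard 'hat map' between antisymmetric matrices and \(\mathbb{R}^{3}\): \(\Omega v=\omega\times v\) for any vector \(v\). A minimal numpy check of this correspondence:

```python
import numpy as np

def hat(omega):
    """Antisymmetric matrix of eq. (2.13) associated with the triplet omega."""
    w1, w2, w3 = omega
    return np.array([[0.0, -w3, w2],
                     [w3, 0.0, -w1],
                     [-w2, w1, 0.0]])

rng = np.random.default_rng(0)
omega, v = rng.normal(size=3), rng.normal(size=3)
assert np.allclose(hat(omega) @ v, np.cross(omega, v))
```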
In terms of \(\omega_{i}\) and \(a_{i}\), we can explicitly rewrite the Hamiltonian as \[H_{YM} =\frac{g^{2}}{2}\left(p_{a_{1}}^{2}+p_{a_{2}}^{2}+p_{a_{3}}^{2} \right)+\frac{g^{2}}{2}\left(\frac{a_{2}^{2}+a_{3}^{2}}{(a_{2}^{2}-a_{3}^{2})^ {2}}p_{\omega_{1}}^{2}+\frac{a_{3}^{2}+a_{1}^{2}}{(a_{3}^{2}-a_{1}^{2})^{2}}p_ {\omega_{2}}^{2}+\frac{a_{1}^{2}+a_{2}^{2}}{(a_{1}^{2}-a_{2}^{2})^{2}}p_{ \omega_{3}}^{2}\right)\] \[+\frac{1}{2g^{2}}\left((a_{1}-a_{2}a_{3})^{2}+(a_{2}-a_{3}a_{1}) ^{2}+(a_{3}-a_{1}a_{2})^{2}\right). \tag{2.14}\] On canonically rescaling the coordinates and momenta as \(a_{i}\to ga_{i},p_{a_{i}}\rightarrow\frac{p_{a_{i}}}{g}\), we obtain \[H_{YM} =\frac{1}{2}\left(p_{a_{1}}^{2}+p_{a_{2}}^{2}+p_{a_{3}}^{2}\right) +\frac{1}{2}\left(\frac{a_{2}^{2}+a_{3}^{2}}{(a_{2}^{2}-a_{3}^{2})^{2}}p_{\omega _{1}}^{2}+\frac{a_{3}^{2}+a_{1}^{2}}{(a_{3}^{2}-a_{1}^{2})^{2}}p_{\omega_{2}}^ {2}+\frac{a_{1}^{2}+a_{2}^{2}}{(a_{1}^{2}-a_{2}^{2})^{2}}p_{\omega_{3}}^{2}\right)\] \[+\frac{1}{2}\left((a_{1}-ga_{2}a_{3})^{2}+(a_{2}-ga_{3}a_{1})^{2}+ (a_{3}-ga_{1}a_{2})^{2}\right). \tag{2.15}\] With these coordinates, the EOM 2.11,2.12 become \[\dot{a}_{1} =p_{a_{1}},\quad\dot{a}_{2}=p_{a_{2}},\quad\dot{a}_{3}=p_{a_{3}}, \tag{2.16}\] \[\dot{p}_{a_{1}} =-\frac{1}{2}\bigg{(}\frac{2a_{1}p_{\omega_{2}}^{2}}{\left(a_{1 }^{2}-a_{3}^{2}\right)^{2}}-\frac{4a_{1}\left(a_{1}^{2}+a_{3}^{2}\right)p_{ \omega_{2}}^{2}}{\left(a_{1}^{2}-a_{3}^{2}\right)^{3}}+\frac{2a_{1}p_{\omega_{3 }}^{2}}{\left(a_{1}^{2}-a_{2}^{2}\right)^{2}}-\frac{4a_{1}\left(a_{1}^{2}+a_{2 }^{2}\right)p_{\omega_{3}}^{2}}{\left(a_{1}^{2}-a_{2}^{2}\right)^{3}}\bigg{)}\] \[-\frac{2g^{2}a_{1}a_{2}^{2}-6ga_{3}a_{2}+2g^{2}a_{1}a_{3}^{2}+2a_{ 1}}{2},\] (2.17) \[\dot{p}_{a_{2}} =-\frac{1}{2}\bigg{(}\frac{2a_{2}p_{\omega_{1}}^{2}}{\left(a_{2 }^{2}-a_{3}^{2}\right)^{2}}-\frac{4a_{2}\left(a_{2}^{2}+a_{3}^{2}\right)p_{ \omega_{1}}^{2}}{\left(a_{2}^{2}-a_{3}^{2}\right)^{3}}+\frac{4a_{2}\left(a_{1 }^{2}+a_{2}^{2}\right)p_{\omega_{3}}^{2}}{\left(a_{1}^{2}-a_{2}^{2}\right)^{3} }+\frac{2a_{2}p_{\omega_{3}}^{2}}{\left(a_{1}^{2}-a_{2}^{2}\right)^{2}}\bigg{)}\] \[-\frac{2g^{2}a_{2}a_{1}^{2}-6ga_{3}a_{1}+2g^{2}a_{2}a_{3}^{2}+2a_{ 2}}{2},\] (2.18) \[\dot{p}_{a_{3}} =-\frac{1}{2}\bigg{(}\frac{4a_{3}\left(a_{2}^{2}+a_{3}^{2}\right) p_{\omega_{1}}^{2}}{\left(a_{2}^{2}-a_{3}^{2}\right)^{3}}+\frac{2a_{3}p_{ \omega_{1}}^{2}}{\left(a_{2}^{2}-a_{3}^{2}\right)^{2}}+\frac{4a_{3}\left(a_{1 }^{2}+a_{3}^{2}\right)p_{\omega_{2}}^{2}}{\left(a_{1}^{2}-a_{3}^{2}\right)^{3} }+\frac{2a_{3}p_{\omega_{2}}^{2}}{\left(a_{1}^{2}-a_{3}^{2}\right)^{2}}\bigg{)}\] \[-\frac{2g^{2}a_{3}a_{1}^{2}-6ga_{2}a_{1}+2g^{2}a_{2}^{2}a_{3}+2a_{ 3}}{2},\] (2.19) \[\dot{p}_{\omega_{1}} =-\frac{\left(a_{2}^{2}-a_{3}^{2}\right)\left(-3a_{1}^{4}+\left(a _{2}^{2}+a_{3}^{2}\right)a_{1}^{2}+a_{2}^{2}a_{3}^{2}\right)g^{4}p_{\omega_{2}} p_{\omega_{3}}}{\left(a_{1}^{2}-a_{2}^{2}\right)^{2}\left(a_{1}^{2}-a_{3}^{2} \right)^{2}},\] (2.20) \[\dot{p}_{\omega_{2}} =\frac{\left(a_{1}^{2}-a_{3}^{2}\right)\left(-3a_{1}^{4}+a_{3}^{ 2}a_{2}^{2}+a_{1}^{2}\left(a_{2}^{2}+a_{3}^{2}\right)\right)g^{4}p_{\omega_{1} }p_{\omega_{3}}}{\left(a_{1}^{2}-a_{2}^{2}\right)^{2}\left(a_{2}^{2}-a_{3}^{2} \right)^{2}},\] (2.21) \[\dot{p}_{\omega_{3}} =-\frac{\left(\left(a_{2}^{2}+a_{3}^{2}\right)a_{1}^{4}-\left(a_{ 2}^{4}+3a_{3}^{4}\right)a_{1}^{2}+3a_{2}^{2}a_{3}^{4}-a_{2}^{4}a_{3}^{2}\right)g ^{4}p_{\omega_{1}}p_{\omega_{2}}}{\left(a_{1}^{2}-a_{3}^{2}\right)^{2}\left(a_{ 2}^{2}-a_{3}^{2}\right)^{2}}. 
\tag{2.22}\] From these equations, it is easy to see that we have a consistent set of solutions with \(p_{\omega_{i}}\)'s set to zero. Physically, this corresponds to the irrotational sector of the matrix model, and it is these equations that we will study under the name of the spin-0 sector. Explicitly, the equations governing the dynamics of the spin-0 sector are then \[\dot{a}_{1}(t)=p_{a_{1}}(t),\quad\dot{a}_{2}(t)=p_{a_{2}}(t),\quad\dot{a}_{3}(t )=p_{a_{3}}(t), \tag{2.23}\] \[\dot{p}_{a_{1}}(t) =-\frac{2g^{2}a_{1}(t)a_{2}(t)^{2}-6ga_{3}(t)a_{2}(t)+2g^{2}a_{1}(t )a_{3}(t)^{2}+2a_{1}(t)}{2}, \tag{2.24}\] \[\dot{p}_{a_{2}}(t) =-\frac{2g^{2}a_{2}(t)a_{1}(t)^{2}-6ga_{3}(t)a_{1}(t)+2g^{2}a_{2}(t )a_{3}(t)^{2}+2a_{2}(t)}{2},\] (2.25) \[\dot{p}_{a_{3}}(t) =-\frac{2g^{2}a_{3}(t)a_{1}(t)^{2}-6ga_{2}(t)a_{1}(t)+2g^{2}a_{3}(t )a_{2}(t)^{2}+2a_{3}(t)}{2}. \tag{2.26}\] These equations follow, via Hamilton's equations, from the Hamiltonian \[H_{0}=\frac{1}{2}\left(p_{a_{1}}^{2}+p_{a_{2}}^{2}+p_{a_{3}}^{2}\right)+\frac{1}{2 }\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}-6ga_{1}a_{2}a_{3}+g^{2}(a_{1}^{2}a_{2}^{2}+a_ {2}^{2}a_{3}^{2}+a_{3}^{2}a_{1}^{2})\right), \tag{2.27}\] which is simply the full Hamiltonian (2.15), with the \(p_{\omega_{i}}\) fixed to zero. For the remainder of this article, we will always assume zero angular momentum and work exclusively with equations (2.23)-(2.27).
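For later reference, the spin-0 equations are straightforward to integrate numerically. The sketch below (ours, with \(g=1\) and an arbitrary initial condition) uses scipy and monitors conservation of (2.27) as a sanity check:

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 1.0

def H0(y):
    a1, a2, a3, p1, p2, p3 = y
    V = 0.5 * (a1**2 + a2**2 + a3**2 - 6*g*a1*a2*a3
               + g**2 * (a1**2*a2**2 + a2**2*a3**2 + a3**2*a1**2))
    return 0.5 * (p1**2 + p2**2 + p3**2) + V

def rhs(t, y):
    # Spin-0 EOM (2.23)-(2.26).
    a1, a2, a3, p1, p2, p3 = y
    return [p1, p2, p3,
            -(g**2*a1*(a2**2 + a3**2) + a1 - 3*g*a2*a3),
            -(g**2*a2*(a3**2 + a1**2) + a2 - 3*g*a3*a1),
            -(g**2*a3*(a1**2 + a2**2) + a3 - 3*g*a1*a2)]

y0 = np.array([0.3, 0.1, -0.2, 0.0, 0.4, 0.1])   # arbitrary initial condition
sol = solve_ivp(rhs, (0, 100), y0, rtol=1e-10, atol=1e-12)
print(f"E = {H0(y0):.6f}, energy drift = {abs(H0(sol.y[:, -1]) - H0(y0)):.2e}")
```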
## 3 Symmetries of the Spin-0 Sector

### 3.1 The Action of the Tetrahedral Group

Since the Hamiltonian is independent of the 'angular' coordinates \(R\) and \(S\), the non-trivial dynamics is in the evolution of the \(a_{i}\)'s. Remarkably, Hamiltonian (2.27) is _further_ invariant under the action of a discrete group. Explicitly, the action of an arbitrary element of this discrete symmetry group on the phase space variables is given by compositions of the following: 1. \(a_{i}\to a_{P(i)},p_{a_{i}}\to p_{a_{P(i)}}\), where \(P\) is an element of the permutation group \(S_{3}\). 2. \(a_{i}\to s_{i}a_{i}\), \(p_{a_{i}}\to s_{i}p_{a_{i}}\), where \(s_{i}\) is \(-1\) for two values of \(i\) and \(1\) for the remaining \(i\). For example, \((a_{1},a_{2},a_{3})\to(a_{1},-a_{2},-a_{3}),\ (p_{a_{1}},p_{a_{2}},p_{a_{3}})\to(p_{a_{1}},-p_{a_{2}},-p_{a_{3}})\). Transformations of the second kind form a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) subgroup of the full symmetry group, while transformations of the first kind constitute an \(S_{3}\) subgroup. The two sets of transformations clearly do not commute with each other. The full symmetry group can in fact be shown to be a semi-direct product of these two subgroups and is isomorphic to the tetrahedral group \(T_{d}\). The Hamiltonian further possesses an additional \(\mathbb{Z}_{2}\) time-reversal symmetry \(T:p_{a_{i}}\to-p_{a_{i}}\). Along with the time-reversal group \(\mathcal{T}\), the full discrete symmetry group of the spin-0 sector is thus \(T_{d}\times\mathcal{T}\). We emphasize that the \(T_{d}\) symmetry of Hamiltonian (2.27) is a non-trivial consequence of the SVD and in particular, bears no relation to the continuous rotational symmetries of the original Lagrangian (2.1). This unexpected symmetry will play a crucial role in understanding the dynamics of the spin-0 sector in several ways, and will in particular hand us far more analytic control than is usually available in non-linear systems.

### 3.2 Equipotential Surfaces of the Spin-0 Sector

The tetrahedral symmetry is best seen by looking at the equipotential surfaces of the Hamiltonian of the spin-0 sector. Equipotential surfaces for various energies have been displayed in Figure 1. There are two points of interest to note: 1. The tetrahedral symmetry, while present at all energies, is less visible at intermediate energies (Figure 1c) and apparently transitions to an octahedral symmetry at high energies (Figure 1d). This transition is only approximate, and can be attributed to the decreasing significance of the cubic term in the potential at high energies. 2. The _topology_ of the equipotential surface changes as we cross a certain critical energy \(E_{c}\). Equipotentials at 'subcritical' energies (Figure 1a) are disconnected and are composed of a central lobe and a set of four side lobes. 'Supercritical' equipotentials (Figures 1b-1d), in contrast, are connected surfaces in configuration space. From the geometry of the equipotentials, it is clear that the critical energy \(E_{c}\) is the value of the potential at its saddle points; equivalently, it is the energy at which the equation \(V(a,a,a)=E\) has exactly one solution on the diagonal segment between the two potential wells. Solving this, we obtain \(E_{c}=\frac{3}{32g^{2}}\). Given this topological feature of the potential, it is natural to partition the dynamics into 'subcritical' and 'supercritical' regimes, and study each one separately. Indeed, we will later find that the dynamics of the two regimes are quite different, with each zone displaying peculiarities of different kinds.
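The value of \(E_{c}\) can be confirmed directly: with \(g=1\), the point \((\frac{1}{2},\frac{1}{2},\frac{1}{2})\) is a stationary point of the potential in (2.27), its Hessian has eigenvalues of mixed sign, and the potential there equals \(\frac{3}{32}\). A minimal numerical check (ours):

```python
import numpy as np

def V(a1, a2, a3):
    return 0.5 * (a1**2 + a2**2 + a3**2 - 6*a1*a2*a3
                  + a1**2*a2**2 + a2**2*a3**2 + a3**2*a1**2)

def gradV(a1, a2, a3):
    return np.array([a1 - 3*a2*a3 + a1*(a2**2 + a3**2),
                     a2 - 3*a3*a1 + a2*(a3**2 + a1**2),
                     a3 - 3*a1*a2 + a3*(a1**2 + a2**2)])

# Analytic Hessian of V on the diagonal a_1 = a_2 = a_3 = a.
a = 0.5
d, o = 1 + 2*a**2, -3*a + 2*a**2
hess = np.array([[d, o, o], [o, d, o], [o, o, d]])

print(gradV(a, a, a))            # [0, 0, 0]: a stationary point
print(np.linalg.eigvalsh(hess))  # [-0.5, 2.5, 2.5]: mixed signs, a saddle
print(V(a, a, a), 3/32)          # 0.09375 = E_c
```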
### 3.3 Symmetries and Periodic Orbits

The EOMs (2.23)-(2.26) are highly nonlinear and, as we shall see, lead to chaotic dynamics. Chaotic Hamiltonian systems are frequently studied using techniques closely associated with the Kolmogorov-Arnold-Moser (KAM) theorem [7, 8, 9]. Such KAM investigations involve splitting the Hamiltonian into an integrable portion and non-integrable perturbations, and then using perturbative methods to study the dynamical effects of these corrections. The Hamiltonian (2.27) governing the dynamics of the spin-0 sector has a natural interpretation as a perturbed system of three decoupled simple harmonic oscillators (SHOs), with \(g\) playing the role of a perturbation parameter. However, it turns out that \(g\) is _not_ the ideal candidate for the perturbation parameter. To see this, we note that if the 6D phase space vector \((a_{1}(t),a_{2}(t),a_{3}(t),p_{a_{1}}(t),p_{a_{2}}(t),p_{a_{3}}(t))\) is a solution to the EOM with \(g=1\) and energy \(E\), then \((\frac{a_{i}}{g},\frac{p_{a_{i}}}{g})\) is also a solution to the EOM with coupling \(g\) and energy \(\frac{E}{g^{2}}\). As a result, the qualitative features of solutions - orbit shapes, time averages, measures of chaos/stability, to name a few - depend not on the specific values of energy and coupling, but a particular combination thereof. The above scaling argument shows that \(g^{2}E\) is the correct choice. Thus we may as well set \(g\) to 1 and observe the entire spread of dynamics by varying just the energy. It is worth emphasising that with this convention, we have \(E_{c}=\frac{3}{32}\). By Weinstein's theorem, Hamiltonian systems possess periodic orbits sufficiently close to an integrable limit [6]. Models with tetrahedral symmetry have been thoroughly studied and their orbits classified in [10, 13]. Similar approaches involving simplification of periodic orbit analysis by discrete group symmetries have been applied to the Henon-Heiles system [11]. In fact, the spin-0 sector of the full matrix model can itself be regarded as an instance of a specific class of higher dimensional analogs of the Henon-Heiles system, first put forward in [13]. The \(T_{d}\times\mathcal{T}\) symmetry of the Hamiltonian of the spin-0 sector implies the existence of multiple families of periodic orbits. Most of these orbits persist at low energies, but get destroyed on increasing energy and moving away from the integrable limit. We will refer to these as _non-linear normal modes_ (NLNMs). The NLNMs of the spin-0 sector can be classified by symmetry properties. More precisely, the NLNMs may be classified according to their stabilizers \(G\). They fall into five classes, listed in Table 1 (here \(\mathcal{T}_{2}=\{1,C_{2}T\}\) and \(\mathcal{T}_{s}=\{1,C_{s}T\}\)).

Figure 1: Configuration space equipotentials of the spin-0 sector Hamiltonian

The presence of NLNMs is formally established by considering a 'reduced' phase space, obtained by quotienting the full six-dimensional phase space by the orbits of the decoupled SHO limit. Correspondences can then be drawn between properties of objects living in the original phase space and their counterparts residing on the reduced phase space. In particular, the above NLNMs of (2.27) can be mapped to critical points of an appropriate Hamiltonian living in the reduced phase space. Morse theoretic methods can then be used to demonstrate the existence of fixed points of the reduced Hamiltonian, or alternately NLNMs of the full Hamiltonian (2.27). An additional family of twelve orbits corresponding to non-critical points of the Hamiltonian, with stabilizer \(C_{s}\wedge\mathcal{T}_{2}\), can also be shown to exist for the spin-0 sector. The full details of this procedure can be found in [13]. Representative plots for each family of orbits have been shown in Figure 2.

\begin{table} \begin{tabular}{|l|l|l|} \hline Conjugacy class of stabilizer & Shorthand notation & Number of modes \\ \hline \(D_{2d}\times\mathcal{T}\) & \(A_{4}\) & 3 \\ \hline \(C_{3v}\times\mathcal{T}\) & \(A_{3}\) & 4 \\ \hline \(C_{2v}\times\mathcal{T}\) & \(A_{2}\) & 6 \\ \hline \(S_{4}\wedge\mathcal{T}_{2}\) & \(B_{4}\) & 6 \\ \hline \(C_{3}\wedge\mathcal{T}_{s}\) & \(B_{3}\) & 8 \\ \hline \end{tabular} \end{table} Table 1: Periodic Orbits of the Spin-0 Sector

Figure 2: Configuration Space Plots of NLNMs

### 3.4 Nested Non-Linearity and Reduced Dynamical Systems

One would expect the large dimensionality of the six-dimensional phase space to present serious difficulties. Once again, the symmetries of the spin-0 sector come to our aid. They do so by essentially constraining trajectories to lower dimensional subsets of the full phase space. Trajectories constrained in such a manner can then be described by the dynamics of a _reduced_ system living on a lower dimensional subset of the full phase space. Happily, it turns out that a thorough study of relevant reduced dynamics is, with some modifications, enough to reproduce several salient features of the _full_ six-dimensional model. As an example of reduced dynamics, let us consider trajectories with all \(a_{i}\)'s initially set to a common value \(a_{0}\) and all \(p_{a_{i}}\)'s initially equal to a common value \(p_{a_{0}}\). The tetrahedral symmetry of the EOM ensures that these relations will be undisturbed by time evolution. Such trajectories form a subclass of all the possible orbits and are solutions of a reduced system nested in the full model. This reduced system is governed by the dynamical equations \[\dot{a}(t)=p_{a}(t),\quad\dot{p}_{a}(t)=-a(t)+3a(t)^{2}-2a(t)^{3}, \tag{3.1}\] where \(a\) and \(p_{a}\) denote the common values of the coordinates and momenta respectively.
This is simply the dynamics of a particle in the one dimensional double well \(V_{DW}(a)=\frac{1}{2}(a(a-1))^{2}\). Formally, subsets of the phase space which are mapped to (subsets of) themselves by time evolution are referred to as _invariant sets_. We have thus simply identified a two dimensional invariant subset of our model - the set of phase space points with all coordinates equal and all momenta equal. Note that the dynamics in this invariant set is governed by a Hamiltonian, in fact the Hamiltonian obtained by setting coordinates and momenta in (2.27) to a common pair \(a,p_{a}\). In this case, the resulting reduced dynamics is regular, as it should be - the reduced Hamiltonian is two-dimensional and therefore integrable. A far more interesting invariant set is obtained by setting just _two_ of the coordinates and their corresponding momenta to common values. Once again, the \(T_{d}\) symmetry of the EOM (2.23)-(2.26) renders these relations time invariant. Assuming, without loss of generality, that \(a_{1}\) serves as the 'lone' coordinate, so that \(a_{2}=a_{3}=a\) and \(p_{a_{2}}=p_{a_{3}}=p_{a}\), the equations governing the reduced dynamics are then \[\dot{a}_{1}(t)=p_{a_{1}}(t),\quad\dot{p}_{a_{1}}(t)=-a_{1}(t)(1+2a(t)^{2})+3a(t)^{2}, \tag{3.2}\] \[\dot{a}(t)=p_{a}(t),\quad\dot{p}_{a}(t)=-a(t)(1+a_{1}(t)^{2}+a(t)^{2})+3a_{1}(t)a(t). \tag{3.3}\] The reduced dynamics in this case resides on a four dimensional subset of the phase space, specifically the subset defined by the relations \(a_{2}=a_{3}\) and \(p_{a_{2}}=p_{a_{3}}\). We shall henceforth distinguish the full six dimensional dynamics from these reduced four dimensional subsystems by referring to the latter as 'Reduced Dynamical Systems' (RDSs). In particular, we can choose to fix any two coordinates (and their corresponding momenta) equal to one another and the resulting reduced dynamics for any choice will qualify as an RDS. Since any two choices are related by a symmetry transform, we will fix the convention \(a_{2}=a_{3}=a\) and \(p_{a_{2}}=p_{a_{3}}=p_{a}\) for any explicit computations hereafter. As it turns out, several of the NLNMs are constrained to lie on RDS subspaces. For this reason, a thorough study of the RDSs suffices to explain a good fraction of the full six-dimensional dynamics. Surprisingly, the RDS dynamics _also_ have ties to the quantum phases of the spin-0 sector of the \(SU(2)\) matrix model, as we shall later see.
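The invariance of the RDS subset under the full six-dimensional flow is easy to confirm numerically. In the following sketch (ours, \(g=1\)), the differences \(a_{2}-a_{3}\) and \(p_{a_{2}}-p_{a_{3}}\) remain zero to within integration error:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # Spin-0 vector field (2.23)-(2.26) with g = 1.
    a1, a2, a3, p1, p2, p3 = y
    return [p1, p2, p3,
            -(a1*(a2**2 + a3**2) + a1 - 3*a2*a3),
            -(a2*(a3**2 + a1**2) + a2 - 3*a3*a1),
            -(a3*(a1**2 + a2**2) + a3 - 3*a1*a2)]

# Initial condition on the RDS subset: a_2 = a_3 and p_{a_2} = p_{a_3}.
y0 = [0.4, 0.25, 0.25, -0.1, 0.3, 0.3]
sol = solve_ivp(rhs, (0, 200), y0, rtol=1e-11, atol=1e-13)

print(np.max(np.abs(sol.y[1] - sol.y[2])),   # a_2 - a_3 stays ~0
      np.max(np.abs(sol.y[4] - sol.y[5])))   # p_{a_2} - p_{a_3} stays ~0
```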
## 4 Periodic Orbits and their Classification

Having built up the kinematical aspects of the model, we shall now proceed with our analysis in the following three stage fashion: 1. Enumerate the periodic orbits and understand their geometry and dynamics. This requires some qualification, which we provide below. 2. Individually study their stability and destabilization. 3. Correlate the destabilization of these orbits with the generically observed chaotic dynamics. The lack of analytic control inherent to non-linear systems makes it impossible to identify _all_ of the periodic orbits. For our purposes however, it will suffice to confine our attention to those orbits whose destabilization has noticeable imprints on the chaotic dynamics. As it turns out, these sets of orbits are composed of NLNMs _and_ two sets of orbits that stem from geometric rather than group-theoretic considerations. These two families of 'geometric' orbits, along with the NLNMs, can together provide convincing explanations for all the observed peculiarities of the chaotic dynamics, and will thus be the focus of our study. We thus begin with an analysis of the various classes of NLNMs, following which we shall briefly explore the origins and properties of the geometric orbits.

### 4.1 NLNMs

We find numerically that all but two families of NLNMs exist only at low energies and are rapidly destroyed as we move away from the integrable regime. Only the \(A_{3}\) and the \(A_{4}\) orbits are present at _all_ energies (they are protected by their high symmetry), and their stability properties display surprising subtleties. We will elaborate on this in section 5. We will thus devote individual subsections to each of these classes, and follow this up with an enumeration of the basic properties of the remaining NLNMs.

#### 4.1.1 \(A_{4}\) Orbits

While the equations of motion (2.23)-(2.26) are highly non-linear, _all_ non-linear corrections to a given coordinate's evolution involve _only_ the remaining two coordinates - there are no non-linear 'self interactions'. As a result, setting two of the coordinates to zero at some point in time renders the _instantaneous evolution_ of the last coordinate purely harmonic. In fact, by setting their corresponding momenta to zero as well, we can actually 'freeze' these coordinates at zero and render the dynamics of the third 'lone' coordinate _completely_ harmonic. Such trajectories are classified as \(A_{4}\) orbits, and despite their characterization as NLNMs, evolve harmonically with time. Mathematically, the \(A_{4}\) orbits evolve as \((a_{i}(t),p_{a_{i}}(t))=(A\sin(t+\phi),0,0,A\cos(t+\phi),0,0)\), or suitable permutations thereof. Individual orbits of the \(A_{4}\) type are thus completely specified by an amplitude \(A\) (having energy \(E=\frac{A^{2}}{2}\)) and a phase \(\phi\). \(A_{4}\) orbits clearly exist at all energies and, for subcritical energies, are confined to the central lobe of the allowed configuration space. A representative orbit is shown in Figure 2a. Since the \(A_{4}\) orbits have two coordinates and their corresponding momenta set to zero, they lie on RDS subspaces. More precisely, each RDS possesses harmonic orbits with the common coordinate and the common momentum frozen to zero. The projection of an \(A_{4}\) orbit onto the corresponding RDS is shown alongside the relevant constant energy RDS hypersurface in Figure 3.

#### 4.1.2 \(A_{3}\) Orbits

We have already encountered \(A_{3}\) orbits earlier in equation (3.1). Their dynamics is governed by a double well potential \(V_{DW}(a)=\frac{1}{2}(a(a-1))^{2}\). These trajectories and their images under the \(T_{d}\times\mathcal{T}\) action are collectively referred to as \(A_{3}\) orbits. Initial conditions depicting an \(A_{3}\) orbit must thus be of the form \((a_{i},p_{a_{i}})=(a_{0},a_{0},a_{0},p_{a_{0}},p_{a_{0}},p_{a_{0}})\) (or its transform under \(T_{d}\times\mathcal{T}\)). As with \(A_{4}\), individual \(A_{3}\) orbits are uniquely specified by the two parameters \(a_{0}\) and \(p_{a_{0}}\), which together fix both the energy of the orbit and a suitable zero reference. As solutions to a quartic potential, the \(A_{3}\) orbits are periodic and their time evolution may be expressed in terms of elliptic integrals. Additionally, depending on whether or not the reduced energy exceeds the barrier height \(V_{DW}(\frac{1}{2})=\frac{1}{32}\) (equivalently, whether \(E\) exceeds \(E_{c}\) in the full model), trajectories either spread across both basins of the double well (the 'supercritical' regime) or lie confined to one of the two basins (the 'subcritical' case).
Correspondingly, the matrix model possesses _subcritical_ \(A_{3}\) orbits for all \(E<E_{c}\), confined to either the central lobe or one of the side lobes, and supercritical orbits for all \(E>E_{c}\), which live in both central and side lobes. Symmetry considerations tell us that we have eight \(A_{3}\) orbits for any subcritical energy and four for any supercritical energy. A representative subcritical orbit is shown in Figure 2b. Once again, these orbits can be embedded in RDSs with the lone and common coordinates (and momenta) set equal to one another. The projection of an \(A_{3}\) orbit onto the corresponding RDS is shown alongside the relevant constant energy RDS hypersurface in Figure 4.

Figure 3: RDS Projected \(A_{4}\) Orbit

Figure 4: RDS Projected \(A_{3}\) Orbit

#### 4.1.3 Other NLNMs

Unlike the \(A_{4}\) or \(A_{3}\) orbits, the remaining classes of NLNMs exist only for low energies and are rapidly destroyed as we leave the integrable regime. At energies where they _do_ exist, initial conditions leading to such orbits can be implicitly specified by relations between the coordinates and momenta derived from [13]. In the list below, we enumerate the required relations for each class of orbits. We also list the numerically obtained energies at which these orbits cease to exist. 1. \(A_{2}\): \(a_{1}=p_{a_{1}}=0,a_{2}=a_{3},p_{a_{2}}=p_{a_{3}}\) and \(T_{d}\times\mathcal{T}\) transformations thereof. These orbits are destroyed at \(E\simeq 0.001\). 2. \(B_{4}\): \(a_{1}=p_{a_{1}}=0,a_{2}=-p_{a_{3}},a_{3}=+p_{a_{2}}\) and \(T_{d}\times\mathcal{T}\) transformations thereof. These orbits are destroyed at \(E\simeq 0.01\). 3. \(B_{3}\): \(a_{2}=\frac{1}{2}\left(-a_{1}+\sqrt{3}p_{a_{1}}\right),a_{3}=\frac{1}{2}\left(-a_{1}-\sqrt{3}p_{a_{1}}\right),p_{a_{2}}=\frac{1}{2}\left(-\sqrt{3}a_{1}-p_{a_{1}}\right),p_{a_{3}}=\frac{1}{2}\left(\sqrt{3}a_{1}-p_{a_{1}}\right)\) and \(T_{d}\times\mathcal{T}\) transformations thereof. These orbits are destroyed at \(E\simeq 0.01\). 4. Non-critical NLNMs: \(a_{2}=a_{1},a_{3}=\sqrt{5}p_{a_{1}},p_{a_{2}}=p_{a_{1}},p_{a_{3}}=-\sqrt{5}a_{1}\) and \(T_{d}\times\mathcal{T}\) transformations thereof. These orbits are destroyed at \(E\simeq 0.006\). Again, individual orbits of each class are uniquely specified by two parameters, which together fix the energy and provide a suitable zero-reference. Representative figures are shown in Figures 2c-2f. Amongst these classes of orbits, only the \(A_{2}\) and non-critical orbits have two coordinates and their corresponding momenta set to common values and thus possess RDS analogs. The \(B_{4}\) and \(B_{3}\) orbits, by contrast, are genuinely non-planar NLNMs.

### 4.2 Geometric Orbits

The method we will utilize for finding geometric orbits was first used in the context of the Henon-Heiles system [12]. As stated earlier, the study of the NLNMs alone is not sufficient for a comprehensive understanding of the dynamics. We also find two families of geometric orbits which do not arise from stabilizer subgroups of the full \(T_{d}\times\mathcal{T}\) action. We call them geometric because they emerge from constraints imposed by the requirement of continuity of certain phase space observables over equipotentials of the _RDSs_. In contrast to the NLNMs, the geometric orbits are initially defined over the RDSs and then translated to the full spin-0 sector using a canonical inclusion map.
Despite these differences, both the geometric orbits and the NLNMs have their origins in the symmetries of their respective systems. Consequently, we must begin our search for the former by investigating the symmetries of the RDSs. The tetrahedral symmetry of the spin-0 sector reduces to a more modest \(\mathbb{Z}_{2}\) symmetry for the RDSs. The sole non-trivial symmetry transformation induced by the action of this reduced symmetry group is, in phase space, simply \(a_{1}\to a_{1},a\rightarrow-a,p_{a_{1}}\to p_{a_{1}},p_{a}\rightarrow-p_{a}\). This abstract action translates to a geometric symmetry of the RDS equipotentials about the \(a_{1}\) axis. These equipotentials are described by contours of the form \[\frac{p_{a_{1}}^{2}}{2}+p_{a}^{2}+\frac{1}{2}(a_{1}^{2}+2a^{2}-6a_{1}a^{2}+2a_{1}^{2}a^{2}+a^{4})=E, \tag{4.1}\] where the LHS is simply the Hamiltonian (2.27) with the replacements \(a_{2},a_{3}\to a\) and \(p_{a_{2}},p_{a_{3}}\to p_{a}\). The structure of the equipotentials of the spin-0 sector thus directly translates to the equipotentials of the RDSs, which therefore also undergo a topology change at \(E_{c}=\frac{3}{32}\). Representative equipotentials are shown in Figure 5. The key to constructing geometric orbits lies in utilizing the symmetries of the equipotentials in conjunction with those of the trajectories. The latter can be neatly formulated in terms of relevant constructs which we term _return maps_. The return maps and the precise algorithms for constructing geometric orbits are outlined in the following subsections. Following [12], we will often refer to them as \(\Pi_{1}\) and \(\Pi_{2}\) orbits.

#### 4.2.1 \(\Pi_{1}\) Orbits

The return map required for constructing a \(\Pi_{1}\) orbit of energy \(E^{0}\) is defined over the surface of the \(E^{0}\) equipotential of the RDS. Specifically, given a point \((a_{1}^{0},a^{0})\) on this equipotential, we consider the unique trajectory starting from rest at this point, i.e. \(p_{a_{1}}^{0}=p_{a}^{0}=0\). This trajectory, or more accurately its configuration space projection, traces out a curve confined to the interior of the \(E^{0}\) equipotential which (in principle) crosses the \(a_{1}\) axis at some time \(t_{0}\). The return map \(\mathcal{R}\) is defined to output the angle made by the tangent to the curve at \(t=t_{0}\) with the \(a_{1}\) axis. The crucial observation behind constructing \(\Pi_{1}\) orbits can be concisely formulated in terms of the return map. Specifically, _points \(Q\) on the equipotential satisfying \(\mathcal{R}(Q)=\pm\frac{\pi}{2}\) generate periodic orbits_. This follows from the action of the full symmetry group \(\mathbb{Z}_{2}\times\mathcal{T}\). Consequently, the question of generating \(\Pi_{1}\) orbits reduces to one of finding solutions to the equation \(\mathcal{R}(Q)=\pm\frac{\pi}{2}\). Since we have, for each energy, a pair of \(A_{3}\) orbits yielding return map outputs of \(\frac{\pi}{4}\) and \(\frac{3\pi}{4}\), the intermediate value theorem guarantees at least one solution to the above equation. As it is a trivial task to locate the intersections of the \(A_{3}\) orbits with the \(E^{0}\) equipotential, we may then use these as reference points to initiate a binary search algorithm to obtain solutions to the above equation. Numerically, we can then establish the existence of a single \(\Pi_{1}\) orbit for any energy. These orbits, initially constructed over the RDS phase space, can be trivially extended to the full spin-0 sector.
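A rough numerical rendition (ours) of this construction: rest points on the energy-\(E\) equipotential are parameterized by a polar angle \(\theta\), the trajectory from each rest point is integrated until its first \(a=0\) crossing, and the condition \(\mathcal{R}(Q)=\pm\frac{\pi}{2}\) is recast as a root-finding problem for \(p_{a_{1}}\) at the crossing, bracketed by the two \(A_{3}\) reference angles \(\frac{\pi}{4}\) and \(\frac{3\pi}{4}\). The bracketing offsets and integration horizon below are ad hoc choices and may need tuning at other energies:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

E = 1.0   # sample (supercritical) energy, g = 1

def V(a1, a):
    return 0.5 * (a1**2 + 2*a**2 - 6*a1*a**2 + 2*a1**2*a**2 + a**4)

def rest_point(theta):
    # Radial solve for the equipotential point along direction theta.
    r = brentq(lambda r: V(r*np.cos(theta), r*np.sin(theta)) - E, 1e-9, 10.0)
    return r * np.cos(theta), r * np.sin(theta)

def rhs(t, y):
    # RDS equations (3.2)-(3.3).
    a1, a, p1, pa = y
    return [p1, pa,
            -a1*(1 + 2*a**2) + 3*a**2,
            -a*(1 + a1**2 + a**2) + 3*a1*a]

def f(theta):
    # Return p_{a_1} at the first a = 0 crossing; f = 0 <=> R(Q) = pi/2.
    a1, a = rest_point(theta)
    hit = lambda t, y: y[1]
    hit.terminal, hit.direction = True, -1.0
    sol = solve_ivp(rhs, (0, 100), [a1, a, 0.0, 0.0], events=hit,
                    rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][2]

theta_star = brentq(f, np.pi/4 + 0.01, 3*np.pi/4 - 0.01)
print(theta_star, f(theta_star))   # f ~ 0: a perpendicular crossing
```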
Figure 5: Equipotential Surfaces of an RDS

Representative pictures are shown in Figure 6.

#### 4.2.2 \(\Pi_{2}\) Orbits

A second set of geometric orbits can be constructed by formulating a different type of return map, essentially the same as our earlier one, but defined over the \(a_{1}\) axis rather than over equipotential surfaces. More precisely, given an arbitrary energy \(E_{0}\), we consider generic points on the \(a_{1}\) axis with \(p_{a_{1}}\) set to zero initially and \(p_{a}\) fixed by the energy constraint. As before, this trajectory generates a curve whose angle with the \(a_{1}\) axis is then captured by this second return map \(\mathcal{R}_{2}\). Once again, solutions to the equation \(\mathcal{R}_{2}=\pm\frac{\pi}{2}\) yield periodic orbits, this time _closed_ orbits in configuration space, which we categorize as \(\Pi_{2}\). Again, we can set up binary search methods for numerically solving the generating equation, with reference points being the intersections of the \(E_{0}\) equipotential with the \(a_{1}\) axis. Unlike the \(\Pi_{1}\) orbits however, there are no continuity arguments for justifying the presence of the \(\Pi_{2}\) orbits. Indeed, numerical evaluations tell us that the \(\Pi_{2}\) orbits cease to exist beyond a threshold energy \(E_{\Pi_{2}}\simeq 26\), a second unexpected energy scale of the spin-0 sector. That said, \(E_{\Pi_{2}}\) lies in the far supercritical regime, so that \(\Pi_{2}\) orbits do persist over a good range of energies. Representative orbits are shown in Figure 7.

Figure 6: \(\Pi_{1}\) Orbits

## 5 Monodromy Analysis of Periodic Orbits

Having enumerated the features of relevant periodic orbits, we will next outline the methods we shall use for assessing their stability. Our strategy rests on the properties of a construct known as the _monodromy matrix_ [18], which we define below. Consider an \(n\)-dimensional non-linear system \(\dot{x}(t)=F(x,t)\). Let \(x_{p}(t)\) be a periodic solution of this system with time period \(T_{p}\). An infinitesimal fluctuation \(\delta x(t)\) about \(x_{p}(t)\) can be shown to _linearly_ evolve as \[\delta\dot{x}(t)=\nabla F\big{(}x_{p}(t)\big{)}\cdot\delta x(t). \tag{5.1}\] \(\nabla F\) is simply the Jacobian \(J\) of the transformation \(x\to F(x)\). We may also express this evolution in terms of a linear time evolution operator \(U(t)\) that maps an arbitrary initial fluctuation \(\delta x(0)\) to \(\delta x(t)\). \(U\) is thus a time dependent square matrix of dimension \(n\). The monodromy matrix \(\mathcal{U}\) is then simply defined as \(\mathcal{U}\equiv U(T_{p})\). In other words, the monodromy matrix tells us what happens to an infinitesimal fluctuation as it cycles the periodic orbit once. The eigenvalues of \(\mathcal{U}\) yield information on the stability of the periodic orbits [23]. Since \(\mathcal{U}\) is a real-valued matrix, its eigenvalues must come in complex conjugate pairs. A periodic orbit is unstable iff at least one of its eigenvalues lies _strictly_ outside the unit circle \(|z|=1\). For Hamiltonian systems, the symplectic structure of the function \(F\) can be used to show that the eigenvalues of the corresponding \(\mathcal{U}\) come in reciprocal pairs: \(\frac{1}{\lambda}\) is an eigenvalue if \(\lambda\) is. In addition, Hamiltonian systems _always_ have at _least_ two unit eigenvalues [23]. The corresponding eigenvectors are either directed along the periodic trajectory or connect the periodic trajectory to one of infinitesimally higher/lower energy.
To summarize, the following properties are inherent to \(\mathcal{U}\)'s arising from Hamiltonian systems: 1. At least two eigenvalues are unity. 2. If \(\lambda\) is an eigenvalue, then so are \(\frac{1}{\lambda}\) and \(\overline{\lambda}\).

Figure 7: \(\Pi_{2}\) Orbits

Eigenvalues of \(\mathcal{U}\) are usually computed numerically, since most periodic orbits can only be found numerically to begin with. Analytic results may become available only when we have explicit expressions for the time evolution of the orbit in question. In our case, it turns out that the symmetry and the analytically tractable time evolution of the \(A_{3}\) and \(A_{4}\) orbits simplify monodromy computations enormously and some analytic statements _can_ be made. It is not possible to obtain exact expressions for the time evolution of any of the remaining NLNMs or the geometric orbits. Nevertheless, the symmetries of the latter and their persistence over a large range of energy endows them with unexpected stability properties which we explore numerically. In the subsequent subsections, we will thus extensively analyze the stability of the \(A_{3}\), \(A_{4}\) and \(\Pi\) orbits.

### 5.1 \(A_{4}\) Orbits

The \(A_{4}\) orbits are the simplest to analyze, since their harmonic nature leads to a straightforward time dependence. With our chosen conventions, we will work exclusively with \(A_{4}\) orbits that have \(a_{2}\) and \(a_{3}\) frozen to \(0\), and \(a_{1}\) varying sinusoidally with unit angular frequency. To find \(\mathcal{U}\), we must first set up the equations governing infinitesimal fluctuations about such \(A_{4}\) orbits. An arbitrary fluctuation about a generic trajectory may be quantified by a six-dimensional phase space vector \(\delta x(t)\equiv\left(\delta a_{1}(t),\delta p_{a_{1}}(t),\delta a_{2}(t),\delta p_{a_{2}}(t),\delta a_{3}(t),\delta p_{a_{3}}(t)\right)\). The fluctuation equations (5.1) and the functional form of the \(A_{4}\) orbits derived in section 4.1.1 then yield \[\delta\dot{a}_{1}(t) =\delta p_{a_{1}}(t),\quad\delta\dot{a}_{2}(t)=\delta p_{a_{2}}(t),\quad\delta\dot{a}_{3}(t)=\delta p_{a_{3}}(t), \tag{5.2}\] \[\delta\dot{p}_{a_{1}}(t) =-\delta a_{1}(t),\] (5.3) \[\delta\dot{p}_{a_{2}}(t) =-\delta a_{2}(t)(1+A^{2}\cos^{2}t)+3A\cos t\ \delta a_{3}(t),\] (5.4) \[\delta\dot{p}_{a_{3}}(t) =-\delta a_{3}(t)(1+A^{2}\cos^{2}t)+3A\cos t\ \delta a_{2}(t). \tag{5.5}\] A complete decoupling can be achieved by the canonical rotation \(a_{\pm}\equiv\frac{a_{2}\pm a_{3}}{\sqrt{2}}\). The fluctuation equations then read \[\delta\dot{a}_{1}(t) =\delta p_{a_{1}}(t),\quad\delta\dot{a}_{+}(t)=\delta p_{a_{+}}(t),\quad\delta\dot{a}_{-}(t)=\delta p_{a_{-}}(t), \tag{5.6}\] \[\delta\dot{p}_{a_{1}}(t) =-\delta a_{1}(t),\] (5.7) \[\delta\dot{p}_{a_{+}}(t) =-\Big{(}1+A^{2}\cos^{2}t-3A\cos t\Big{)}\delta a_{+}(t),\] (5.8) \[\delta\dot{p}_{a_{-}}(t) =-\Big{(}1+A^{2}\cos^{2}t+3A\cos t\Big{)}\delta a_{-}(t). \tag{5.9}\] The geometry of the \(A_{4}\) orbits thus naturally induces a separation of perturbations into 'longitudinal' modes (\(\delta a_{\pm},\delta p_{a_{\pm}}=0\)) and 'transverse' modes (\(\delta a_{1},\delta p_{a_{1}}=0\)). The fluctuation equations (5.6)-(5.9) pick out the unique basis in which the two transverse modes decouple from one another. The (almost) identical forms of the equations governing the evolution of \(\delta a_{+}\) and \(\delta a_{-}\) simply confirm that there is no discernible structural difference between the two transverse modes.
We must now attempt to make sense of the fluctuation equations (5.6)-(5.9). In principle, we could do this by using these equations to obtain formal expressions for \(\mathcal{U}\) and then numerically solve for its eigenvalues. As it turns out, the symmetries of the \(A_{4}\) orbits heavily simplify the calculations, so that a full computation of \(\mathcal{U}\) is not necessary. It is useful to view the generic fluctuation equations (5.1) as a single _matrix_ equation \(\delta\dot{x}(t)=J(t)\delta x(t)\). This has the formal solution \[\delta x(t)=T\{e^{\int_{0}^{t}J(s)ds}\}\delta x(0), \tag{5.10}\] where \(T\), the time ordering operator, accounts for the non-commutativity of \(J\)'s evaluated at different times. Since the \(A_{4}\) orbits are \(2\pi\) periodic, the monodromy matrix \(\mathcal{U}\) is simply \(T\{e^{\int_{0}^{2\pi}J(s)ds}\}\). We can obtain explicit expressions for \(J\) by reading off its matrix elements from the fluctuation equations (5.6)-(5.9). The \(J\) matrix splits as a direct sum \(J(t)=J_{1}(t)\oplus J_{+}(t)\oplus J_{-}(t)\) in the \(\{a_{1},p_{a_{1}},a_{+},p_{a_{+}},a_{-},p_{a_{-}}\}\) basis, where \[J_{1}(t) =\begin{pmatrix}0&1\\ -1&0\end{pmatrix}, \tag{5.11}\] \[J_{+}(t) =\begin{pmatrix}0&1\\ -(1+A^{2}\cos^{2}t-3A\cos t)&0\end{pmatrix},\] (5.12) \[J_{-}(t) =\begin{pmatrix}0&1\\ -(1+A^{2}\cos^{2}t+3A\cos t)&0\end{pmatrix}. \tag{5.13}\] Since the matrices \(J_{1},J_{+},J_{-}\) lie on different blocks of \(J\), \(\mathcal{U}\) also splits as \(\mathcal{U}=\mathcal{U}_{1}\oplus\mathcal{U}_{+}\oplus\mathcal{U}_{-}\), where \(\mathcal{U}_{1/+/-}=T\{e^{\int_{0}^{2\pi}J_{1/+/-}(t)dt}\}\). Since \(J_{1}\) is just \(i\) times the Pauli matrix \(\sigma_{2}\), we obtain \(\mathcal{U}_{1}=\mathbb{I}_{2}\). In fact, we could have arrived at this result without any calculation whatsoever. Since the spin-0 sector is a Hamiltonian system, the \(A_{4}\) monodromy matrix must have two eigenvectors of unit eigenvalue, one describing time-translations along a single \(A_{4}\) orbit, and the other connecting the \(A_{4}\) orbit in question to one with infinitesimally higher/lower energy. It is not hard to see that the required eigenvectors are precisely the longitudinal modes: longitudinal fluctuations with \(\delta p_{a_{1}}=0\) clearly just shift one's position along a given orbit, while longitudinal fluctuations with \(\delta a_{1}=0\) simply change the momentum slightly. This alters the energy of the trajectory while retaining its identity as an \(A_{4}\) orbit. Thus the non-trivial features of the stability of the \(A_{4}\) orbits reside in the \(\mathcal{U}_{\pm}\) matrices. We can further simplify using the symmetry between \(\delta a_{+}\) and \(\delta a_{-}\). Since \(J_{-}(t+\pi)=J_{+}(t)\) and the integral of a periodic function over a single period is independent of the lower limit of integration, we have \[\mathcal{U}_{+}=T\{e^{\int_{0}^{2\pi}J_{+}(t)dt}\}=T\{e^{\int_{\pi}^{3\pi}J_{+}(t)dt}\}=T\{e^{\int_{\pi}^{3\pi}J_{-}(t+\pi)dt}\}=T\{e^{\int_{0}^{2\pi}J_{-}(t)dt}\}=\mathcal{U}_{-}. \tag{5.14}\] The last equality makes use of the substitution \(t\to t+\pi\). Thus, while the blocks \(J_{+}(t)\) and \(J_{-}(t)\) differ in form, their time ordered integrals are exactly the same. As a result, we may confine our attention to either one of the transverse modes. There exists a final simplification.
Since \(\mathcal{U}_{+}=\mathcal{U}_{-}\) and eigenvalues of \(\mathcal{U}\) must come in conjugate pairs and reciprocal pairs, we can constrain its spectrum to be of the form \(\{1,1,\mu,\lambda,\mu,\lambda\}\), where \(\mu\) and \(\lambda\), the eigenvalues of \(\mathcal{U}_{+}\) (or \(\mathcal{U}_{-}\)), must satisfy either of the two following conditions: 1. \(\mu\) **and \(\lambda\) are real:** In this case, we have \(\mu=\frac{1}{\lambda}\). Barring the trivial cases \(\mu=\lambda=\pm 1\), either \(\mu\) or \(\lambda\) will lie outside the unit circle, leading to an unstable orbit. So \(|\lambda+\mu|=|\lambda+\frac{1}{\lambda}|>2\). 2. \(\mu\) **and \(\lambda\) are complex conjugates:** Now we have \(\lambda=\frac{1}{\mu}=\overline{\mu}\), so that \(|\mu|=|\lambda|=1\). Barring the trivial cases \(\mu=\lambda=\pm 1\), \(\mu\) and \(\lambda\) are thus complex conjugates lying _on_ the unit circle, resulting in a stable orbit. In this case, we may represent the pair \(\{\mu,\lambda\}\) as \(\{e^{i\theta},e^{-i\theta}\}\) for some \(\theta\) in \((0,2\pi)\) so that \(|\mu+\lambda|=2|\cos\theta|<2\). Thus, we see that the (in)stability of any \(A_{4}\) orbit is beautifully captured by _a single number_: \(\gamma\equiv\mu+\lambda\). Explicitly, the orbit is stable (unstable) depending on whether \(|\gamma|<2\,(>2)\), with transitions occurring when \(|\gamma|=2\). Note that in terms of \(\mathcal{U}\), we have \(\gamma=\frac{\operatorname{Tr}\mathcal{U}-2}{2}\). The stability of a periodic orbit is thus captured by the _trace_ of \(\mathcal{U}\), rather than its full spectrum. This is a standard feature of four-dimensional Hamiltonian systems [24]. No such simplifications exist for higher dimensional systems. We again emphasise that it is the special symmetries of the \(A_{4}\) orbits (and, more generally, of the spin-0 Hamiltonian (2.27)) that have produced this extreme simplification. Having substantially simplified our computations, we now turn to numerics. We compute \(\gamma\) as a function of energy in the range \(E\in(0,500)\). Figure 8 depicts \(\gamma\) as a function of energy \(E\) in the region \(0<E<E_{c}\). The key takeaway is that \(\gamma\) never dips below 2, so that subcritical \(A_{4}\) orbits are unstable without exception. Additionally, the increase of \(\gamma\) with \(E\) suggests an increase in the 'amount of instability'. This notion is indeed true, and can be precisely quantified by chaos theory measures, such as Lyapunov exponents, which we will analyze in section 6.2. Our results for subcritical \(A_{4}\) orbits are not surprising, as one would expect heightened instabilities with increasing energies. The supercritical regime displays a much more surprising behaviour, as is clear from Figure 9. From these plots, we see that the stability of supercritical \(A_{4}\) orbits is characterized by _oscillations between stability and instability_ with a monotonically decreasing frequency. These transitions seem to repeat ad infinitum. Curiously, stability plots of a very similar nature have been observed in the literature, albeit in the seemingly unrelated context of solitonic solutions of the non-linear Schrodinger equation [25]. The connections between such themes and our gauge matrix model need to be better understood. Before seeking analytic explanations for these transitions, it must be noted that the \(\gamma\)-\(E\) plots are just _one_ of many signatures of these stability flips. Indeed, we shall encounter more signatures as we proceed with our analysis.
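The quantity \(\gamma\) is readily computed numerically: one integrates the \(2\times 2\) transverse block (5.12) over one period \(2\pi\) for two basis initial fluctuations and takes the trace of the resulting \(\mathcal{U}_{+}\). A minimal sketch (ours), which at \(E=\frac{3}{2}\) should reproduce \(|\gamma|=2\) up to integration error (this transition point is derived below):

```python
import numpy as np
from scipy.integrate import solve_ivp

def gamma(E):
    A = np.sqrt(2 * E)
    def rhs(t, y):
        # Transverse block J_+(t) of (5.12) acting on (da_+, dp_{a_+}).
        da, dp = y
        return [dp, -(1 + A**2 * np.cos(t)**2 - 3*A*np.cos(t)) * da]
    U = np.zeros((2, 2))
    for k, y0 in enumerate([[1.0, 0.0], [0.0, 1.0]]):
        sol = solve_ivp(rhs, (0, 2*np.pi), y0, rtol=1e-11, atol=1e-13)
        U[:, k] = sol.y[:, -1]        # column k of the monodromy block
    return np.trace(U)

for E in [0.05, 1.5, 2.0, 5.0]:
    print(E, gamma(E))
```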
One particular signature, however, is worthy of immediate attention. Since the \(a_{i}\)'s are after all the fundamental observables of our theory, it is natural to look for the imprints of these stability flips on their time evolution. Parametric plots of trajectories in configuration space provide a beautiful way to illustrate these effects. We construct configuration space parametric plots at energies marginally above and marginally below a transition energy, with initial conditions deviating very slightly from the initial conditions required for relevant \(A_{4}\) orbits. The results are displayed for the first transition point \(E=\frac{3}{2}\) (we will derive this value later) in Figure 10. We see that energies _marginally_ above \(E=\frac{3}{2}\) yield perfectly regular trajectories barely distinguishable from their \(A_{4}\) parent orbits, while energies _marginally_ below the transition point yield chaotic trajectories which rapidly fill a sizeable fraction of the available configuration space. Figure 11 shows an analogous flip in stability in the opposite direction (stable below the transition energy, unstable above it). Note, in this case, that the transition point is approximate.

Figure 8: \(\gamma\) vs \(E\) for subcritical \(A_{4}\) orbits

Figure 9: \(\gamma\) vs \(E\) for supercritical \(A_{4}\) orbits

We shall now use the fluctuation equations (5.2) to better understand the stability and establish some quantitative results. It is useful to eliminate the \(\delta p\)'s from the fluctuation equations and regard them as second order in \(\delta a\)'s. The equivalence of \(a_{+}\) and \(a_{-}\) fluctuations means that we may restrict our studies to just one of these modes. Without loss of generality, we choose to work with the \(a_{+}\) modes, whose fluctuation equation reads \[\delta\ddot{a}_{+}(t)+\big{(}1-3A\cos t+A^{2}\cos^{2}t\big{)}\delta a_{+}(t)=0 \tag{5.15}\]

Figure 10: Unstable to stable flip for \(A_{4}\)

Figure 11: Stable to unstable flip for \(A_{4}\)

Rescaling \(t=2s\), we obtain \[\delta\ddot{a}_{+}(s)+\big{(}\eta+2\alpha\cos 2s+2\beta\cos 4s\big{)}\delta a_{+}(s)=0, \tag{5.16}\] with \(\alpha=-6A,\beta=A^{2}\) and \(\eta=4+2A^{2}\), which is the standard form of the Whittaker-Hill (WH) equation (see [26] for example). This WH equation has exactly the form of a Schrodinger equation with a periodic potential, typically encountered in Bloch theory of solids. Hence we expect to see a band structure, with bands and band gaps corresponding to stability and instability. Floquet theory tells us that _any_ solution to the WH equation can be expressed in the form \(e^{px}g(x)\) for some complex number \(p\) and a periodic function \(g(x)\). This is usually about as far as we can go, as closed form expressions are generally not available. However, we can make progress towards finding the locations of transition points, as this requires a study of only the _periodic_ solutions to the WH equation. This is because \(|\gamma|=2\) corresponds to \(\mathcal{U}_{\pm}\) being \(\pm\mathbb{I}_{2}\), which in turn leads to periodic behaviour of the fluctuations. The WH equation is usually solved by an expansion into a sine or cosine series followed by solving recursion relations that emerge between the Fourier coefficients. As such an approach no doubt reminds the reader of the more common Frobenius methods, it is natural to question whether we can carry over techniques from power series expansions to our case.
In particular, since Frobenius type problems often have parameter choices that lead to _finite_ termination of the recursion series, we may naturally wonder whether such truncations are possible for the WH equation too. This is unfortunately not the case, as the pertinent recurrence relations involve _five_ coefficients at a time. However, a remarkable transformation, \(\delta z(s)\equiv\delta a_{+}(s)e^{\sqrt{\beta}\cos 2s}\), of our WH equation yields the differential equation [26, 27] \[\delta\ddot{z}(s)+4\sqrt{\beta}\sin 2s\,\delta\dot{z}(s)+[\eta+2\beta+\big{(}2\alpha+4\sqrt{\beta}\big{)}\cos 2s]\,\delta z(s)=0, \tag{5.17}\] the _Ince equation_, which can be solved by _three_ term recursions. If \(\beta=\frac{\alpha^{2}}{4(p+1)^{2}}\) for some \(p\in\mathbb{Z}^{+}\), \(\eta\) can be chosen in order to make the recursion relation eventually terminate. In such situations, the Ince equation possesses finite series solutions, known as Ince polynomials, which can then be recast, via the \(\delta z\to\delta a_{+}\) transform, to closed periodic solutions (though not _polynomial_ solutions) of the WH equation. In our case, the coefficients \(\alpha,\beta\) and \(\eta\) are _additionally_ constrained to be related to one another via the amplitude \(A\). It turns out that \(\alpha\) and \(\beta\) indeed satisfy the necessary relations for finite solutions, with \(p=2\). However, the restrictions on \(\eta\) only grant us finite solutions for _one_ value: \(A=\sqrt{3}\). This corresponds to a stability flip at \(E=\frac{A^{2}}{2}=\frac{3}{2}\). The corresponding Ince polynomial can be worked out to be \(1+\frac{2}{\sqrt{3}}\cos 2s\). Reverting to the WH equation, we obtain \[\delta a_{+}(s)=\left(1+\frac{2}{\sqrt{3}}\cos 2s\right)e^{-\sqrt{3}\cos 2s}. \tag{5.18}\] A second, linearly independent solution of the WH equation at \(E=\frac{3}{2}\) can be obtained using the well-known variation of parameters method. Suitably applied to our case, this method tells us that if \(w_{1}(s)\) is a solution to the WH equation at \(E=\frac{3}{2}\), then so is \(w_{1}(s)\int_{0}^{s}(1/w_{1}(t)^{2})dt\). We may thus write a second independent solution to the WH equation at \(E=\frac{3}{2}\) in quadrature form as \[\delta a_{+}(s)=\left(1+\frac{2}{\sqrt{3}}\cos 2s\right)e^{-\sqrt{3}\cos 2s}\int_{0}^{s}\frac{e^{2\sqrt{3}\cos 2s}}{\left(1+\frac{2}{\sqrt{3}}\cos 2s\right)^{2}}ds \tag{5.19}\] This integral unfortunately cannot be evaluated in terms of elementary functions, but we nevertheless have a passable inventory of the solutions of the WH equation at this energy. The numerically obtained \(\gamma\)-\(E\) plots confirm that \(E=\frac{3}{2}\) is indeed a transition point. While we cannot evaluate the precise locations of any other transition points, it is possible to ascertain their asymptotic behaviour. We see that at large enough energies, the coefficient of \(\cos 2s\) in (5.16) grows far more slowly (as a function of \(A\)) than either of the other two coefficients, and so becomes negligible in comparison. We can therefore derive asymptotic expressions for transition points by neglecting this term in the large \(A\) limit. Reverting back to \(t=2s\), we then see that the WH equation reduces to the far simpler Mathieu equation \[\delta\ddot{a}_{+}(t)+\left(1+\frac{A^{2}}{2}+\frac{A^{2}}{2}\cos 2t\right)\delta a_{+}(t)=0. \tag{5.20}\] Our problem now simplifies to studying the periodic solutions of the _Mathieu_ equation (see [26] for example).
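Before turning to the Mathieu problem, the closed form (5.18) can be spot-checked numerically against the WH equation (5.16) at \(A=\sqrt{3}\). The sketch below (ours) uses a central finite difference for the second derivative, so the residual is limited by the difference error rather than by the solution itself:

```python
import numpy as np

A = np.sqrt(3.0)
eta, alpha, beta = 4 + 2*A**2, -6*A, A**2

def w(s):
    # The Ince-polynomial solution (5.18).
    return (1 + (2/np.sqrt(3))*np.cos(2*s)) * np.exp(-np.sqrt(3)*np.cos(2*s))

s = np.linspace(0.1, 3.0, 7)
h = 1e-5
w_ss = (w(s + h) - 2*w(s) + w(s - h)) / h**2       # central difference
residual = w_ss + (eta + 2*alpha*np.cos(2*s) + 2*beta*np.cos(4*s)) * w(s)
print(np.max(np.abs(residual)))   # ~1e-7, i.e. finite-difference error
```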
While still non-trivial, this is at least a well documented problem with a few known simple analytical results. In general, the Mathieu equation in its standard form \[\delta\ddot{y}(x)+\big{(}a-2q\cos 2x\big{)}y(x)=0 \tag{5.21}\] has periodic solutions only for a special set of parameter values \((a,q)\). These sets are described by two _Mathieu characteristic functions_, a pair of functions defined from \(\mathbb{Z}\times\mathbb{R}\) to \(\mathbb{R}\) that take in a pair \((n,q)\) and yield a unique value for \(a\), which in turn gives the \(n^{\rm th}\) (odd/even) Mathieu function as a periodic solution to the Mathieu equation with parameter set \((a,q)\). Given that \((a,q)=(1+\frac{A^{2}}{2},-\frac{A^{2}}{4})\) for us, we see that the locations of the \(n^{\rm th}\) family of transition points are asymptotically given by solutions to the equations \[\xi_{1}(n,-\frac{A^{2}}{4})=1+\frac{A^{2}}{2} \tag{5.22}\] and \[\xi_{2}(n,-\frac{A^{2}}{4})=1+\frac{A^{2}}{2} \tag{5.23}\] where \(\xi_{1,2}\) are the Mathieu characteristic functions of the first/second kind. Recasting the above equations in terms of the energy \(E\), we obtain \[\xi_{1}(n,-\frac{E}{2})=1+E \tag{5.24}\] and \[\xi_{2}(n,-\frac{E}{2})=1+E \tag{5.25}\] Transition points computed in this manner can be compared with numerically obtained results (see Appendix A), and we observe excellent agreement between the two sets of values.

### 5.2 \(A_{3}\) Orbits

We consider the \(A_{3}\) orbits specified by initial conditions of the form \((a_{0},a_{0},a_{0},p_{a_{0}},p_{a_{0}},p_{a_{0}})\). The fluctuation equations for \(A_{3}\) orbits are \[\delta\dot{a}_{1}(t) =\delta p_{a_{1}}(t),\quad\delta\dot{a}_{2}(t)=\delta p_{a_{2}}(t),\quad\delta\dot{a}_{3}(t)=\delta p_{a_{3}}(t), \tag{5.26}\] \[\delta\dot{p}_{a_{1}}(t) =-\delta a_{1}(t)(1+2a(t)^{2})+(3a(t)-2a(t)^{2})(\delta a_{2}(t)+\delta a_{3}(t)),\] (5.27) \[\delta\dot{p}_{a_{2}}(t) =-\delta a_{2}(t)(1+2a(t)^{2})+(3a(t)-2a(t)^{2})(\delta a_{3}(t)+\delta a_{1}(t)),\] (5.28) \[\delta\dot{p}_{a_{3}}(t) =-\delta a_{3}(t)(1+2a(t)^{2})+(3a(t)-2a(t)^{2})(\delta a_{1}(t)+\delta a_{2}(t)), \tag{5.29}\] which decouple in the canonical basis \(\{b_{1},b_{2},b_{3}\}=\{\frac{a_{1}+a_{2}+a_{3}}{\sqrt{3}},\frac{a_{1}-a_{2}}{\sqrt{2}},\frac{a_{2}-a_{3}}{\sqrt{2}}\}\) into three pairs of independent equations: \[\delta\dot{b}_{1}(t) =\delta p_{b_{1}}(t),\quad\delta\dot{b}_{2}(t)=\delta p_{b_{2}}(t),\quad\delta\dot{b}_{3}(t)=\delta p_{b_{3}}(t), \tag{5.30}\] \[\delta\dot{p}_{b_{1}}(t) =-(1+6a(t)^{2}-6a(t))\delta b_{1}(t),\] (5.31) \[\delta\dot{p}_{b_{2}}(t) =-(1+3a(t))\delta b_{2}(t),\] (5.32) \[\delta\dot{p}_{b_{3}}(t) =-(1+3a(t))\delta b_{3}(t). \tag{5.33}\] We thus obtain once more a block decomposition of the full monodromy matrix \(\mathcal{U}\) into three blocks \(\mathcal{U}_{1/2/3}\). Denoting the matrices corresponding to the \(b\) variables by \(J_{1/2/3}\), we see that \(J_{1}\) (and consequently \(\mathcal{U}_{1}\)) describes the evolution of fluctuations _along_ the orbit. We therefore do not expect any non-trivial results from this sector. The \(J_{2}\) and \(J_{3}\) matrices are _manifestly_ equal. As a result, the full simplification of the previous subsection carries through for the \(A_{3}\) orbits as well. We need only study \(\gamma\), the trace of the \(\mathcal{U}_{2}\,(=\mathcal{U}_{3})\) matrix. From a physical standpoint, we thus expect the same trends as were observed for the \(A_{4}\) orbits.
Since our perturbations once again take the form of a time-independent Schrodinger equation characterized by a periodic potential (in this case expressed in terms of elliptic functions), we anticipate alternating _bands_ of stability and instability. Indeed, we find that we have oscillations between stability and instability, with the separation between adjacent transition points varying geometrically as we approach the critical energy \(E_{c}\) from either side. Specifically, we will show that 1. For subcritical energies, the quantities \(1-\frac{E_{n}^{1}}{E_{c}}\) and \(1-\frac{E_{n}^{2}}{E_{c}}\), where \(E_{n}^{1}(E_{n}^{2})\) are the energies corresponding to the \(n^{\text{th}}\) transition of \(\gamma\) from \(2^{+}\) to \(2^{-}\) (\(2^{-}\) to \(2^{+}\)), form a geometric series, with common ratio \(\delta_{1}=e^{\frac{2\pi}{\sqrt{5}}}\) as we approach \(E_{c}\) from below. 2. For supercritical energies, the quantities \(1-\frac{E_{n}^{1}}{E_{c}}\) and \(1-\frac{E_{n}^{2}}{E_{c}}\), where \(E_{n}^{1}(E_{n}^{2})\) are the energies corresponding to the \(n^{\text{th}}\) transition of \(\gamma\) from \(2^{+}\) to \(2^{-}\) (\(2^{-}\) to \(2^{+}\)), form a geometric series, with common ratio \(\delta_{2}=e^{\frac{\pi}{\sqrt{5}}}\) as we approach \(E_{c}\) from above. The plots of \(\gamma\) vs \(E\) thus exhibit a self-similar structure as shown in Figures 12 and 13. Analogous phenomena, studied in [24], were described as 'Feigenbaum-like'. While such self-similar structures and 'Feigenbaum like' oscillations have been previously observed for Hamiltonian systems [24], the spin-0 sector is, to our knowledge, unique, as it contains not one, but _two_ independent self-similar cascades, one for the supercritical and one for the subcritical regime. Furthermore, these ratios are _distinct_ (albeit simply related). We now present a rigorous derivation of the preceding results, following extensively the methods adopted in [24]. Eliminating the \(p\)'s from the fluctuation equations (5.30)-(5.33), we are left with a single non-trivial second order fluctuation equation \[\delta\ddot{a}(t)+\big{(}1+3a(t)\big{)}\delta a(t)=0. \tag{5.34}\]

Figure 12: \(\gamma\) vs \(E\) for subcritical \(A_{3}\) orbits

Figure 13: \(\gamma\) vs \(E\) for supercritical \(A_{3}\) orbits

The \(a\) appearing in the above equation describes the periodic time evolution along the \(A_{3}\) orbits, and can be explicitly expressed in terms of elliptic integrals, with time period \[T_{p}(E)=\left\{\begin{array}{ll}\left(\frac{6}{E}\right)^{1/4}F\left(\sec^{-1}\left(\frac{1+\eta}{\sqrt{\eta^{2}-1}}\right),\sqrt{\frac{1+\eta}{2}}\right),&\mbox{if }E<E_{c}\\ 2\left(\frac{6}{E}\right)^{1/4}K\left(\sqrt{\frac{1+\eta}{2}}\right),&\mbox{if }E>E_{c}\end{array}\right\}. \tag{5.35}\] Here \(F\) and \(K\) are the incomplete and complete elliptic integrals of the first kind respectively, and \(\eta\equiv\sqrt{E_{c}/E}\). These time periods diverge at \(E_{c}\), i.e. \(\eta=1\). The near critical behaviour of \(\gamma\) depends on the nature of the divergence of \(T_{p}(E)\). This is best brought out using the integral representation of \(F\): \[F(\alpha,k)=\int_{0}^{\sin(\alpha)}\frac{dx}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}}. \tag{5.36}\] We are interested in the singular behaviour of the integral as \(\alpha\rightarrow\pi/2\) and \(k\to 1\). The latter portion of the denominator splits as \((1-kx)(1+kx)\), so that when \(k\sim 1\), the divergence of the integral stems solely from the \(1-kx\) factor and the additional \(1-x\) factor from the first term under the square root.
We thus immediately see that A) the \(1+kx\) term can simply be replaced by \(1+x\), as it contributes nothing to the divergence, and B) since \(1-kx\sim 1-x\) when \(k\sim 1\), and since the product of these terms is nested under a square root, we should naively _expect_ the integral to diverge logarithmically in \(1-\sin(\alpha)\). Replacing the non singular \(1+kx\) term by \(1+x\) leads to the analytically tractable integral \[F(\alpha,k)\sim\int_{0}^{\sin(\alpha)}\frac{dx}{(1+x)\sqrt{(1-x)(1-kx)}}. \tag{5.37}\] The series of substitutions \(x\to y\equiv\frac{1}{1+x},y\to z\equiv 2y-1\) and use of standard integrals then allow us to evaluate this integral as \[F(\alpha,k)\sim\left(\frac{1}{\sqrt{2(1+k)}}\ln\Big[z+\frac{1-k}{2(1+k)}+\sqrt{\left(z+\frac{1-k}{2(1+k)}\right)^{2}-\frac{1}{4}\left(\frac{1-k}{1+k}\right)^{2}}\,\Big]\right)\Bigg{|}_{\frac{1-\sin(\alpha)}{1+\sin(\alpha)}}^{1}. \tag{5.38}\] It is easy to see that only the lower limit contributes to the divergence, so that we may further write \[F(\alpha,k)\sim-\Bigg{(}\frac{1}{\sqrt{2(1+k)}}\ln\Big{(}z+\frac{1-k}{2(1+k)}+\sqrt{\left(z+\frac{1-k}{2(1+k)}\right)^{2}-\frac{1}{4}\left(\frac{1-k}{1+k}\right)^{2}}\Big{)}\Bigg{)}\Bigg{|}_{z=\frac{1-\sin(\alpha)}{1+\sin(\alpha)}}. \tag{5.39}\] We now apply the above to (5.35): 1. **Supercritical**: Setting \(\alpha\) to \(\frac{\pi}{2}\) and \(k\) to \(\sqrt{\frac{1+\eta}{2}}\), we obtain \[T_{p}(E)\sim 2\sqrt{2}\ln\frac{1}{1-\eta}\sim 2\sqrt{2}\ln\frac{1}{1-\eta^{2}}=2\sqrt{2}\ln\frac{1}{1-\frac{E_{c}}{E}}.\] (5.40) as \(\eta\to 1^{-}\). 2. **Subcritical**: Setting \(\alpha\) to \(\sec^{-1}\left(\frac{1+\eta}{\sqrt{\eta^{2}-1}}\right)\) and \(k\) to \(\sqrt{\frac{1+\eta}{2}}\), we obtain \[T_{p}(E)\sim\sqrt{2}\ln\frac{1}{\eta-1}\sim\sqrt{2}\ln\frac{1}{\eta^{2}-1}=\sqrt{2}\ln\frac{1}{\frac{E_{c}}{E}-1}\] (5.41) as \(\eta\to 1^{+}\). We next study the variation of \(\gamma\) with the time period. Following [24], we see that \(\gamma\) can be expressed as a trigonometric Fourier series in \(T_{p}\), with the leading Fourier coefficient yielding the only non-trivial contribution in the limit of \(\eta\to 1\). We thus have \[\gamma(T_{p})\sim 2\cos\omega T_{p} \tag{5.42}\] where \(\omega\) can be worked out as follows: since in the limit of \(\eta\to 1\), \(a\) spends an increasingly large amount of time near the saddle point \(\frac{1}{2}\), we may estimate the asymptotic period by simply replacing \(a\) by \(\frac{1}{2}\) in (5.26)-(5.29). Then the dynamical equations (5.34) simply reduce to those of an oscillator with period \(\Delta T=2\sqrt{\frac{2}{5}}\pi\). Thus, we have \(\omega=\sqrt{\frac{5}{2}}\). We can now compute the geometric ratios for the \(A_{3}\) oscillations. From (5.42), we see that the transition points are evenly spaced in intervals of \(\Delta T\) when viewed as functions of the time period \(T_{p}\). The logarithmic dependence of \(T_{p}\) on \(|1-\eta|\), captured by (5.40) and (5.41), tells us that the locations of the transition points, as measured by the quantity \(1-\eta\), must asymptotically form a geometric series. It is also easily seen from (5.40) and (5.41) that the relevant common ratios are \(e^{\frac{\Delta T}{2\sqrt{2}}}=e^{\frac{\pi}{\sqrt{5}}}\) for the supercritical oscillations and \(e^{\frac{\Delta T}{\sqrt{2}}}=e^{\frac{2\pi}{\sqrt{5}}}\) for the subcritical oscillations. Figures 14 and 15 demonstrate the flip from stability to instability and vice versa as the energy is varied across a transition point.
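The logarithmic divergence (5.40) can also be checked without elliptic-function identities, by computing the \(A_{3}\) period directly as a quadrature in the reduced double well \(V_{DW}\) (reduced energy \(e=E/3\), so \(e_{c}=\frac{1}{32}\)) and fitting the slope of \(T_{p}\) against \(\ln\frac{1}{1-E_{c}/E}\). A sketch (ours; scipy's quad copes with the integrable turning-point singularities, warnings aside):

```python
import numpy as np
from scipy.integrate import quad

V = lambda a: 0.5 * (a * (a - 1))**2
e_c = 1 / 32

def period_super(e):
    # Supercritical reduced orbit spanning both basins (e > e_c):
    # outermost turning points solve a(1-a) = -sqrt(2e).
    s = np.sqrt(2 * e)
    a_minus = 0.5 * (1 - np.sqrt(1 + 4 * s))
    a_plus = 0.5 * (1 + np.sqrt(1 + 4 * s))
    f = lambda a: 2.0 / np.sqrt(2 * (e - V(a)))
    return quad(f, a_minus, a_plus, points=[0.5], limit=400)[0]

eps = np.array([1e-3, 1e-4, 1e-5])
T = np.array([period_super(e_c * (1 + x)) for x in eps])
L = np.log((1 + eps) / eps)            # = ln(1 / (1 - E_c/E))
print(np.diff(T) / np.diff(L), 2 * np.sqrt(2))   # slope -> 2*sqrt(2) ~ 2.83
```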
### \(\Pi_{1}\) Orbits

The wealth of results obtained for \(A_{3}\) and \(A_{4}\) ultimately traces back to the high symmetry of these orbits. These symmetries are enough to completely decouple the fluctuation equations, which eventually leads to a significant simplification. Since the geometric orbits do _not_ originate from Weinstein's theorem, they are less symmetric and the fluctuation equations remain partially coupled. As a result, a single spectral invariant (like \(\gamma\)) is not enough to capture the stability properties of the geometric orbits. Nevertheless, as the geometric orbits reside on RDS phase spaces, a partial decoupling of the fluctuations can indeed be accomplished, with two modes spanning fluctuations confined to the relevant RDS subspace, and the third mode generating fluctuations orthogonal to this subspace. Consequently, some simplifications can be made before reverting to numerics. The equations describing fluctuations about \(\Pi_{1}\) orbits are given by \[\delta\dot{a}_{1}(t)= \ \delta p_{a_{1}}(t),\quad\delta\dot{a}_{2}(t)=\delta p_{a_{2}}(t),\quad\delta\dot{a}_{3}(t)=\delta p_{a_{3}}(t), \tag{5.43}\] \[\delta\dot{p}_{a_{1}}(t)= -\delta a_{1}(t)[1+2a(t)^{2}]+3a(t)[\delta a_{3}(t)+\delta a_{2}(t)]-2a_{1}(t)a(t)[\delta a_{2}(t)+\delta a_{3}(t)], \tag{5.44}\] \[\delta\dot{p}_{a_{2}}(t)= -\delta a_{2}(t)[1+a(t)^{2}+a_{1}(t)^{2}]+3a(t)\delta a_{1}(t)+3a_{1}(t)\delta a_{3}(t)-2a(t)[a(t)\delta a_{3}(t)+a_{1}(t)\delta a_{1}(t)], \tag{5.45}\] \[\delta\dot{p}_{a_{3}}(t)= -\delta a_{3}(t)[1+a_{1}(t)^{2}+a(t)^{2}]+3a_{1}(t)\delta a_{2}(t)+3a(t)\delta a_{1}(t)-2a(t)[a_{1}(t)\delta a_{1}(t)+a(t)\delta a_{2}(t)], \tag{5.46}\] where \(a(t),a_{1}(t)\) and their momenta describe time evolution along the unperturbed \(\Pi_{1}\) orbit.

Figure 15: Unstable to stable flip for \(A_{3}\)

Using the canonical rotation \(a_{\pm}=\frac{a_{2}\pm a_{3}}{\sqrt{2}}\), as before, we may restate these equations as \[\delta\dot{a}_{1}(t)=\delta p_{a_{1}}(t),\quad\delta\dot{a}_{+}(t)=\delta p_{a_{+}}(t),\quad\delta\dot{a}_{-}(t)=\delta p_{a_{-}}(t), \tag{5.47}\] \[\delta\dot{p}_{a_{1}}(t)=-\delta a_{1}(t)(1+2a(t)^{2})+3a(t)(\delta a_{3}(t)+\delta a_{2}(t))-2a_{1}(t)a(t)(\delta a_{2}(t)+\delta a_{3}(t)), \tag{5.48}\] \[\delta\dot{p}_{a_{+}}(t)=-\delta a_{+}(t)(1+a(t)^{2}+a_{1}(t)^{2})+3\sqrt{2}a(t)\delta a_{1}(t)+3a_{1}(t)\delta a_{+}(t)-2\sqrt{2}a(t)a_{1}(t)\delta a_{1}(t)-2a(t)^{2}\delta a_{+}(t), \tag{5.49}\] \[\delta\dot{p}_{a_{-}}(t)=-\delta a_{-}(t)(1+a(t)^{2}+a_{1}(t)^{2})-3a_{1}(t)\delta a_{-}(t)+2a(t)^{2}\delta a_{-}(t). \tag{5.50}\] Since we had earlier restricted ourselves to a concrete instance of an RDS (see section 3.4) by fixing \(a_{2}=a_{3}\) and \(p_{a_{2}}=p_{a_{3}}\), it is evident that fluctuations with \(\delta a_{-}=\delta p_{a_{-}}=0\) yield trajectories that deviate from the \(\Pi_{1}\) orbit but are confined to the phase space of the _RDS_. On the other hand, fluctuations with \(\delta a_{-}\neq 0\) destroy the equality of \(a_{2}\) and \(a_{3}\). Such fluctuations lead to trajectories that are not confined to the RDS, but span the full six-dimensional phase space of the spin-0 sector. In short, an arbitrary fluctuation can be split into an 'orthogonal' mode perpendicular to the RDS phase space, and a pair of coupled 'tangential' modes living in the RDS phase space. The independence of the orthogonal modes from the tangential modes results in the factorization of the monodromy matrix into a 2+4 block diagonal form.
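The 2+4 factorization can be checked directly: in (5.47)-(5.50) the pair \((\delta a_{-},\delta p_{a_{-}})\) neither receives input from, nor feeds into, the other variables, so a monodromy matrix built column-by-column from unit initial fluctuations is exactly block diagonal. A minimal sketch follows; the periodic backgrounds \(a(t),a_{1}(t)\) used here are placeholders (the true ones solve the \(\Pi_{1}\) equations of motion, not restated in this section).

```python
import numpy as np
from scipy.integrate import solve_ivp

S2 = np.sqrt(2.0)

def variational_rhs(t, y, a, a1):
    """Equations (5.47)-(5.50); state y = (da1, dap, dam, dp1, dpp, dpm)."""
    da1, dap, dam, dp1, dpp, dpm = y
    A, A1 = a(t), a1(t)
    return [dp1, dpp, dpm,
            # (5.48), rewritten with da2 + da3 = sqrt(2) * dap:
            -(1 + 2*A**2)*da1 + S2*(3*A - 2*A1*A)*dap,
            # (5.49):
            -(1 + A**2 + A1**2)*dap + S2*A*(3 - 2*A1)*da1 + (3*A1 - 2*A**2)*dap,
            # (5.50): the orthogonal mode never couples to the rest
            (-(1 + A**2 + A1**2) - 3*A1 + 2*A**2)*dam]

def monodromy(T, a, a1):
    """6x6 monodromy matrix over one period T, column by column."""
    M = np.zeros((6, 6))
    for j in range(6):
        y0 = np.zeros(6); y0[j] = 1.0
        sol = solve_ivp(variational_rhs, (0.0, T), y0, args=(a, a1),
                        rtol=1e-10, atol=1e-12)
        M[:, j] = sol.y[:, -1]
    return M

# Placeholder periodic backgrounds (stand-ins, not the actual Pi_1 orbit):
T = 7.0
a  = lambda t: 0.4 * np.cos(2*np.pi*t/T)
a1 = lambda t: 0.8 + 0.1 * np.cos(2*np.pi*t/T)
M = monodromy(T, a, a1)
perp, tang = [2, 5], [0, 1, 3, 4]
print(np.abs(M[np.ix_(perp, tang)]).max(),
      np.abs(M[np.ix_(tang, perp)]).max())   # both ~0: 2+4 block structure
```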
Unlike with the NLNMs, no further simplifications can be made at this point and numerical evaluation is the only way forward. As before, the fluctuation equations (5.47)-(5.50) retain the form of a Schrodinger equation, albeit with a two-component 'wavefunction' unlike the previous two instances. Numerics once again reveal the presence of bands: there exist energy bands displaying regular behaviour, with chaos ensuing outside these bands. However, it turns out that the bands are finite in number, as opposed to the cases of \(A_{4}\) and \(A_{3}\). Specifically, we find that \(\Pi_{1}\) orbits are always unstable for subcritical energies and undergo just _four_ stability flips, with two 'bands' of stability from \(2\lesssim E\lesssim 3\) and \(5\lesssim E\lesssim 10\). Note that since \(\operatorname{Tr}\mathcal{U}\) does not directly correlate with stability as it did for \(A_{3/4}\), stability can only be ascertained by looking at the full spectrum of the monodromy matrix. The requisite numerics are not very illuminating, so we do not present the full calculations here. Graphical evidence for these flips (demonstrated in Figures 16 and 17) comes from the Lyapunov exponent plots, which we will display in section 6.2.

### \(\Pi_{2}\) Orbits

\(\Pi_{2}\) orbits are investigated using the same methodology as \(\Pi_{1}\) orbits. We shall not go over our procedures again, and will simply state the results of our numerics. We find that subcritical \(\Pi_{2}\) orbits are always stable under generic fluctuations. Supercritically, we find two stable but small bands, the first near \(E\sim 3\) and the second near \(E\sim 4\). These results will be corroborated by plots of Lyapunov exponents in section 6.2.

Figure 16: Unstable to stable flip for \(\Pi_{1}\) orbits

Figure 17: Stable to unstable flip for \(\Pi_{1}\) orbits

## 6 Progression to Chaos

In the previous section, we analyzed the stability of several sets of orbits by drawing information from their monodromy matrices. Here, we will pursue another traditional tool to study chaos, utilising the standard techniques of Poincare sections and Lyapunov exponents. In so doing, we will come across numerous novel and peculiar features which, using our prior monodromy analysis, will correlate beautifully with the periodic orbits and ultimately the symmetries of the spin-0 sector.

### Poincare Sections

As defined in [28], a Poincare section for an \(N\)-dimensional Hamiltonian system is a \(2N-2\) dimensional slice through a \(2N-1\) dimensional constant energy hypersurface. Poincare sections are thus most effective for Hamiltonian systems with a four-dimensional phase space, and are in general not useful for higher-dimensional systems. However, we find that a simple variation of the usual construction can serve as an excellent visual aid. Specifically, we locate points on a given trajectory where a particular coordinate/momentum is zero. We then project the collection of such points onto a hyperplane spanned by three of the five remaining coordinates/momenta. With this construct (which we continue to refer to as a Poincare section), the usual rules for distinguishing regular trajectories from chaotic ones no longer hold. In particular, regular orbits could yield (our version of) Poincare sections that are a collection of randomly scattered points. This is not a matter of concern for us since our current aim is to study only chaotic trajectories, having carried out an extensive study of regular solutions earlier.
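This modified section is easy to automate with an ODE integrator's event detection. The sketch below uses a potential reverse-engineered to be consistent with the fluctuation equations (5.44)-(5.46) as a stand-in for the Hamiltonian (2.27), which is not restated in this section; the initial conditions are arbitrary illustrative values, not points on any of the named orbits.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stand-in potential consistent with (5.44)-(5.46); an assumption, used
# here only in place of the Hamiltonian (2.27).
def grad_V(a):
    a1, a2, a3 = a
    return np.array([a1 - 3*a2*a3 + a1*(a2**2 + a3**2),
                     a2 - 3*a3*a1 + a2*(a3**2 + a1**2),
                     a3 - 3*a1*a2 + a3*(a1**2 + a2**2)])

def rhs(t, y):
    a, p = y[:3], y[3:]
    return np.concatenate([p, -grad_V(a)])

def section_points(y0, t_max=2000.0):
    """Record (a1, a2, a3) whenever p_{a_1} crosses zero upwards."""
    event = lambda t, y: y[3]              # p_{a_1} = 0
    event.direction = 1.0
    sol = solve_ivp(rhs, (0.0, t_max), y0, events=event,
                    rtol=1e-9, atol=1e-11)
    return sol.y_events[0][:, :3]          # projection onto the a_i hyperplane

pts = section_points([0.1, 0.2, 0.3, 0.4, 0.0, -0.2])
print(pts.shape)                            # scatter these to draw the section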
We will work exclusively with trajectories that monodromy computations certify as unstable. We consider small fluctuations about periodic orbits and construct Poincare sections at various energies by projecting points on these trajectories having \(p_{a_{1}}=0\) onto the hyperplane spanned by the \(a_{i}\)'s. Since the \(A_{4}\) and \(\Pi_{1}\) orbits are always unstable at subcritical energies, we expect the Poincare sections to be sets of randomly scattered points. While we do find that the sections are indeed scattered and locally random, there are large scale patterns. These patterns depend solely on the 'parent orbit'. Poincare sections for subcritical \(A_{4}\) and \(\Pi_{1}\) orbits are shown in Figure 18. We thus conclude that we have a set of co-existing chaotic basins, one for each family of unstable orbits!

Figure 18: Poincaré sections at subcritical energies

To illustrate a second peculiar feature of the dynamics, we recall that in addition to the chaotic dynamics of the full spin-0 sector, unstable trajectories confined to RDS phase spaces may well display chaotic dynamics of their own. This leads to chaotic basins embedded in a four-dimensional subset nested within the full six-dimensional chaotic dynamics! This extraordinary feature of the dynamics is the combined result of the large dimensionality and the tetrahedral symmetry. This 'nested' chaos, as part of a genuine four-dimensional system, can be analyzed using Poincare sections in the usual sense. We thus generate Poincare sections for chaotic trajectories of the RDS, both to study the nested chaos and to look for similarities to the full six-dimensional dynamics. In particular, since most of the interesting orbits of the full spin-0 sector have RDS analogs, we would naturally expect a similar substructure of multiple chaotic basins, one for each class of orbit. This substructure is indeed replicated in the RDSs, as evidenced by Figure 20. Next, we see that the supercritical regime appears to comprise just a single chaotic basin (Figure 19). The mechanism responsible for separating chaotic subsectors in the subcritical regions is no longer operative, so that fluctuations about unstable periodic orbits rapidly grow and eventually cover the entire available phase space, losing memory of their initial conditions. Analogous results hold for the supercritical regimes of the RDSs, as is seen from the Poincare sections of Figure 21. We now turn to Lyapunov exponents, which will provide additional confirmation for our already established results while also motivating the 'thermodynamic' perspective we will encounter in section 7.

Figure 19: Poincaré sections at supercritical energies

Figure 20: Poincaré sections at subcritical energies

Figure 21: Poincaré sections at supercritical energies (\(E=1\))

### Lyapunov Exponents

Recall that the maximal Lyapunov exponent (LE) at a phase point \(P\) is defined as \[\lambda_{P}\equiv\lim_{T\to\infty}\lim_{||\delta x(0)||\to 0}\frac{1}{T}\log\left(\frac{||\delta x(T)||}{||\delta x(0)||}\right), \tag{6.1}\] where \(\delta x\) is a small fluctuation about a given trajectory \(x(t)\) starting at \(P\). We compute the LEs for the \(A_{4}\) and \(\Pi_{1}\) basins separately by considering arbitrary fluctuations about these orbits. The results for the subcritical zone are shown in Figure 22. Consistent with our interpretation as co-existing chaotic basins, we see that the exponents of the \(\Pi_{1}\) and \(A_{4}\) orbits differ from one another.
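In practice, (6.1) is typically evaluated with the classic Benettin renormalization procedure; the paper does not specify its exact numerical scheme, so the following is a generic sketch rather than the authors' implementation. The demo system is a harmonic oscillator (exponent tending to zero); for the matrix model one would pass the `rhs` of the previous sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

def max_lyapunov(rhs, y0, dt=1.0, n_steps=2000, d0=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent (6.1):
    evolve a fiducial and a perturbed trajectory, renormalizing their
    separation back to d0 after every interval dt."""
    y, yp = np.array(y0, float), np.array(y0, float)
    yp[0] += d0
    log_sum = 0.0
    for _ in range(n_steps):
        y  = solve_ivp(rhs, (0.0, dt), y,  rtol=1e-10, atol=1e-12).y[:, -1]
        yp = solve_ivp(rhs, (0.0, dt), yp, rtol=1e-10, atol=1e-12).y[:, -1]
        d = np.linalg.norm(yp - y)
        log_sum += np.log(d / d0)
        yp = y + (yp - y) * (d0 / d)       # rescale the separation vector
    return log_sum / (n_steps * dt)

# Sanity check on a regular system: the exponent should approach zero.
print(max_lyapunov(lambda t, y: [y[1], -y[0]], [1.0, 0.0], n_steps=200))
```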
The stability of all other periodic orbits at nearly all subcritical energies means that we may confine our analysis to these two families of orbits. Since the \(\Pi_{2}\) and \(A_{3}\) orbits can destabilize for \(E>E_{c}\), the supercritical analysis must include these orbits as well. The LEs for the basins corresponding to each of these orbits have been shown in Figures 23-26. There are three features of interest here.

1. The exponents for each basin frequently alternate between regions of steady concave growth and regions where the exponent is identically zero. A comparison with the monodromy matrix computations shows that the regions of zero exponents precisely correspond to the stable bands of the relevant periodic orbits.

2. The non-zero portions of each of the four curves fit nicely onto one another. Chaotic trajectories are thus characterized by a _single_ Lyapunov exponent at large enough supercritical energies. Barring stability-instability transitions of the periodic orbits, this agrees with our earlier assertion of a _single_ chaotic basin at sufficiently high supercritical energies.

3. The non-zero sectors of the exponent plots are neatly captured by an \(E^{\frac{1}{4}}\) fit. The exponents thus steadily develop an algebraic dependence on \(E\), at least to leading order. The same scaling has been observed in [29].

Figure 23: LEs and Fits for \(A_{4}\) orbits

Figure 24: LEs and Fits for \(A_{3}\) orbits

Figure 25: LEs and Fits for \(\Pi_{1}\) orbits

## 7 On Thermalization in the Matrix Model

For chaotic systems that are also ergodic, the Birkhoff-Khinchin theorem holds: for almost any dynamical observable, the time-average is equal to the ensemble average. Berdichevsky has suggested that for such ergodic systems, the laws of equilibrium statistical mechanics may be adapted, making the systems amenable to thermodynamic discussion. The Hamiltonian (2.27) describes a small system: the phase space is only six-dimensional. Nevertheless, as we have demonstrated in the previous sections, the system becomes chaotic as the energy \(E\) (or more accurately, \(g^{2}E\)) increases. Thermodynamics of small systems is a subject of active research [19], the starting point of this discussion being the formula for entropy first given by Gibbs [30] (see also [31]) for a microcanonical ensemble. If \(\Gamma(E)\) is the volume of the region \(H\leq E\), then the Gibbs entropy \(S_{\Gamma}\) is \[S_{\Gamma}=\ln\Gamma+\text{const.} \tag{7.1}\] The two other definitions of entropy \[S=\ln\frac{\partial\Gamma}{\partial E}\delta E+\text{const.} \tag{7.2}\] \[S=\ln\frac{\partial\Gamma}{\partial E}+\text{const.} \tag{7.3}\] agree with (7.1) in the limit when the number of degrees of freedom \(N\) becomes large [32]. However, only (7.1) satisfies the equipartition theorem for small systems [19]. Given the expression for entropy, one can define a temperature \(T\) as \[\frac{1}{T}=\frac{\partial S}{\partial E}. \tag{7.4}\] For _ergodic_ systems, there exists another definition of temperature coming from the equipartition theorem: \[T=\left\langle p_{i}\frac{\partial H}{\partial p_{i}}\right\rangle=\left\langle p_{i}^{2}\right\rangle,\ i=1,2,\cdots N. \tag{7.5}\] Here \(\left\langle\cdot\right\rangle\) denotes the temporal average over a time interval \(\tau\) in the limit \(\tau\rightarrow\infty\).

Figure 26: LEs for \(\Pi_{2}\) orbits

In this section, we will explore in some detail issues related to thermalization in our model, and its relation to earlier discussions of stability and ergodicity.
As in the earlier discussion, we will use group theory to guide us in this exploration.

### Equipartition Theorem and Ergodicity

We can use the equipartition theorem and ergodicity to decide if our system has thermalized. Our procedure is as follows:

1. Choose an energy \(E\) and compute the phase space volume \(\Gamma(E)\), the volume of the region \(H\leq E\).
2. Use (7.1) to compute the Gibbs entropy \(S_{\Gamma}\), and the Gibbs temperature \(T_{\Gamma}=\left(\frac{\partial S_{\Gamma}}{\partial E}\right)^{-1}\).
3. Generate 10 random sets of initial conditions corresponding to the energy \(E\).
4. Calculate the temporal averages \(\langle p_{1}^{2}\rangle\), \(\langle p_{2}^{2}\rangle\), \(\langle p_{3}^{2}\rangle\) and \(\langle p_{i}p_{i}\rangle/3\) for each of these initial conditions, and their means and standard deviations.
5. Compare \(T_{\Gamma}\) with the above temporal averages (see Figure 27).

The figures do not include the error bars because they are negligible compared to the mean values. Also, excluding the error bars provides clarity. The extent of agreement between \(\langle p_{1}^{2}\rangle\), \(\langle p_{2}^{2}\rangle\), \(\langle p_{3}^{2}\rangle\) and \(T_{\Gamma}\) tells us the extent of 'thermalization' in the system. As Figure 27 shows, this agreement is excellent. It is surprising to see that thermodynamic ideas like temperature and equipartition come together as an equality even in a system as small as ours.

### Ergodicity breaking

The analysis of chaos presented in the previous sections was quite nuanced because we were able to study it using group-theoretic and geometric arguments. Now we analyze the same from a thermodynamical and statistical point of view and find their imprints here as well.

Figure 27: Temperatures vs energy for random initial conditions

The procedure we use to study ergodicity of various orbits is similar to that described in the previous subsection, the only difference being that in step 3, we generate random initial conditions _belonging to a specific orbit_. Before looking at the results, it is worth mentioning that if an orbit is stable, it is not ergodic. If it is unstable, it may or may not be ergodic. For \(A_{4}\) and \(A_{3}\) orbits, we have knowledge about stability from our previous monodromy matrix considerations, and this information had better agree with the thermodynamic considerations that follow. Remarkably, we find that they do. There is a similar connection between sensitivity to initial conditions and ergodicity. If the Lyapunov exponent corresponding to some orbit is zero (within limits of numerical accuracy), we 'almost' always expect it to be non-ergodic. We say 'almost' because an orbit may have a negligible Lyapunov exponent and still be ergodic.

#### 7.2.1 Ergodicity of \(A_{4}\) orbits

\(A_{4}\) orbits, by our monodromy matrix results, are unstable up to energy \(E=\frac{3}{2}\), after which there is an alternation of stable bands and band gaps, with their lengths increasing with energy. The two plots in Fig. 28 clearly agree with these results: there is ergodicity till \(E=\frac{3}{2}\), after which ergodicity is broken because the orbit is stable. The next stable (non-ergodic) region appears for energies \(4.6\lesssim E\lesssim 5\). This is also in agreement with curves of Lyapunov exponents vs energy: ergodicity is broken whenever the Lyapunov exponent vanishes (Figure 23).
#### 7.2.2 Ergodicity of \(A_{3}\) orbits

\(A_{3}\) orbits exhibit a self-similar structure where the bands keep getting narrower as one approaches the critical energy \(E_{c}\). This structure is apparent in the ergodicity plots below as well: ergodicity is absent (i.e. the orbit is stable) till \(E\simeq 0.07\). But note that despite the presence of an unstable band near \(E\simeq 0.075\), ergodicity is still broken. Stability implies ergodicity breaking, but instability does not necessarily imply ergodicity restoration. Further bands are also visible and in agreement with monodromy matrix results. Again, we mention that ergodicity is broken whenever the Lyapunov exponent vanishes, so these plots agree with Lyapunov exponent considerations as well.

Figure 28: Temperatures vs energy for \(A_{4}\) orbits

Figure 29: Temperatures vs energy for \(A_{3}\) orbits

#### 7.2.3 Ergodicity of \(A_{2}\) orbits

For \(A_{2}\) and the rest of the orbits remaining, we do not have the monodromy matrix tool at our disposal, so it is not possible to study the correlation between stability and ergodicity. We can, however, study ergodicity and its breaking. Our plots in Fig. 30 extend to regions of energy high enough so that the periodic orbits do not exist at all. This is possible to do because, despite the orbits losing periodicity, we still have initial conditions from Section 4.1.3. We do this in order to investigate the eventual fate of the periodic orbits. The plots in Fig. 30 do not seem to possess a neat band structure as in the case with \(A_{4}\) and \(A_{3}\). However, we have the following conclusions for \(A_{2}\):

1. Ergodicity is clearly broken for small energies \(0.001\lesssim E\lesssim 0.01\).
2. Ergodicity is restored afterwards except for an energy region in the range \(0.05\lesssim E\lesssim 0.08\). Ergodicity breaking is clearly visible.
3. Ergodicity is restored for energies \(E\gtrsim 0.1\).

Figure 30: Temperatures vs energy for \(A_{2}\) orbits

#### 7.2.4 Ergodicity of \(B_{4}\) orbits

Fig. 31 shows that \(B_{4}\) orbits are non-ergodic till around energy \(E\simeq 0.16\), above which they are ergodic.

Figure 31: Temperatures vs energy for \(B_{4}\) orbits

#### 7.2.5 Ergodicity of \(B_{3}\) orbits

\(B_{3}\) orbits are ergodic in the energy range considered (\(0\lesssim E\lesssim 5\)).

Figure 32: Temperatures vs energy for \(B_{3}\) orbits

#### 7.2.6 Ergodicity of non-critical NLNMs

Non-critical NLNMs (Non-Linear Normal Modes) remain non-ergodic till energy \(E\simeq 0.11\) and become ergodic above this energy (Fig. 33).

Figure 33: Temperatures vs energy for non-critical orbits

#### 7.2.7 Ergodicity of \(\Pi_{1}\) orbits

\(\Pi_{1}\) orbits are important because they complement the chaotic basin formed by \(A_{4}\) orbits at energies \(E\lesssim E_{c}\). Figure 34 shows that at these energies, they do indeed have the same temperature as the corresponding \(A_{4}\) orbits. However, ergodicity is broken below energy \(E\simeq 0.06\), despite \(\Pi_{1}\) orbits being unstable for all subcritical energies, as shown by the monodromy matrix plot (not presented here) as well as Lyapunov exponent considerations. Again, we see that stability implies ergodicity breaking but instability does not necessarily imply ergodicity restoration.

#### 7.2.8 Ergodicity of \(\Pi_{2}\) orbits

\(\Pi_{2}\) orbits are found to be non-ergodic for all subcritical energies and ergodic for all supercritical energies considered.
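Comparisons of the kind made throughout this subsection can be reproduced schematically: estimate \(\Gamma(E)\) by Monte Carlo over a bounding box, obtain \(T_{\Gamma}=\Gamma/\Gamma'\) from (7.1) and (7.4), and compare with the time averages (7.5). The sketch below reuses the stand-in Hamiltonian assumed earlier; box size, sample counts and tolerances are illustrative, and low energies would need better sampling than this naive rejection scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp

def H(y):
    a, p = y[..., :3], y[..., 3:]
    V = (0.5*np.sum(a**2, -1) - 3*np.prod(a, -1)
         + 0.5*(a[..., 0]**2*a[..., 1]**2 + a[..., 1]**2*a[..., 2]**2
                + a[..., 2]**2*a[..., 0]**2))
    return 0.5*np.sum(p**2, -1) + V

def gibbs_temperature(E, n=2_000_000, L=4.0, dE=0.02,
                      rng=np.random.default_rng(0)):
    """T_Gamma = Gamma(E) / Gamma'(E), with Gamma estimated by rejection."""
    h = H(rng.uniform(-L, L, size=(n, 6)))
    gamma = np.mean(h <= E)                       # proportional to Gamma(E)
    dgamma = np.mean((h > E - dE) & (h <= E + dE)) / (2*dE)
    return gamma / dgamma

def equipartition_temperatures(rhs, y0, t_max=2000.0):
    """Uniform-in-time averages <p_i^2> along one trajectory, eq. (7.5)."""
    ts = np.linspace(0.0, t_max, 200001)
    sol = solve_ivp(rhs, (0.0, t_max), y0, t_eval=ts, rtol=1e-9, atol=1e-11)
    return (sol.y[3:, :]**2).mean(axis=1)

# Ergodicity check at a given E: the three averages should match T_Gamma.
```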
Figure 34: Temperatures vs energy for \(\Pi_{1}\) orbits

Figure 35: Temperatures vs energy for \(\Pi_{2}\) orbits

### Other Ergodic Averages

The equipartition theorem is, more generally, \[\left\langle x_{i}\frac{\partial H}{\partial x_{j}}\right\rangle=T\delta_{ij},\ i,j=1,2,...,2N \tag{7.6}\] where \(x_{i}\) are _any_ phase space coordinates. We can compute these averages, in addition to the \(p_{i}^{2}\), to confirm ergodicity. Computations show that the quantities \(\left\langle a_{i}\frac{\partial H}{\partial a_{j}}\right\rangle\) and \(\left\langle p_{a_{i}}\frac{\partial H}{\partial p_{a_{j}}}\right\rangle\) agree exactly in the ergodic regime. For example, the following plot depicts \(\left\langle p_{a_{1}}p_{a_{2}}\right\rangle\) and \(\left\langle a_{1}\frac{\partial H}{\partial a_{1}}\right\rangle\) for \(A_{3}\) orbits, for \(0.25\leq E\leq 0.5\). In the region \(0.325\lesssim E\lesssim 0.45\) the system can clearly be seen to be ergodic and, remarkably, this agrees with Fig. 29c. Outside this region, ergodicity is broken. Similar plots of time averages for other orbits and energies also confirm the expected connection between ergodicity and the equipartition theorem. It is striking that the system obeys the general version of the equipartition theorem (Eq. 7.6).

## 8 Classical Phases of the Matrix Model

We have investigated the dynamical behaviour of a large number of subsectors of our model in different regimes using a variety of techniques. We note that the unusual diversity in dynamics - subsectors, nested dynamics and ergodicity breaking - is highly reminiscent of an underlying phase structure and associated phase transitions. In fact, ordered and chaotic regimes have indeed been identified as distinct classical phases, particularly in the context of matrix models [3], [29]. Additionally, the exotic dynamics uncovered here hints at an uncommonly rich phase structure. There is an even more suggestive reason to believe that a phase study is the way to go, which we shall outline later on. In this section, we will just press forward with this viewpoint and outline the phase structure of the matrix model. Phases are usually identified by regions in an appropriate phase diagram, labelled by a set of independent variables. Taking the quintessential example of the ice-water-steam phase diagram, pressure, volume and temperature serve as the distinguishing parameters. The most obvious parameter that we could utilise for the matrix model is, of course, the energy. Although slightly unusual in a more 'physical' sense (where temperature is the natural choice), energy is a natural variable to use in the more abstract context of non-linear systems. Alternatively, our stand simply reflects the 'microcanonical' nature of our setup, as outlined in section 7. Generally, energy is sufficient to capture the phase structure, with low energies yielding regular behaviour and chaos taking over later on. As we have seen, however, the matrix model may display several distinct types of dynamics even at a given energy. An exact characterization using just the energy is therefore incomplete. Furthermore, there is no precise list of variables which, together with the energy, _do_ completely characterize the phase structure. Our previous analysis tells us that the symmetries of the Hamiltonian are the key players, but that is about as far as we can go.
Nevertheless, the absence of such a list does not prevent us from _enumerating_ the numerous existing phases, following the generic methodology of identifying ordered and chaotic regimes as distinct classical phases. With this viewpoint, we see that the multiple chaotic subsectors in the subcritical range (and their regular counterparts) have a natural interpretation as _co-existing_ classical phases. The disjoint Poincare sections of Figure 18 neatly illustrate the 'chaotic \(A_{4}\) phase' and the 'chaotic \(\Pi_{1}\) phase' coexisting with the '\(\Pi_{2}\) phase' and the '\(A_{3}\) phase' (not shown in the figure). The phenomenon of co-existing phases is a fairly well-known one, with water-steam-ice [34] serving as a well-documented example. As phases are typically distinguished by differing expectations of certain interesting observables, it is natural to list out such observables for our model as well. Since the Poincare sections of the \(A_{4}\) sector are more concentrated near the edges of the allowed configuration space, while those of the \(\Pi_{1}\) sector group near the centre, it is reasonable to expect that the squares of the \(a_{i}\)'s (the second moments, so to speak) serve as distinguishing observables. Indeed, computing the time averages of these observables for \(\Pi_{1}\)-based trajectories and \(A_{4}\)-based trajectories of equal energy yields noticeably different results. A more sophisticated distinguishing observable is, of course, the Lyapunov exponent. The computations of section 6.2 indeed corroborate this view, with the exponents of the \(A_{4}\) sector being marginally lower than their \(\Pi_{1}\) counterparts. Additionally, as seen from the monodromy plots (see Figure 12), the \(A_{3}\) orbits describe chaotic bands of their own at suitable subcritical energies, implying that we can have three chaotic phases intermixing with one another at certain \(E<E_{c}\). Next, translating the phenomenon of nested chaos to our phase-centred viewpoint implies the existence of yet another collection of phases, this time dimensionally distinct from our earlier sets. Since the RDSs inherit nearly all of the peculiarities of the full dynamics, the structure of this lower dimensional collection of phases is just as intricate as the full 6D phase structure. Indeed, one can draw correspondences between the RDS phases and those of the full model. The notion of 'lower-dimensional' phases in a physical system is rather unusual, though not unheard of, with edge states in topological physics serving as a good example. It is therefore interesting to see such themes emerge naturally in the context of a gauge matrix model. Much like the subcritical regime, nested phases are also a feature of the supercritical regime, with supercritical nested phases appropriately inheriting the phase structure of their parent 6D phases. It is interesting to note that the notion of symmetry breaking persists in this model despite the symmetries of the RDS only encompassing a small subgroup of the full tetrahedral group. Fascinating as this game of coexistence and mergers is, it involves only the chaotic phases of the model. The transitions between ordered and chaotic phases are no less interesting. We have already encountered numerous signatures of these transitions, via monodromy plots, Lyapunov exponents and thermodynamics. These analyses neatly corroborate one another and clearly indicate alternations between ordered and chaotic regimes, and thus, between ordered and chaotic _phases_.
Specifically, the phase structure involves an _alternation_ between individual regular phases and the 'global' chaotic phase. These alternations happen at energies that are specific to the parent orbit in question. As regards the (breaking of the) symmetries of the system, we thus see that the symmetry of each parent orbit is after all _not_ completely lost at high energies, but is retained solely by the _ordered_ phases, insofar as they exist at high energies. As we have seen, symmetries bifurcate the dynamics into a host of basins, one each for the \(A_{3},A_{4},\Pi_{1}\) and the \(\Pi_{2}\) orbits. The transitions for the first two of this set continue ad infinitum, implying that these symmetry classes persist at arbitrarily high energies. In contrast, the \(\Pi_{1}\) and \(\Pi_{2}\) orbits cease to alternate in stability at high enough energies, so that any memory of these symmetry classes is erased at suitably high energies. As before, the nested dynamics presents the same systematics, despite its reduced symmetries. Curiously enough, we will see later that this notion of finite versus infinite alternations has some ties to the quantum dynamics of the model.

## 9 Quantum Connections

While the previous sections have firmly established the \(SU(2)\) QCD matrix model as a classical non-linear system of great interest, its primary usage as a tool is in a quantum setting. From a pure gauge theory point of view, what then do we learn about the quantum theory from perusing its classical aspects? Given that we know of certain features of the quantum theory [1], it is thus worth investigating how the 'memory' of these quantum features is retained in the classical limit. On the flip side, one might also be interested in using the above classical analysis to search for more elusive quantum features. Some quantum aspects of the \(SU(2)\) matrix model coupled to massless quarks have already been studied in the 'Born-Oppenheimer' limit of the theory: in this limit, the quarks are the fast degrees of freedom, and the gauge field the slow mode. The quarks are quantized in the background of the classical gauge field; they produce an emergent Berry connection (a vector potential) as well as a scalar potential on the gauge configuration space. The gauge field is then quantized taking these additional emergent potentials into account. Inclusion of the quarks leads to an unexpected benefit even for investigations of the pure gauge theory: it provides for a much more refined understanding of the gauge configuration space. One can show that in terms of \[x={\rm Tr}(M^{T}M),\quad y=\det M,\quad z=\frac{1}{16}\Big{(}2{\rm Tr}(M^{T}MM^{T}M)-[{\rm Tr}(M^{T}M)]^{2}\Big{)} \tag{9.1}\] the function \(F(M)=F(x,y,z)\) obeys the inequality \[F(M)=\frac{1}{2}\left(2x^{4}z+x^{3}y^{2}-64x^{2}z^{2}-144xy^{2}z-54y^{4}+512z^{3}\right)\geq 0. \tag{9.2}\] With \[{\bf g}_{3}\equiv\frac{\det M}{\big{(}\frac{1}{3}{\rm Tr}(M^{T}M)\big{)}^{3/2}},\qquad{\bf g}_{4}\equiv\frac{1}{16}\left[\frac{2{\rm Tr}(M^{T}M)^{2}}{\big{(}\frac{1}{3}{\rm Tr}(M^{T}M)\big{)}^{2}}-9\right], \tag{9.3}\] the condition \(F\geq 0\) becomes \[\Delta=\frac{1}{2}\Big{(}27{\bf g}_{3}^{2}-54{\bf g}_{3}^{4}+162{\bf g}_{4}-432{\bf g}_{3}^{2}{\bf g}_{4}-576{\bf g}_{4}^{2}+512{\bf g}_{4}^{3}\Big{)}\geq 0. \tag{9.4}\] In other words, \(F\geq 0\) (or equivalently \(\Delta\geq 0\)) gives us the set of all gauge-invariant spin-zero gauge field configurations.
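These definitions are straightforward to implement; the sketch below evaluates \(g_{3}\), \(g_{4}\) and \(\Delta\) from (9.3) and (9.4), specialized to a diagonal configuration \(M=\mathrm{diag}(a_{1},a_{2},a_{3})\), and spot-checks that the symmetric configuration sits on a tip of the arrowhead (\(\Delta=0\)), a 'two equal \(a\)'s' configuration sits on an edge (\(\Delta=0\)), and a generic configuration lies in the interior (\(\Delta>0\)).

```python
import numpy as np

def g3(a):
    a = np.asarray(a, float)
    return np.prod(a) / (np.sum(a**2) / 3.0)**1.5

def g4(a):
    # (9.3) with Tr((M^T M)^2) = sum a_i^4 for diagonal M
    a = np.asarray(a, float)
    return (2.0*np.sum(a**4) / (np.sum(a**2)/3.0)**2 - 9.0) / 16.0

def Delta(a):
    G3, G4 = g3(a), g4(a)
    return 0.5*(27*G3**2 - 54*G3**4 + 162*G4 - 432*G3**2*G4
                - 576*G4**2 + 512*G4**3)

print(Delta([1.0, 1.0, 1.0]))   # ~0: tip of the arrowhead
print(Delta([1.0, 1.0, 0.3]))   # ~0: two equal a's land on an edge
print(Delta([0.9, 0.5, 0.2]))   # > 0: generic interior (bulk) point
```

Scattering \((g_{3},g_{4})\) along a numerically integrated trajectory is then all that is needed to produce plots of the kind discussed below.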
This parametrization of the gauge configuration space explicitly brings out the fact that it has corners \((A,B\) and \(C)\) and edges \((AB,BC\) and \(AC)\). We can plot the region bounded by the above inequality: the \({\bf g}_{3}-{\bf g}_{4}\) plot is an 'arrowhead' shaped region consisting of configuration space points satisfying \(\Delta\geq 0\). In terms of the coordinates \(a_{1},a_{2},a_{3}\), the functions \(g_{3}\) and \(g_{4}\) take a rather simple form \[g_{3}(a_{1},a_{2},a_{3})\equiv\frac{a_{1}a_{2}a_{3}}{\left(\frac{a_{1}^{2}+a_{2}^{2}+a_{3}^{2}}{3}\right)^{\frac{3}{2}}} \tag{9.5}\] and \[g_{4}(a_{1},a_{2},a_{3})\equiv-\frac{9(a_{1}-a_{2}+a_{3})(a_{2}-a_{3}+a_{1})(a_{3}-a_{1}+a_{2})(a_{1}+a_{2}+a_{3})}{16(a_{1}^{2}+a_{2}^{2}+a_{3}^{2})^{2}}, \tag{9.6}\] the overall sign being fixed by consistency with (9.3) and (9.4). It was argued in [2] that quarks 'condense' at these corners and edges, leading to quantum phases. These phases, obtained via superselection sectors, can be distinguished using the two scale-invariant configuration space functions \(g_{3}\) and \(g_{4}\) defined above. The figures below provide a graphical depiction of the quantum phases. The quantum phases are distinguished by their relative positions on the \(g_{3}-g_{4}\) plot, with the interior of the arrowhead depicting a 'bulk phase' while the sides of the arrowhead model 'edge' phases. The three tips of the arrowhead also represent distinct phases, with the phases corresponding to the two lower tips of the arrowhead related to one another by a parity transform. While the \(g_{3}-g_{4}\) plot and the relevant machinery were developed in a purely quantum setting, they turn out to be very useful for discussing aspects of classical dynamics as well.

Figure 37: (Scaled) Configuration space of \(SU(2)\) gauge matrix model

Specifically, we may associate each classical trajectory with a trajectory traversing the boundary and interior of the arrowhead. Identifications between classical and quantum phases can then be made by comparing _classically_ generated \(g_{3}-g_{4}\) plots with the pictorial hierarchy of quantum phases mentioned in the above paragraph. For instance, a general chaotic trajectory unsurprisingly covers the bulk of the \(g_{3}-g_{4}\) plot and thus is evidently in loose correspondence with the 'bulk' quantum phase. On the other hand, the "2 equal \(a\)'s" trajectories that make up the 4-dimensional RDS are, from the definition of the \(\Delta\) function, confined to lie on the edges of the arrowhead and thus are in loose correspondence with the edge phases of the model. That the correspondence is not exact is obvious as, for instance, generic trajectories may have, at some points in time, two equal \(a_{i}\)'s, thereby landing on the edges of the \(g_{3}-g_{4}\) plot rather than in the bulk. Additionally, as we have mentioned, the arrowhead comprises numerous disconnected edge phases in addition to three 'point phases', while generic trajectories in the 4-dimensional RDS span the entire arrowhead, so that they mix the edge phases and cross over the point phases at least partly. The restricted 4D \(\Pi_{1}\) orbits, for instance, cover only the right half (or only the left half, in case of parity reversal) of the \(g_{3}-g_{4}\) plot, although even they encompass four quantum phases. While far from perfect, such correspondences are about as much as we may expect from a preliminary analysis, and they nevertheless carry _some_ semblance of a deeper correspondence, telling us that we are after all on the right track.
There is also a reasonably clear correspondence between the \(A_{3}\) and \(A_{4}\) orbits (or more precisely the phases they map to) and the "point phases" of the quantum model. Indeed, \(g_{3}-g_{4}\) plots of the _exact_ \(A_{3}\) and \(A_{4}\) trajectories are _perfectly_ confined to the top tip (\(A_{4}\)) and lower right/left tips (\(A_{3}\)) of the arrowhead. Chaotic dynamics about these orbits is associated with space-filling \(g_{3}-g_{4}\) plots, while band gaps are only associated with minor spillovers from the tips of the arrowhead. The \(A_{4}\) and the \(A_{3}\) orbits are, at least at first glance, the apparent classical remnants of the quantum point phases. Interestingly enough, these are the only two periodic orbits whose phases underwent an infinite cascade of flips. One may conjecture that this cascade has _something_ to do with the quantum properties of the matrix model.

Figure 38: \(g_{3}-g_{4}\) plots for the \(A_{3}\) orbits

## 10 Conclusions

In this article, we pursued a detailed study of the classical dynamics of the spin-0 sector of an \(SU(2)\) gauge-matrix model. The presence of an unexpected tetrahedral symmetry greatly enriched the resulting dynamics, endowing the system with several distinctive features such as co-existing chaotic basins, ergodicity breaking and nested chaos. The tetrahedral symmetry also allowed us to better adapt standard techniques to bring out the salient features of the model. We utilized a three-pronged approach comprising monodromy analysis, chaos-theoretic studies, and statistical mechanical methods. The last of these motivated a transition from a non-linear dynamical perspective to a thermodynamic one, wherein we identified the regular and chaotic sectors of the model as classical phases. The intricacies of the classical dynamics translated into a rich phase structure consisting of co-existing chaotic phases protected by their respective symmetries at subcritical energies. The underlying protective mechanism seemed to degrade at suitably high supercritical energies, culminating with a merger into a single supercritical chaotic phase. Also observed were quasi-periodic transitions between ordered and chaotic phases and a collection of lower dimensional _nested_ phases. Surprisingly, a selection of classical phases bore tantalizing resemblances to _quantum_ phases stemming from superselection sectors. This correspondence had benefits for both sides. In one direction, the quantum sector naturally yielded refined tools (i.e. the \(g_{3}-g_{4}\) plots) for identifying classical phases. In the other direction, the classical phase structure could potentially give signatures for further investigations of the quantum phase structure of the matrix model. Broadly speaking, the questions we aim to answer going forward fall into three categories, the first of which involves investigating the classical dynamics of the spin-0 sector in even more depth. From a non-linear dynamical standpoint, several features of the dynamics beg for deeper explorations. For one, we are yet to understand the mechanism behind the localization of the co-existing chaotic subsectors for subcritical energies. It is also unclear why this mechanism ceases to work at sufficiently high energies. Relevant thermodynamic problems include a better enumeration of the properties of the classical phases, via appropriately chosen observables, and a detailed study of the transitions between these phases.
In particular, given that ergodicity breaking is a key ingredient for the emergence of the intricate phase structure of the model, it would be interesting to search for connections to color glasses in non-abelian gauge theories [33]. The second class of questions we wish to explore centers around the relations between the classical and quantum phases. Our current understanding of the correlations between the classical phases generated by the \(A_{3/4}\) orbits and their quantum counterparts is rather heuristic. A more rigorous study of their connections, possibly via the Gutzwiller trace formula, is thus called for.

Figure 39: \(g_{3}-g_{4}\) plots for the \(A_{4}\) orbits

Another interesting pathway involves searching for quantum analogs of the phases generated by the remaining NLNMs or the geometric orbits. Lastly, as illuminating as the spin-0 sector is, its study is only the first half of a broader endeavour. After all, a complete study of the classical dynamics of the full matrix model requires including the effects of angular momentum. We plan to add back the rotational degrees of freedom and analyze the resulting dynamics in a future work. A natural follow-up would be to probe the connections between the full classical dynamics and the corresponding quantum analog. Although our present discussion has centred on the \(SU(2)\) matrix model, it seems unlikely that the peculiarities of the dynamics will disappear as we go over to the \(SU(3)\) model. We expect at least some of these features to persist for \(SU(3)\) models, with interesting consequences for real-world QCD.

**Acknowledgements** The work of CB was supported by the PMRF programme. VN acknowledges that a substantial portion of the research was carried out before his affiliation with the Cavendish Laboratory.

## Appendix A Asymptotic Locations of A4 Stability Transition Points
2307.16761
SMT-Solving Induction Proofs of Inequalities
This paper accompanies a new dataset of non-linear real arithmetic problems for the SMT-LIB benchmark collection. The problems come from an automated proof procedure of Gerhold--Kauers, which is well suited for solution by SMT. The problems of this type have not been tackled by SMT-solvers before. We describe the proof technique and give one new such proof to illustrate it. We then describe the dataset and the results of benchmarking. The benchmarks on the new dataset are quite different to the existing ones. The benchmarking also brings forward some interesting debate on the use/inclusion of rational functions and algebraic numbers in the SMT-LIB.
Ali K. Uncu, James H. Davenport, Matthew England
2023-07-31T15:32:16Z
http://arxiv.org/abs/2307.16761v1
# SMT-Solving Induction Proofs of Inequalities

###### Abstract

This paper accompanies a new dataset of non-linear real arithmetic problems for the SMT-LIB benchmark collection. The problems come from an automated proof procedure of Gerhold-Kauers, which is well suited for solution by SMT. The problems of this type have not been tackled by SMT-solvers before. We describe the proof technique and give one new such proof to illustrate it. We then describe the dataset and the results of benchmarking. The benchmarks on the new dataset are quite different to the existing ones. The benchmarking also brings forward some interesting debate on the use/inclusion of rational functions and algebraic numbers in the SMT-LIB.

Keywords: Inequalities, Induction Proofs, Satisfiability Modulo Theories, Computer Algebra, Rational Functions

## 1 Introduction

Satisfiability Modulo Theories (SMT) fuses powerful modern SAT solvers with software from specialised theory domains to tackle satisfiability problems where the logical atoms are statements in that domain. The SMT-LIB [2] defines a common language for SMT-solvers to use and maintains a set of benchmarks organised according to the various theory domains. In many cases, the algorithms for those domains have been traditionally implemented in computer algebra systems (although as described in [1], such algorithms require adaptation before they can be used efficiently within SMT). There is continuing progress in algorithms for such domains, driven in part by the connections built between symbolic computation and the satisfiability checking communities, by the SC-Square project [1] and others. One of the SMT theory domains most closely aligned with symbolic computation, and the domain we consider, is QF_NRA. In this case the solver seeks to answer a question on the existence of real variables \(x_{1},\ldots,x_{k}\) satisfying a logical formula in which each atom is a (potentially non-linear) polynomial constraint. There is a significant number of benchmarks in the SMT-LIB for this domain; however, there are relatively few sources of these examples, and the vast majority comes from a single theorem-proving application. In this paper we describe a new collection of examples which we have contributed, originating from inductive proofs of some inequalities. We seek to (a) broaden the QF_NRA benchmark set to allow for better development of solvers; and (b) encourage further additions from other new application domains by demonstrating how well solvers can do on such problems.

### SMT for QF_NRA

In the QF_NRA domain solvers tackle satisfiability problems whose atoms are of the form \(p\,\sigma\,0\) where \(p:=p(x_{1},x_{2},\ldots,x_{k})\in\mathbb{Q}[x_{1},x_{2},\ldots,x_{k}]\) is a polynomial in variables \(x_{1},\ldots,x_{k}\) with rational coefficients, and \(\sigma\in\{>,<,\geq,\leq,=,\neq\}\). Such problems are usually tackled in the Lazy SMT paradigm where a SAT-solver proposes solutions to the logical structure which are then checked for validity in the theory domain: deciding whether the corresponding set of polynomial constraints can be satisfied together. In other words, we check the following, where \(p_{i}\) and \(\sigma_{i}\) are defined as above: \[\exists x_{1},x_{2},\ldots,x_{k}\;(p_{1}\,\sigma_{1}\,0\wedge p_{2}\,\sigma_{2}\,0\wedge\cdots\wedge p_{n}\,\sigma_{n}\,0). \tag{1}\] Note that this is fully conjunctive and will involve only a subset of the atoms in the original formula.
The answer will either be a set of assignments for the variables \((x_{1},\ldots,x_{k})\) to _witness_ the existence, or a confirmation that this is _unsatisfiable_. The unsatisfiable confirmation means that there is no single point in \(\mathbb{R}^{k}\) that could satisfy the relations. There are many ways of tackling this conjunction. One expensive but well established method is to calculate the Cylindrical Algebraic Decomposition (CAD) [3] of the variable space to be sign-invariant for the polynomials \(p_{i}\), and then check the regions for existence of such a point. CAD was developed for the more general problem of Quantifier Elimination (QE) over the reals. Such a heavy procedure is clearly far more work than required, at least when the problem is satisfiable and we need find only a single point. An adaptation of CAD for this purpose was presented in [4] to allow for early termination and repeated calls. Another approach is to re-purpose the theory of CAD so that it better aligns with the satisfiability methodology of searching for a model and learning from conflict. This is the approach taken in the Cylindrical Algebraic Covering method of [5] and the NLSAT algorithm of [6]. Both build model solutions gradually variable by variable: the former learns by identifying that a model cannot be extended and using CAD projection to rule out an interval around the top dimension of the current model; the latter learns by building an entire CAD cell whose description defines new atoms of the logical formula. In addition to these CAD based methods there is a variety of incomplete methods implemented which will not always give a solution but, when they do, are often far faster, e.g. Virtual Term Substitution [7] or incremental linearization [8]. Although these techniques are all rooted in formal arguments, there is not a clear answer as to which method is best in general or on a particular instance, and so there is a need for careful benchmarking. Also, different problem sets have their own _flavour_, and so benchmarking on them individually is valuable too. This set, arising from mathematical inequalities, is internally diverse. Although the number of variables never exceeds 6, the problems differ in character: rational functions appear naturally in some, while in others the degree of a single variable spikes.

### Problems from inequalities

We will discuss a family of QF_NRA problems new to the SMT-LIB, and to the best of the authors' knowledge, not tackled before using SMT. In 2005, Gerhold and Kauers presented an algorithm that attempts induction proofs with great success [9]. Their original formulation, and later Kauers' ProveInequality function in the Mathematica package SumCracker [10], uses CAD to make these proofs. This method and its implementation have been successfully applied in many works to automatically prove combinatorial and special-function-related inequalities [9, 10, 11, 12, 13, 14, 15, 16]. These applications utilised computer algebra, but the underlying algorithm is actually asking a sequence of satisfiability questions that terminates with a positive answer if it can be shown that a logical structure of the form (1) is unsatisfiable. Later in the paper we will sketch the main ideas behind this procedure by proving the following result. Let \(k\) and \(n\) be positive integers, and let \(x_{1},x_{2},\ldots,x_{n}\in\mathbb{R}^{+}\).
Then if \(x_{1}+x_{2}+\cdots+x_{n}=n\), we have that \[\sum_{i=1}^{n}x_{i}^{k}\,\frac{x_{i}^{4}+x_{i}^{2}+1}{x_{i}^{2}+x_{i}+1}\geq n\prod_{i=1}^{n}x_{i}. \tag{2}\] This problem appeared as a generalization of a Monthly Problem in the American Mathematical Monthly [17]. Until now the inequality (2) only had a human proof. That proof required using an inequality on carrying positive exponents inside a finite sum, followed by an inequality of Chebyshev, and then the inequality of arithmetic and geometric means. However, by following the Gerhold-Kauers method, we will prove a stronger version of this inequality without any prior knowledge and with minimal human interference.

### New dataset

We have put together a dataset of 300 problems in the SMT-LIB language, derived from the application of the Gerhold-Kauers method to examples given in [9, 10, 11, 13] and (2). These have been submitted for inclusion in the 2022 release of the SMT-LIB. Unlike other problem sets in the SMT-LIB, a quarter of these problems have constraints that involve rational functions instead of purely polynomial constraints. This new characteristic also calls into question how best to pre-process such objects. There are at least two ways of clearing any non-constant denominators to get an equivalent expression with polynomial constraints, and then there is the handling of zeros of any denominators. For every problem involving a rational function, we generated two equivalent problems in polynomials, where we handled the denominators in different ways. We will observe different solver behaviour depending on the conversion method used.

### Plan of the paper

The organization of this paper is as follows. In Section 2, we will briefly introduce the Gerhold-Kauers method by using it on the new example. Then in Section 3 we will discuss different ways of clearing denominators before presenting our dataset and some benchmarking results for it in Sections 4 and 5. We finish with some conclusions.

## 2 The Gerhold-Kauers method to use CAD for Induction Proofs

### General idea

A mathematical induction proof - at its core - is a finite set of initial conditions together with the logical structure of the problem implying the correctness of the next step. We require a discrete parameter, say \(n\), for the indexing of the initial conditions. Let our general claim \(\phi\) be a logical formula (or a collection of logical formulae) in \(n\) (without loss of generality \(n\in\mathbb{Z}^{+}\)) and possibly other variables. We would like to prove the correctness of \(\phi\) by complete induction on \(n\). The construction of \(\phi\) is done through a difference field construction: we will gloss over that and invite any interested readers to consult Gerhold and Kauers' original paper [9]. It is not clear from which starting point and with how many (if any) initial conditions one can gather satisfactory knowledge to prove the induction step. Hence one needs to start with the selection of a \(t\) and \(r\) (both being most likely 1) and attempt to show \[\psi\wedge(\phi\wedge s(\phi)\wedge\cdots\wedge s^{r-1}(\phi))\Rightarrow s^{r}(\phi), \tag{3}\] where \(\psi\) is a conjunction of all known assumptions on the parameters, and \(s^{k}(\phi)\) is the \(k\)-th shift (in \(n\)) of the original statement \(\phi\). Let \([\phi]_{k}\) be the explicit evaluation of \(\phi\) at the instance \(n=k\in\mathbb{Z}_{\geq 0}\).
If we can also confirm that each initial condition \([\phi]_{k}\) for \(k=t,\ldots,t+r-1\) holds together with (3), then we get an induction proof for all \(n\geq t\). In their paper [9], Gerhold and Kauers decide to establish (3) by instead showing that \[\psi\wedge(\phi\wedge s(\phi)\wedge\cdots\wedge s^{r-1}(\phi))\wedge\neg s^{r}(\phi) \tag{4}\] is unsatisfiable. Moreover, they do it in an efficient and iterative way by checking the \([\phi]_{n}\)'s at each step and extending \(r\) if (4) is still satisfiable for some selection of variables. A possible variable selection might be far away from the original problem; however, such an instance triggers the algorithm to iterate (pick a larger \(r\)) and repeat the process.

### A New Proof of (2)

As an initial step towards the proof of (2), let us start with the claim that, for \(x_{i}>0,i=1,\ldots,n\), if \(\sum_{i=1}^{n}x_{i}=n\), then \[\prod_{i=1}^{n}\,x_{i}\leq 1. \tag{5}\] The case \(n=1\) is obvious. The difference ring construction would define x in the place of \(x_{n}\). Then we define another three variables \(X\), \(Y\), and \(Z\) and their shifts in \(n\): \(s(X)=X+1\), \(s(Y)=Y+s(x)\), and \(s(Z)=Zs(x)\), where \(s(\cdot)\) is the shift of the variable inside (the next element in the sequence) and \(s(x)\) is kept as a new variable added to the problem. Here \(X\) simulates \(n\), \(Y\) simulates the sum \(\sum_{i=1}^{n}x_{i}\) and \(Z\) simulates the product \(\prod_{i=1}^{n}\,x_{i}\). Assuming \(t=r=1\), the logical statement we are trying to refute is \[\big(x>0\wedge s(x)>0\wedge X=Y\wedge s(X)=s(Y)\big)\wedge(Z\leq 1)\wedge\neg(s(Z)\leq 1) \tag{6}\] \[=\big(x>0\wedge s(x)>0\wedge X=Y\wedge X+1=Y+s(x)\big)\wedge(Z\leq 1)\wedge(Zs(x)>1),\] together with the initial condition check \([Z]_{1}=1\leq 1\). It is very easy to see that the first logical sentence implies \(s(x)=1\), and that together with the last two clauses yields a contradiction. Therefore, having checked the initial conditions \([X]_{1},[Y]_{1}\) and \([Z]_{1}\), the induction step holds and our claim is true for generic \(n\geq 1\). Similarly, we can prove \[\sum_{i=1}^{n}\frac{x_{i}^{4}+x_{i}^{2}+1}{x_{i}^{2}+x_{i}+1}\geq n, \tag{7}\] under the assumptions \(x_{i}>0\) for \(i=1,\ldots,n\) and \(\sum_{i=1}^{n}x_{i}=n\). To simulate the sum on the left-hand side of (7) and its iterates, we introduce a variable \(\hat{Z}\) with \([\hat{Z}]_{1}=x_{1}^{2}-x_{1}+1\) and \(s(\hat{Z})=\hat{Z}+(s(x)^{4}+s(x)^{2}+1)/(s(x)^{2}+s(x)+1)\). For the proof, one can put together the logical formula to be refuted, similar to (6), with \(X\), \(Y\) and the new variable \(\hat{Z}\), and easily show it to be a contradiction. In the same vein, we can prove \[\sum_{i=1}^{n}(x_{i}-1)\frac{x_{i}^{4}+x_{i}^{2}+1}{x_{i}^{2}+x_{i}+1}\geq 0, \tag{8}\] following similar steps as above with the new variable \(\tilde{Z}\), where \(s(\tilde{Z})=\tilde{Z}+(s(x)-1)(s(x)^{4}+s(x)^{2}+1)/(s(x)^{2}+s(x)+1)\). The next step needed to prove (2) is to show \[\sum_{i=1}^{n}x_{i}^{j-1}(x_{i}-1)\frac{x_{i}^{4}+x_{i}^{2}+1}{x_{i}^{2}+x_{i}+1}\geq 0, \tag{9}\] for any \(j\geq 1\). For any fixed positive integer \(j\) this can be done with a logical solver for QF_NRA. In this example the logical formula to evaluate becomes \[x>0\wedge s(x)>0\wedge X=Y\wedge X+1=Y+s(x)\wedge\overline{Z}\geq 0\wedge\overline{Z}+s(x)^{j-1}\big(s(x)^{3}-2s(x)^{2}+2s(x)-1\big)<0,\] where \(\overline{Z}\) simulates the sum on the left-hand side of (9); the last clause is the negation \(s(\overline{Z})<0\), using \((x-1)\frac{x^{4}+x^{2}+1}{x^{2}+x+1}=x^{3}-2x^{2}+2x-1\).
From the logical structure, we can deduce that \(s(x)=1\) and then conclude that \((\overline{Z}\geq 0)\wedge(\overline{Z}<0)\) would yield a contradiction and prove (9). However, this is only possible to achieve on a computer for explicitly chosen positive integers \(j\). Otherwise, since the input would not be a collection of polynomials/rational functions, we could not apply CAD (or other QE methods). This is where we need a human touch to prove (9) for arbitrary \(j\) using (8). The case of all \(x_{i}=1\) is trivially true. Otherwise, since \(\sum_{i=1}^{n}x_{i}=n\), there exists at least one \(a\in\{1,2,\ldots,n\}\) such that \(x_{a}>1\) and at least one \(b\in\{1,2,\ldots,n\}\) such that \(x_{b}<1\). Let \(A\) be the set of all such indices between 1 and \(n\) such that \(x_{a}>1\). Similarly, let \(B\) be the set of all indices of all \(x_{b}\) such that \(0<x_{b}<1\). \(A\) and \(B\) are both finite sets since all the indices are chosen from 1 to \(n\). For non-empty \(A\) and \(B\), notice that \(x_{a}^{j-1}\geq x_{a}>1\) and \(0<x_{b}^{j-1}\leq x_{b}\) for any \(a\in A\) and \(b\in B\). So by multiplying the \(i\)-th summand of (8) with \(x_{i}^{j-1}\), we either keep the summand the same (if \(j=1\)) or increase the contribution of the positive terms if \(i\in A\). Similarly, by multiplying the \(i\)-th summand of (8) with \(x_{i}^{j-1}\), we either keep the summand the same (if \(j=1\)) or shrink the contribution of the negative terms if \(i\in B\). Since (8) is assumed to hold and this modification to the summands increases the positive contribution while decreasing the negative contribution of the summands, the inequality (9) holds for any positive integer \(j\) as well. If we sum (9) over \(j=1,\ldots,k\) we get \[\sum_{i=1}^{n}(x_{i}^{k}-1)\frac{x_{i}^{4}+x_{i}^{2}+1}{x_{i}^{2}+x_{i}+1}\geq 0.\] Adding (7) to this yields \[\sum_{i=1}^{n}x_{i}^{k}\,\frac{x_{i}^{4}+x_{i}^{2}+1}{x_{i}^{2}+x_{i}+1}\geq n, \tag{10}\] under the same assumptions of the original problem (2): \(k,n\in\mathbb{Z}^{+}\), \(x_{i}>0\) and \(x_{1}+\cdots+x_{n}=n\). Finally, using inequality (5) on the right-hand side of (10), we prove (2). One highlight is that we proved these inequalities without any prior knowledge of any mathematical inequalities. Moreover, note that (10) is a sharper inequality than (2).

### Implementation

An implementation of this method has been completed by Kauers: the ProveInequality procedure in the SumCracker Mathematica package [10] does the identification of the variables to be included and their shifts automatically, and ships the statement to be refuted directly to the CAD implementation of Mathematica. The proof of (8), in Mathematica, then turns into a single command: ProveInequality[SUM[(x[k]-1)(1+x[k]^2+(x[k])^4)/(1+x[k]+(x[k])^2), {k,1,n}]>=0, Using->{x[n]>0, SUM[x[k],{k,1,n}]==n}, Free->{x}, Variable->n] which terminates with an answer in milliseconds.

### Suitability for SMT

A key point to stress is that the Gerhold-Kauers method actually generates and answers satisfiability problems, in the form (1), with all the existential quantifiers hidden but there. At each attempt, the Gerhold-Kauers method checks the initial conditions and looks to see if the negated induction step (4) is unsatisfiable. Furthermore, any known information about the pieces of \(\phi\) can be tagged alongside (4) and fed to the CAD machinery to further restrict the search space and get the desired unsatisfiable answer.
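To make this concrete, here is a minimal sketch (using Z3's Python API, `z3-solver` on PyPI; variable names are ours) of shipping the negated induction step (6) to an SMT solver instead of CAD:

```python
from z3 import Reals, Solver, unsat

x, sx, X, Y, Z = Reals('x sx X Y Z')   # sx stands for s(x)
s = Solver()
s.add(x > 0, sx > 0,                   # positivity of x_n and its shift
      X == Y, X + 1 == Y + sx,         # X = Y and s(X) = s(Y)
      Z <= 1,                          # induction hypothesis (5)
      Z * sx > 1)                      # negated shifted claim
print(s.check() == unsat)              # True: the induction step is proved
```

The solver's internal reasoning mirrors the hand argument: the equalities force \(sx=1\), after which \(Z\leq 1\) and \(Z\cdot sx>1\) clash.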
Their implementation simply used the CAD implementation of Mathematica to see in which regions (4) can be satisfied. However, neither where this formula is satisfied, nor the cylindrical structure of the decomposition used to refute satisfiability, is essential to the problem. Thus CAD could safely be replaced with any SMT solver that can tackle QF_NRA, which may benefit from the incremental data structures that such solvers' internal machinery usually possesses. ## 3 Appearance and Handling of Rational Functions In the automated proof sketches of (7) and (8), we already saw the possibility of rational functions arising. The shifts of the variables \(\hat{Z}\) and \(\tilde{Z}\), which were used to simulate the sums and their shifts, introduced rational functions into the induction hypothesis clauses. In the examples above, the rational function could be simplified to a polynomial expression, but this is not true in general. We see that the satisfiability problems coming from this method naturally introduce rational functions. The inclusion and handling of rational functions in satisfiability problems seems to be a somewhat sensitive topic in the SMT community, and there are discussions about whether and how best to allow SMT-LIB to include rational functions in its language. While we leave that discussion for later, we mention some possible pre-processing steps that can remedy the situation in a mathematically consistent way. Assume that we are given a satisfiability problem where one of the clauses includes multivariate rational functions after simplifications. For example \[\frac{P(\mathbf{x})}{Q(\mathbf{x})}\;\sigma\;\frac{F(\mathbf{x})}{G(\mathbf{x})}\] where \(P,Q,F,G\in\mathbb{Q}[\mathbf{x}]\)1, \(\gcd(P,Q)=\gcd(F,G)=1\), and \(\sigma\in\{>,<,\geq,\leq,=,\neq\}\). We can reduce this to a comparison with 0, by subtraction followed by simplification, arriving at a problem of the form Footnote 1: In this discussion, the rational field \(\mathbb{Q}\) can be replaced by the reals \(\mathbb{R}\), but here we restrict ourselves to stay within the limits of the SMT-LIB language. \[\frac{f(\mathbf{x})}{g(\mathbf{x})}:=\frac{P(\mathbf{x})G(\mathbf{x})-F(\mathbf{x})Q(\mathbf{x})}{Q(\mathbf{x})G(\mathbf{x})}\;\sigma\;0, \tag{11}\] with \(\gcd(f(\mathbf{x}),g(\mathbf{x}))=1\). Handling rational functions in a mathematically consistent way is straightforward when the relation is an equation or an inequation. If \(\sigma\) is \(=\) or \(\neq\) we can simplify (11) as \[f(\mathbf{x})\;\sigma\;0\wedge g(\mathbf{x})\neq 0.\] There are two equivalent formulations of (11) in the polynomial language when \(\sigma\) is an inequality. One way is to avoid any sign considerations for the denominator polynomial \(g(\mathbf{x})\) and multiply both sides of the relation (11) by its square. However, the poles of the original rational function should not be forgotten and must be reflected in the outcome. This way the equivalent formulation of (11) is \[f(\mathbf{x})g(\mathbf{x})\;\sigma\;0\wedge g(\mathbf{x})\neq 0. \tag{12}\] The disadvantage of this method is the likely rise in the degrees of the variables. When \(f(\mathbf{x})\) and \(g(\mathbf{x})\) are multiplied together some variables can get out of reach of the degree-dependent QE techniques, such as virtual term substitution [7]. Another possibility is to consider the sign of \(g(\mathbf{x})\) and split the problem into two pieces driven by the guards \(g(\mathbf{x})>0\) and \(g(\mathbf{x})<0\).
The statement we get using this approach is \[(g(\mathbf{x})>0\wedge f(\mathbf{x})\;\sigma\;0)\vee(g(\mathbf{x})<0\wedge 0\;\sigma\;f(\mathbf{x})). \tag{13}\] Although this time the degrees of the variables stay lower, the size of the logical problem has grown. If the satisfiability problem starts with \(n\) clauses including rational functions, this splitting would turn it into a disjunction of \(2^{n}\) statements. We suggest that the handling of rational functions be left to the SMT solvers: if users make this choice themselves, they may inadvertently disadvantage a solver. We elaborate on this later in §5.4. ## 4 Dataset and Benchmarking ### Dataset We went through most examples given in [9, 10, 11, 13] and equations (5), (7), and (8) (the parts of the proof of (2) which can be proven automatically) to describe them as non-linear arithmetic satisfiability problems in the SMT-LIB language, creating a dataset of 300 new SMT-LIB benchmarks. This was done by translating the original CAD calls of the ProveInequality procedure to SMT-LIB using the smtlib package in Maple [18]. This package already identifies the existence of a rational function in a clause and adds the denominator-is-nonzero clause to the problem. When problems involved a rational function in these calls, we also created two additional equivalent formulations of the problem, by clearing out the denominators in the basic way (12) and in the disjunctive way (13) as demonstrated above. The original examples containing only polynomials, together with these two derived polynomial formulations, were submitted in the call for new benchmarks for the 2022 SMT Competition2. Footnote 2: [https://smt-comp.github.io/2022/](https://smt-comp.github.io/2022/) In one group of our problems (the SignPattern problems from [9]) the original problems contain a \(\sqrt{5}\). Mathematica's CAD implementation could handle these, but algebraic numbers are not permitted within the definition of QF_NRA in general. Thus we introduced the clauses \(y^{2}=5\wedge y>0\) to bring the problem into QF_NRA. We note that the iteration of these clauses created high exponents for the pseudo-variable \(y\): this was left for the solvers to handle. ### Solvers The SMT solvers used in this benchmarking are Z3 (v 4.8.8) [19] and Yices (v 2.6.4) [20], which both utilise the NLSAT algorithm [6] for QF_NRA; and CVC5-Linux [21] (v 1.0.0), which uses the Cylindrical Algebraic Coverings algorithm for QF_NRA [5]. These three were selected as the strongest performers on QF_NRA in recent years. We also evaluated some of the tools in the Computer Algebra Systems Maple and Mathematica: the versions used are Maple 2022 and Mathematica 12.0.0. In Maple, we used the RegularChains:-QuantifierElimination command [22] to answer the quantified calls in the format (1). We also used the soon-to-be-released Maple package QuantifierElimination [23]. The former utilises CAD constructed via triangular sets technology and the latter CAD with Lazard projection interlaced with cubic virtual term substitutions. In Mathematica, we used the CAD command [24], as was used by Kauers' ProveInequality originally; the QE function Resolve, which also utilises other QE methods such as virtual term substitution; and the meta-solver Reduce, which in addition makes use of Mathematica's other solving tools. Besides Maple's RegularChains implementation, all the other functions and solvers accepted inputs with rational functions. ### Benchmarking Methodology In general we followed the methodology explained in [25].
All benchmarks were undertaken on a computer running openSUSE Leap 15.3 with 16GB of RAM and an Intel Xeon CPU E5-1650 v3 running at 3.50 GHz. All functions were given 20 minutes to attempt each of these problems. We display our results visually using survival plots. To produce these we first solve each problem \(q_{i}\), noting the time \(t_{i}\) (up to our chosen threshold of 1200 seconds). Then for each solver we sort the \(t_{i}\) into increasing order, discard the timed-out problems, and plot points \((k,\sum_{i=1}^{k}t_{i})\). This approach does not guarantee that the same problems are answered within the chosen threshold by different implementations. However, survival plots effectively encapsulate a lot of information about the success rate on the cumulative problem set and the total time taken on the successfully solved problems. ## 5 Analysis ### Overall performance Figures 1 and 2 show the survival plots on different scales of the solver time. It is clear that, for this dataset, Z3 is superior: it timed out on only one example. It is followed by Yices (timed out on 2 examples), then the various implementations in Mathematica (4, 7 and 9 timeouts for Resolve, Reduce and CAD respectively), and finally CVC5 (16 timeouts). Figure 1: Survival plot of benchmarks with the time scale up to 8000 seconds. The two Maple functions performed far less well: RegularChains did not accept rational functions at all, and both functions took far longer to reach their conclusions. We do note that Maple also has direct calls to Z3 available via SMTLIB:-Satisfiable. It is not surprising that the SMT solvers excel on satisfiability problems compared to full QE implementations using CAD. Satisfiability is a sub-problem of QE with lower complexity. What is surprising is that Mathematica's QE is competitive with the SMT solvers. The local projections used [26] may offer similar benefits to the model-based SMT searches of [6], [5]; and we also note Mathematica has access to sophisticated logical simplification routines [27]. The DEWCAD project is now working to address the shortcomings in Maple by building into Maple dedicated algorithms for satisfiability problems, similar to those implemented in SMT solvers [28]. ### Algebraic Number Substitutions We suspect some timeout problems are due to a failure to substitute for algebraic numbers in the problems described in the final paragraph of Section 4.1. In the SignPattern problems discussed there, the exponent of a variable \(y\) (introduced to do bookkeeping of \(\sqrt{5}\)) in polynomials gets very high. The difficulty of these problems drops immensely if a system can identify and utilize the degree-two equational constraint \(y^{2}=5\). We believe Z3 does this substitution and lowers the cost of calculations immensely. We also believe that most implementations would have been able to answer these questions in a matter of seconds if they were to do this preprocessing before asking for the satisfiability. For example, the CAD implementation of Mathematica can answer the SignPattern\_Lemma4a-f examples from the dataset in about half a minute to a minute each, but when \(y^{2}=5\) is used and the degree of \(y\) is reduced to only linear powers, these numbers drop to under 10 seconds each. We note that CVC5 performs particularly badly on these problems. Figure 2: Survival plot of benchmarks with the time scale up to 200 seconds.
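As an illustration of the kind of preprocessing we have in mind (a sketch of ours, with an arbitrary example polynomial, not one of the benchmarks), the degree reduction can be done with a single polynomial remainder:

```python
# A small illustration (ours) of the preprocessing discussed above: reduce high
# powers of the bookkeeping variable y modulo y^2 - 5 before calling a solver.
import sympy as sp

y, a = sp.symbols('y a')
p = 3*y**7 + a*y**4 - y**3 + 2     # an arbitrary polynomial, high powers of y
reduced = sp.rem(p, y**2 - 5, y)   # polynomial remainder in the variable y
print(reduced)                     # 25*a + 370*y + 2: only linear powers of y
```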
### Curiosities On our examples, CVC5 is outperformed by Z3 and Yices overall, and it is outperformed by Mathematica for large compute times (see Figure 1). This is somewhat at odds with the SMT Competition results of 20213. In addition to the algebraic numbers issue above, the poor performance may also be down to the presence of rational functions. Although CVC5 accepts rational functions in its input, we do not think that it does much preprocessing. The smtlib Maple package that was used to translate these examples to the SMT-LIB language adds clauses to keep the denominators non-zero. Therefore, we never experience CVC5 encountering a division by zero and quitting or throwing an exception. However, we observe that it takes a much longer time, and even times out on occasion, for the problems with rationals where others do not. Footnote 3: [https://smt-comp.github.io/2021/results.html](https://smt-comp.github.io/2021/results.html) Mathematica's Resolve and Reduce solve slightly more problems than its CAD procedure, but at one point CAD overtakes Reduce. That is, sometimes the cost of the extra considerations Reduce makes hinders its success (see Figure 3a). This indicates that there is scope for a better meta-algorithm to decide when Reduce resorts to CAD. Another curious observation between the front runners Z3 and Yices is that Z3 is actually slower than Yices on the SAT problems. But since Z3 was much faster to identify that a problem is unsatisfiable, and there were more UNSAT problems in the dataset, it gained victory overall; see Figure 3b. To the best of our knowledge Z3 and Yices both rely on NLSAT as the underlying theory algorithm, which suggests the difference lies either in the heuristics inside NLSAT, or in the other, incomplete, methods tried first. Figure 3: Scatter plots of noticeable time differences. Runtimes measured in seconds. Blue data points are SAT examples and red are UNSAT. One can also observe from Figure 1 that the QuantifierElimination Maple package is cumulatively slower than RegularChains:-QuantifierElimination on this problem set, eventually overtaking it only due to its handling of rational functions. Nevertheless, even when only considering examples with polynomial entries, QuantifierElimination is faster on 14% of the examples. These examples might be where the virtual term substitution can be applied to make a significant difference. ### Effects of Denominator Clearance Finally, we observe that _how_ we clear denominators affects conclusions about the best solver. We focus our attention on the rational function calls with their denominators cleared using (12) or (13). For polynomial calls acquired by (12), Z3 is still the best solver but the second-best solver changes hands from Yices to Mathematica's Resolve function, both in time and in the number of problems solved. However, when we focus on the rational call images under (13) denominator clearance, we see that Mathematica's Resolve solves one more problem than Z3 and Yices. ## 6 Conclusions Our first conclusion is that SMT solvers do very well on most of the examples in this problem set, outperforming computer algebra systems designed to tackle broader QE problems (Section 5.1). We also observe that the solvers perform differently on this new dataset than they did on the QF_NRA section of the SMT-LIB overall in the most recent competition. This shows us that the dataset offers some new characteristics, and the new benchmarks continue the much-needed diversification of the QF_NRA benchmarks.
They also expose some interesting strengths and weaknesses of the solvers that developers may find worth studying (Section 5.3). Our second conclusion is that there is a need for further work on the SMT-LIB language for QF_NRA to decide how best to deal with rational functions. We observed that the choice of how we clear denominators affects conclusions over the best solver (Section 5.4). At the moment, SMT-LIB seems to suggest the user should make this choice, but would it not be more appropriate for the solver to do it? It clearly introduces scope for new heuristics that researchers can explore. The authors support the inclusion of rational function calls in the SMT-LIB language, with a semantics that implies the denominator be non-zero. But this must be defined so that the meaning is mathematically consistent and conflicting results from solvers are avoided. Our third conclusion is in a similar vein: we suggest that SMT-LIB consider allowing the use of algebraic numbers in the input (Section 5.2). There are 21 examples under the SignPattern header, where we replaced \(\sqrt{5}\) with a variable \(y\) and added the two clauses \(y^{2}=5\wedge y>0\). Not only that, we let the iterations grow the degree of \(y\) and left the preprocessing to the solvers. On this dataset the usually competitive CVC5 performed poorly, but if we exclude this 21-problem subset, then among the polynomial calls CVC5 beats the Mathematica methods in cumulative time. Allowing algebraic numbers in the problem statement stretches the definition of polynomial (usually assumed to have rational coefficients). But many of the theory algorithms such as CAD can handle these, and they can be encoded into actual polynomials. Having the user do the encoding can make the problems artificially harder for solvers. ### Acknowledgements The authors would like to thank Manuel Kauers for providing a modified version of his ProveInequality function which exposed the CAD calls, easing the creation of the dataset. All three authors are supported by the EPSRC DEWCAD Project (_Pushing Back the Doubly-Exponential Wall of Cylindrical Algebraic Decomposition_): JHD and AU by grant number EP/T015713/1 and ME by grant number EP/T015748/1. AU also acknowledges partial support from FWF grant P-34501N.
2307.16749
Separable mixing: the general formulation and a particular example focusing on mask efficiency
The aim of this short note is twofold. We formulate the general Kermack-McKendrick epidemic model incorporating static heterogeneity and show how it simplifies to a scalar Renewal Equation (RE) when separable mixing is assumed. A key feature is that all information about the heterogeneity is encoded in one nonlinear real valued function of a real variable. Inspired by work of R. Pastor-Satorras and C. Castellano, we next investigate mask efficiency and demonstrate that it is straightforward to rederive from the RE their main conclusion, that the best way to protect the population as a whole is to protect yourself. Thus we establish that this conclusion is robust, in the sense that it also holds outside the world of network models.
M. C. J. Bootsma, K. M. D. Chan, O. Diekmann, H. Inaba
2023-07-31T15:16:48Z
http://arxiv.org/abs/2307.16749v1
# Separable mixing: the general formulation and a particular example focusing on mask efficiency ###### Abstract. The aim of this short note is twofold. We formulate the general Kermack-McKendrick epidemic model incorporating static heterogeneity and show how it simplifies to a scalar Renewal Equation (RE) when separable mixing is assumed. A key feature is that all information about the heterogeneity is encoded in one nonlinear real valued function of a real variable. Inspired by work of R. Pastor-Satorras and C. Castellano, we next investigate mask efficiency and demonstrate that it is straightforward to rederive from the RE their main conclusion, that the best way to protect the population as a whole is to protect yourself. Thus we establish that this conclusion is robust, in the sense that it also holds outside the world of network models. Keywords. Kermack-McKendrick, epidemic model, heterogeneity, separable mixing, mask efficiency. \({}^{1}\)Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht University, Utrecht, The Netherlands \({}^{2}\)Department of Mathematics, Faculty of Science, Utrecht University, Utrecht, The Netherlands \({}^{3}\)Korteweg-de Vries Institute, University of Amsterdam, Amsterdam, The Netherlands \({}^{4}\)Transtrend BV, Rotterdam, The Netherlands \({}^{5}\)Faculty of Education, Tokyo Gakugei University, Koganei-shi, Tokyo, Japan Corresponding author: Kit Ming Danny Chan ([email protected]). ## 1. Introduction The work described below was triggered when the third author of the present paper attended the lecture of R. Pastor-Satorras during the 'Workshop on Epidemic Modelling: Current Challenges' in Girona, 19-21 June 2023. This lecture reported on the models, methods and results of the paper [9] and culminated in a powerful qualitative insight: masks that protect the wearer against infection are, also from a public health perspective, more efficient than masks that, if the wearer is infectious, protect their contacts against infection. This conclusion is derived in the context of network models. For quite a while already, the present authors have been working on the manuscript [1], which aims to provide a general survey of various effects of (mainly static) heterogeneity. A natural question arose: is it possible to sustain the qualitative insight by rederiving it in the context of homogeneous mixing models? As we show below, the methodology developed in our manuscript in preparation allows us to easily provide an affirmative answer! ## 2. Formulation of a comprehensive model for epidemic outbreaks in heterogeneous host populations By using the word 'outbreak', we imply that demographic turnover is ignored and that infection leads to permanent immunity. Host individuals are characterized by a trait \(x\) taking values in a set \(\Omega\). We assume that \(\Omega\) is a measurable space, meaning that it comes equipped with a \(\sigma\)-algebra. We introduce a positive measure \(\Phi\) on \(\Omega\) to describe the distribution of the trait in the host population. We normalize \(\Phi(\Omega)=1\) and denote the host population size by \(N\). For a concrete example see Section 4 below. A major restriction is that the trait of an individual does not change during the outbreak (so if the trait corresponds to age, the assumption is that the duration of the outbreak is so short that we can ignore individuals becoming older while it lasts).
Let \(s(t,x)\), with \(s(-\infty,x)=1\), denote the probability that an individual with trait \(x\) is susceptible at time \(t\). When the NUMBER of infected individuals is small, demographic stochasticity has a large impact and cannot be ignored. Our description starts when a small FRACTION of the very large host population is infected. With an informal appeal to the Law of Large Numbers, we then also interpret \(s(t,x)\) as the FRACTION of individuals with trait \(x\) that is susceptible at time \(t\). It follows that \[s(t,x)=\exp\left(-\int_{-\infty}^{t}F(\tau,x)d\tau\right) \tag{2.1}\] with \(F\) the force of infection as a function of time and trait. In the spirit of [5] (for a reformulation in modern language see [2]) we introduce as the key modelling ingredient \[A(\tau,x,\xi)=\text{the expected contribution to the force of infection on an individual with trait }x\text{ of an individual with trait }\xi\text{ that became infected }\tau\text{ units of time ago.} \tag{2.2}\] Here \(A\) is a measurable non-negative function mapping \(\mathbb{R}_{+}\times\Omega\times\Omega\) into \(\mathbb{R}_{+}\) and \(A\) is integrable with respect to \((\tau,\xi)\) over \(\mathbb{R}_{+}\times\Omega\). The formula \[F(t,x)=N\int_{0}^{\infty}\int_{\Omega}A(\tau,x,\xi)F(t-\tau,\xi)s(t-\tau,\xi)\Phi(d\xi)d\tau, \tag{2.3}\] expresses the force of infection as a sum of contributions of individuals that were infected time \(\tau\) ago while having trait \(\xi\). By integrating (2.3) over time, interchanging the integrals, using the differentiated version of (2.1) to evaluate the time integral, and inserting the result into the right-hand side of (2.1), we arrive at the nonlinear abstract Renewal Equation (RE) \[s(t,x)=\exp\left(-N\int_{0}^{\infty}\int_{\Omega}A(\tau,x,\xi)[1-s(t-\tau,\xi)]\Phi(d\xi)d\tau\right) \tag{2.4}\] Equation (2.4) provides a concise representation of a rather general class of models. For quantitative work the discrete time variant introduced in [3] might be more suitable, especially when \(\Omega\) is (or can be approximated, in some sense, by) a finite set, see [6, 7, 8] for steps in this direction. An alternative way to increase the tractability is to assume separable mixing, or, in other words, to assume that \(A\) is a product of a function of \(x\) and a function of \((\tau,\xi)\), reflecting that the properties of the susceptible individual and the infected individual have independent influence on the likelihood of an encounter and concomitant transmission. We shall go one step further, and assume that \(A\) is the product of three factors, the functions \(a(x)\), \(b(\tau)\) and \(c(\xi)\). So also the age of infection and the trait of the infected individual are assumed to have independent influence on transmission. ## 3. Separable mixing When \[A(\tau,x,\xi)=a(x)b(\tau)c(\xi), \tag{3.1}\] it follows straight away from (2.3) that the force of infection factorizes as a product of \(a(x)\) and an unknown function of time. The same holds for the cumulative force of infection and accordingly we put \[s(t,x)=e^{-a(x)w(t)}, \tag{3.2}\] and find that \(w\) should satisfy the scalar nonlinear RE \[w(t)=\int_{0}^{\infty}b(\tau)\Psi(w(t-\tau))d\tau, \tag{3.3}\] where \(\Psi:\mathbb{R}\to\mathbb{R}\) is defined by \[\Psi(w):=N\int_{\Omega}c(\eta)(1-e^{-a(\eta)w})\Phi(d\eta). \tag{3.4}\] In the 'trivial' case that both \(c\) and \(a\) are identically equal to one, all individuals have identical susceptibility as well as expected infectiousness, so, after all, there is no heterogeneity.
In this case \[\Psi(w)=N(1-e^{-w}), \tag{3.5}\] and (3.3) is the standard Kermack-McKendrick RE as, for instance, presented in [2]. So (3.3) tells us how, in the separable mixing case, the various components of heterogeneity, viz., susceptibility \(a\), infectiousness \(c\) and trait distribution \(\Phi\), affect the nonlinearity in the RE. (Incidentally, in [4], it is shown how to efficiently derive compartmental models that incorporate heterogeneity, by choosing in (3.3) functions \(b\) that are a matrix exponential sandwiched between two vectors.) To investigate the initial phase of an outbreak, we linearize at the disease-free steady state \(w=0\), which amounts to replacing \(\Psi(w)\) by \(\Psi^{\prime}(0)w\). Inserting the trial solution \(w(t)=e^{\lambda t}\) we obtain the Euler-Lotka equation \[1=\Psi^{\prime}(0)\int_{0}^{\infty}b(\tau)e^{-\lambda\tau}d\tau, \tag{3.6}\] which has a unique positive solution \(\lambda=r\) whenever the Basic Reproduction Number \(R_{0}\), given by \[R_{0}=\Psi^{\prime}(0)\int_{0}^{\infty}b(\tau)d\tau, \tag{3.7}\] exceeds one. (The non-negativity of \(b\) guarantees that in the complex plane \(r\) is the right-most root of (3.6); for \(R_{0}<1\) there exists a solution \(r<0\) provided the rhs of (3.6) assumes, on the real axis, values greater than one; a sufficient condition for this to happen is that \(b\) has compact support.) Note that \[\Psi^{\prime}(0)=N\int_{\Omega}c(\eta)a(\eta)\Phi(d\eta). \tag{3.8}\] The Herd Immunity Threshold (HIT) is, by definition, reached when \(w\) assumes the value \(\bar{w}\) such that the reproduction number corresponding to the situation in which \(\Psi^{\prime}(0)\) is replaced by \(\Psi^{\prime}(\bar{w})\) equals one (note that after reaching the HIT there might still be a high incidence, simply because the reservoir of already infected individuals generates a considerable force of infection; but the contents of the reservoir will gradually diminish once the HIT is reached). The HIT itself is defined as \(\bar{s}\), where \(\bar{s}\) is the fraction of the population that is still susceptible when \(w\) assumes the value \(\bar{w}\). Hence \[\bar{s}=\int_{\Omega}e^{-a(x)\bar{w}}\Phi(dx), \tag{3.9}\] with \(\bar{w}\) the unique (since \(\Psi^{\prime\prime}(w)<0\)) solution of \[1=\Psi^{\prime}(\bar{w})\int_{0}^{\infty}b(\tau)d\tau. \tag{3.10}\] For \(t\to\infty\), \(w\) tends to \(w(\infty)\) characterized by \[w(\infty)=\Psi(w(\infty))\int_{0}^{\infty}b(\tau)d\tau=\frac{\Psi(w(\infty))}{\Psi^{\prime}(0)}R_{0} \tag{3.11}\] and the fraction of the population that escapes is accordingly given by \[s(\infty)=\int_{\Omega}e^{-a(x)w(\infty)}\Phi(dx). \tag{3.12}\] Note that (3.11) implies that \(\Psi^{\prime}(w(\infty))<\frac{\Psi^{\prime}(0)}{R_{0}}\) (since \(\Psi(y)>y\Psi^{\prime}(y)\) for \(y>0\)) and hence that \(w(\infty)>\bar{w}\). In the next section we shall specialize the model ingredients \(\Omega\), \(\Phi\), \(a\) and \(c\) such that they reflect a situation in which a fraction \(f\) of the population wears (all the time) a mask and that wearing a mask reduces, potentially, both the susceptibility and the infectiousness. ## 4. Efficiency of masks Consider a population in which a fraction \(f\) of the individuals wears a mask (whenever they are in a situation where they can come into contact with other individuals) while the complementary fraction \(1-f\) never wears a mask. To describe this distinction, we let \(\Omega\) consist of two points, indicated by \(1\) and \(2\).
We label the individuals that do not wear a mask \(1\) and those who do, we label \(2\). We specify: \[\Phi(1)=1-f\quad\text{and}\quad\Phi(2)=f. \tag{4.1}\] We assume that wearing a mask is not correlated with any property that has influence on the contact process (in principle one could imagine that the contact process is assortative, in the sense that mask wearers meet disproportionately often with other mask wearers; but by this assumption we explicitly exclude such effects). Accordingly, we adopt (3.1). Noting that this decomposition provides the freedom of incorporating multiplicative constants into the factor \(b\), we normalize \(a\) and \(c\) by choosing: \[a(1)=1\quad\text{and}\quad c(1)=1. \tag{4.2}\] The values of \(a(2)\) and \(c(2)\) then describe the relative susceptibility and infectiousness of those who wear a mask. The idea that a mask offers protection is reflected in our assumption that these values lie in the interval \([0,1]\). The aim of our analysis is to investigate the influence of these values on the epidemic outbreak. Therefore we introduce parameters \(\epsilon_{1}\) and \(\epsilon_{2}\) and put: \[a(2)=\epsilon_{1}\quad\text{and}\quad c(2)=\epsilon_{2}. \tag{4.3}\] It follows that: \[\Psi(w)=N\left[(1-f)(1-e^{-w})+f\epsilon_{2}(1-e^{-\epsilon_{1}w})\right], \tag{4.4}\] and \[\Psi^{\prime}(w)=N\left[(1-f)e^{-w}+f\epsilon_{1}\epsilon_{2}e^{-\epsilon_{1}w} \right]. \tag{4.5}\] In succession, we now consider the initial phase, the HIT and the final size, focusing on the (a)symmetry of the impact of the two parameters \(\epsilon_{1}\) and \(\epsilon_{2}\). As (3.6) and (3.7) show, the crucial quantities for the initial phase are \(b(\tau)\) and \(\Psi^{\prime}(0)\). From (4.5) we deduce: \[\Psi^{\prime}(0)=N\left[1-f+f\epsilon_{1}\epsilon_{2}\right]. \tag{4.6}\] It follows that in the initial phase of an outbreak the two protection factors carry equal weight, in the sense that both the reproduction number \(R_{0}\) and the Malthusian parameter \(r\) depend only on their product. Motivated by this observation, we shall keep the product constant, say \[\epsilon_{1}\epsilon_{2}=\epsilon, \tag{4.7}\] when investigating the HIT and the final size. **Theorem 4.1:** Assume (4.7) with \(\epsilon\in(0,1)\). The HIT \(\bar{s}\), defined in (3.9), is a decreasing function of \(\epsilon_{1}\). Proof.: Define: \[G(w,\epsilon_{1})=(1-f)e^{-w}+\epsilon fe^{-\epsilon_{1}w}, \tag{4.8}\] then (3.10) can be rewritten as: \[G(\bar{w},\epsilon_{1})=\left(N\int_{0}^{\infty}b(\tau)d\tau\right)^{-1}. \tag{4.9}\] Since \[\operatorname{D}_{1}G(w,\epsilon_{1})=-(1-f)e^{-w}-\epsilon_{1}\epsilon fe^{- \epsilon_{1}w}<0 \tag{4.10}\] \[\operatorname{D}_{2}G(w,\epsilon_{1})=-w\epsilon fe^{-\epsilon_{1}w}<0 \tag{4.11}\] we have \[\frac{d\bar{w}}{d\epsilon_{1}}(\epsilon_{1})=-\left(\operatorname{D}_{1}G( \bar{w},\epsilon_{1})\right)^{-1}\operatorname{D}_{2}G(\bar{w},\epsilon_{1})<0. \tag{4.12}\] Next observe that the expressions for \(\bar{s}\) and for \(G(\bar{w},\epsilon_{1})\) differ only by a factor \(\epsilon\) in the last term. To exploit this, we rewrite \(\mathrm{D}_{1}G\frac{d\bar{w}}{d\epsilon_{1}}+\mathrm{D}_{2}G=0\) as \[-\epsilon_{1}fe^{-\epsilon_{1}\bar{w}}\frac{d\bar{w}}{d\epsilon_{1}}(\epsilon_ {1})-\bar{w}fe^{-\epsilon_{1}\bar{w}}=\frac{1}{\epsilon}(1-f)e^{-\bar{w}}\frac {d\bar{w}}{d\epsilon_{1}}(\epsilon_{1}). 
\tag{4.13}\] Since \[\frac{d\bar{s}}{d\epsilon_{1}}(\epsilon_{1})=-(1-f)e^{-\bar{w}}\frac{d\bar{w}}{d\epsilon_{1}}(\epsilon_{1})-\epsilon_{1}fe^{-\epsilon_{1}\bar{w}}\frac{d\bar{w}}{d\epsilon_{1}}(\epsilon_{1})-\bar{w}fe^{-\epsilon_{1}\bar{w}} \tag{4.14}\] we find \[\frac{d\bar{s}}{d\epsilon_{1}}(\epsilon_{1})=(\frac{1}{\epsilon}-1)(1-f)e^{-\bar{w}}\frac{d\bar{w}}{d\epsilon_{1}}(\epsilon_{1})<0, \tag{4.15}\] since \(0<\epsilon<1\). We conclude that we should minimize \(\epsilon_{1}\) to maximize the susceptible fraction upon reaching the HIT or, in other words, we should maximize self protection. **Theorem 4.2:** Assume (4.7) with \(\epsilon\in(0,1)\). The fraction \(s(\infty)\) that is still susceptible after the outbreak, defined in (3.12), is a decreasing function of \(\epsilon_{1}\). Sketch of the proof: Define \[H(w,\epsilon_{1})=(1-f)\frac{1-e^{-w}}{w}+\epsilon f\frac{1-e^{-\epsilon_{1}w}}{\epsilon_{1}w} \tag{4.16}\] then (3.11) can be rewritten as the equation \[H(w(\infty),\epsilon_{1})=\left(N\int_{0}^{\infty}b(\tau)d\tau\right)^{-1}. \tag{4.17}\] Using that \(\frac{d}{dx}\frac{1-e^{-x}}{x}<0\) for \(x>0\) one can copy the reasoning in the proof of Theorem 4.1 concerning \(G\) to \(H\) and derive that both \(w(\infty)\) and \(s(\infty)\) are decreasing functions of \(\epsilon_{1}\). From (3.12) we have \[s(\infty)=(1-f)e^{-w(\infty)}+fe^{-\epsilon_{1}w(\infty)}. \tag{4.18}\] Since \(w(\infty)\) is a decreasing function of \(\epsilon_{1}\) the escape probability for those who do NOT wear a mask, represented by \(e^{-w(\infty)}\), increases with \(\epsilon_{1}\). From Theorem 4.2 it follows then that the escape probability of those who DO wear a mask, represented by \(e^{-\epsilon_{1}w(\infty)}\), decreases strongly enough to make the overall per capita escape probability \(s(\infty)\) decreasing as well. Stated otherwise, maximizing self protection by those who wear a face mask improves the escape probability for themselves (Figure 1a) and the population as a whole (Figure 2), but reduces the escape probability for those who do not wear a mask (Figure 1b). The intuitive 'explanation' of the overall positive effect is that when infection of an individual is prevented, automatically the secondary infections that potentially are caused by this individual are prevented. In other words, self protection occurs one step earlier in a chain. ## 5. Concluding remarks From a strictly medical point of view, the chief aim of vaccination is to protect individuals against disease. From a public health perspective, however, one is interested in the effect of vaccination on transmission. Vaccination may reduce both the probability of getting infected during an encounter with an infectious individual and the infectiousness, should infection nevertheless occur. Both reductions help to lower the force of infection and thus to diminish the size of an outbreak. A mask is not that different from a vaccine: it too reduces both susceptibility and infectiousness. Different constructions may be more efficient in one or the other of these reductions, see [9]. This then leads to the question of what one should strive for. In [9] a clear conclusion is reached in the context of a SIR configuration network model (with 'random' distribution of the masks, i.e., with a form of proportionate mixing): if one keeps the product of the two reduction factors constant, one should maximize the reduction of susceptibility in order to achieve a maximal reduction of the final size. Figure 1. Improvement factor of escape probability for type 2 individuals who always wear a mask and for type 1 individuals who never wear a mask.
We find the escape probabilities \(s(\infty,1)\) and \(s(\infty,2)\) by first numerically solving equation (3.11). Then we compute the improvement factor of the escape probability by dividing the escape probability in a population with masks (for fraction \(f\)) by the escape probability in a maskless population. Curves are shown for two choices of \(R_{0}\)(no mask) and two choices of \(\epsilon\), where \(R_{0}\)(no mask) is the basic reproduction number in a maskless population. Note that \(\epsilon_{2}=\epsilon/\epsilon_{1}\) as assumed in (4.7). Here we checked that the same conclusion obtains when one allows, in Kermack-McKendrick spirit, for expected infectiousness described by a general function of time elapsed since exposure and for proportionate mixing of those who do and those who do not wear a mask. A secondary objective of the present paper is to demonstrate the effectiveness of a top-down approach. Before we became aware of [9], we had already formulated a rather general model of an outbreak in a host population with static heterogeneity and we had studied the simplification that derives from assuming proportionate mixing. Thus the present study became, essentially, a fill-in exercise. ## Use of AI tools declaration The authors declare they have not used Artificial Intelligence (AI) tools in the creation of this article. ## Acknowledgments It is a pleasure to thank Joan Saldana and his team for organizing the stimulating Current Challenges Workshop on Epidemic Modelling, Girona 2023, and to thank Romualdo Pastor-Satorras for his inspiring lecture during the workshop. Figure 2. Improvement factor of escape probability for the population as a whole. We find the escape probability \(s(\infty)\) for the population as a whole by first numerically solving equation (3.11). The increase in escape probability is then computed by dividing the escape probability in a population with masks (for fraction \(f\)) by the escape probability in a maskless population. Curves are shown for two choices of \(\epsilon\). In addition we show in Figure 2a the impact of different choices of \(R_{0}\)(no mask), the basic reproduction number in a maskless population, while in Figure 2b we show the impact of different choices of \(f\). Note that \(\epsilon_{2}=\epsilon/\epsilon_{1}\) as assumed in (4.7). ## Conflict of interest The authors declare there is no conflict of interest.
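As a supplement to the record above: the following minimal numerical sketch (ours, not the authors' code) reproduces the kind of computation described in the captions of Figures 1 and 2, solving (3.11) for \(w(\infty)\) in the two-type mask model and evaluating the escape probability (4.18); all parameter values are illustrative only.

```python
# A minimal numerical sketch (ours, not the authors' code) of the computation
# behind Figures 1 and 2: solve (3.11) for w(infinity) in the two-type mask
# model and evaluate the escape probability (4.18). Parameter values are
# illustrative; R0_no_mask is the basic reproduction number without masks.
import numpy as np
from scipy.optimize import brentq

def w_infinity(R0_no_mask, f, eps1, eps2):
    # (3.11) with Psi from (4.4); the population size N cancels out
    psi = lambda w: (1 - f) * (1 - np.exp(-w)) + f * eps2 * (1 - np.exp(-eps1 * w))
    return brentq(lambda w: w - R0_no_mask * psi(w), 1e-9, 50.0)

R0_no_mask, f, eps = 2.5, 0.5, 0.25
for eps1 in (0.25, 0.5, 1.0):                 # keep eps1 * eps2 = eps fixed
    w = w_infinity(R0_no_mask, f, eps1, eps / eps1)
    s_inf = (1 - f) * np.exp(-w) + f * np.exp(-eps1 * w)   # equation (4.18)
    print(f"eps1 = {eps1:4.2f}:  w(inf) = {w:.4f},  s(inf) = {s_inf:.4f}")
    # Theorem 4.2 predicts s(inf) decreases as eps1 grows at fixed eps
```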
2302.14780
Renormalisation group flows connecting a $4-ε$ dimensional Hermitian field theory to a $\mathcal{PT}$-symmetric theory for a fermion coupled to an axion
The renormalisation group flow of a Hermitian field theory is shown to have trajectories which lead to a non-Hermitian Parity-Time ($\mathcal{PT}$) symmetric field theory for an axion coupled to a fermion in spacetime dimensions $D=4-\epsilon$, where $\epsilon >0 $. In this renormalisable field theory, the Dirac fermion field has a Yukawa coupling $g$ to a pseudoscalar (axion) field and there is a quartic pseudoscalar self-coupling $u$. The robustness of this finding is established by considering flows between $\epsilon$ dependent Wilson-Fisher fixed points and also by working to \emph{three loops} in the Yukawa coupling and to \emph{two loops} in the quartic scalar coupling. The flows in the neighbourhood of the non-trivial fixed points are calculated using perturbative analysis, together with the $\epsilon$ expansion. The global flow pattern indicates flows from positive $u$ to negative $u$; there are no flows between real and imaginary $g$. Using summation techniques we demonstrate a possible non-perturbative $\mathcal{PT}$-symmetric saddle point for $D=3$.
Lewis Croney, Sarben Sarkar
2023-02-28T17:25:40Z
http://arxiv.org/abs/2302.14780v4
Renormalisation group flows in a \(\boldsymbol{4-\epsilon}\) dimensional \(\boldsymbol{\mathcal{PT}}\)-symmetric field theory for a fermion coupled to an axion ###### Abstract The role of \(\mathcal{PT}\) symmetry in an effective field theory for fermions and axions is considered in spacetime dimensions \(D=4-\epsilon\), where \(\epsilon>0\). The renormalisable field theory is for a Dirac field and a pseudoscalar field, whose interactions are a Yukawa coupling \(g\) and a quartic scalar self-coupling \(u\). The field theory is Hermitian or non-Hermitian (but \(\mathcal{PT}\)-symmetric) depending on whether the Yukawa coupling is imaginary or real and the quartic coupling is positive or negative. The introduction of \(\epsilon\) allows a controlled investigation of the validity of recent work which indicates that the quartic coupling and the square of the Yukawa coupling (regarded as a function of scale) may change sign in a renormalisation group flow. Renormalisation group flows in coupling constant space are investigated using the Mathematica package RGBeta, which calculates beta functions up to three loops in the Yukawa coupling and up to two loops in the quartic scalar coupling. Fixed points are found for non-zero \(\epsilon\), which are classified according to their linear stability. The flows in the neighbourhood of the non-trivial fixed points are calculated using perturbative analysis, together with the \(\epsilon\) expansion. The global flow indicates flows from positive \(u\) to negative \(u\); there are no flows from imaginary \(g\) to real \(g\). Using summation techniques for divergent series, we demonstrate a possible \(\mathcal{PT}\)-symmetric saddle fixed point for \(D=3\). keywords: \(\mathcal{PT}\) symmetry, quantum field theory, epsilon expansion, renormalisation group, non-Hermiticity, axion + Footnote †: journal: Nuclear Physics B ## 1 Introduction Non-Hermitian Hamiltonians govern systems that are dissipative in general and so they are typically not in equilibrium. In the presence of a discrete \(\mathcal{PT}\) symmetry, it is shown in [1; 2] that non-Hermitian systems allow a new possibility for unitary time evolution [3] in quantum mechanics (which is a one-dimensional field theory) where self-adjointness of operators for observables is implemented with respect to an inner product different from the Dirac inner product. More recently there have been proposals that \(\mathcal{PT}\)-symmetric [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] quantum field theory (QFT) in four spacetime dimensions \(D=4\) may be used for model building of beyond the Standard Model (BSM) physics. The quantum aspects of such models are only starting to be explored. Once higher dimensional field theory is considered, it is also necessary to confront issues of renormalisation. A promising framework for \(\mathcal{PT}\) quantum field theory is based on path integrals [19; 20; 21] and in particular the complex deformation of paths [2; 22; 23]. A simple renormalisable field-theoretic model for axion physics is considered here [24; 25; 21]. The interaction terms have a conventional form but can be tuned to have values which render the QFT no longer Hermitian, but still \(\mathcal{PT}\)-symmetric (as in [26]). It provides a framework for studying the interplay of renormalisation and non-Hermiticity in the presence of a fermion and a pseudoscalar near four dimensions. 
One of the early successes of the \(\mathcal{PT}\)-symmetric approach addressed renormalisation of the field-theoretic Lee model [27; 28] at strong coupling, which led to a non-Hermitian but \(\mathcal{PT}\)-symmetric [26; 29] theory1. The Lee model is not a conventional crossing-symmetric field theory and so the significance of the emergence of \(\mathcal{PT}\) symmetry is not clear in a wider context2. In order to understand, in a controlled way, the interplay of renormalisation and \(\mathcal{PT}\) symmetry in relativistic four-dimensional QFT models, it is necessary to introduce two tools: the \(\epsilon\) expansion [32] in \(D=4-\epsilon\) spacetime dimensions, and the perturbative renormalisation group [33]. The applicability of perturbation theory to such \(\mathcal{PT}\)-symmetric QFTs is discussed in detail in a semiclassical evaluation of the path integral [19; 21] based on contributions from trivial and non-trivial saddle points. Perturbation theory, using Feynman diagrams, gives the contribution from the trivial saddle point in a path integral formulation. The contributions from the non-trivial saddle points (due to instantons) are resurgent and subdominant in the weak coupling limit [34]. However, the instanton solutions give rise to imaginary contributions to odd-point Green's functions which would otherwise vanish [20; 34]. Hence our approach, which ignores the subdominant contributions from non-trivial saddle points, is based on perturbation theory around the trivial saddle point, which is valid for the study of all weak coupling fixed points. Footnote 1: A different non-perturbative approach to renormalisation based on path integrals has also recently appeared [29; 30]. Footnote 2: More recently this issue has been discussed in toy one-dimensional models based on effective theory [31] using quantum mechanics. ### \(\mathcal{PT}\) symmetry and the taming of instabilities and ghosts An example of renormalisation leading to a theory with ghost states is provided by the Lee model [27; 28]. The reinterpretation of the Lee model as a \(\mathcal{PT}\)-symmetric field theory, however, leads to a unitary theory [26; 27; 28] with no ghost states. The instability of the vacuum in the Standard Model of particle physics [35; 36; 37; 38; 39; 40; 41] is another example where the effects of renormalisation lead to imaginary energies of states or false vacua3. There has been a proposal that such effective theories, when they have \(\mathcal{PT}\) symmetry, do not have unstable states if treated within a \(\mathcal{PT}\)-symmetric context [31]. The effects of renormalisation at weak coupling are embodied in the perturbative renormalisation group, where the scale dependence of couplings is manifest and the couplings run with the scale. This aspect is the subject of our paper. There are preliminary indications [19; 42] that there is flow from Hermitian regions of parameter space to non-Hermitian \(\mathcal{PT}\)-symmetric regions. These findings may, a priori, be due to limitations of perturbation theory. In this paper, we address this limitation by considering the behaviour of the renormalisation group flows for small \(\epsilon>0\) in a spacetime of dimension \(D=4-\epsilon\). This gives rise to fixed points which are \(\epsilon\)-dependent. For small \(\epsilon\), the couplings of the flow remain small. Our calculation is facilitated by a recent advance in higher loop calculations of beta functions which is useful for a range of field theories [43; 44]4.
The contribution to the beta function from non-trivial saddle points is a resurgent series and is asymptotically negligible (for small coupling). Furthermore, by working to high order in the \(\epsilon\) expansion, we find that it is possible in one case to extrapolate with reasonable confidence to \(\epsilon=1\) and determine the fixed point and its stability for \(D=3\). This fixed point is a non-Hermitian saddle point. Footnote 4: There are other programs which calculate higher loops such as ARGES [45], which we have also used to check our calculations. For the convenience of the reader, some review material on the quantum mechanics of \(\mathcal{PT}\)-symmetric systems is included. The organisation of this paper is as follows: * We review the role of \(\mathcal{PT}\) symmetry and renormalisation in the Lee model [29]. * We introduce our perturbatively renormalisable conventional field theory model in 4 dimensions [19; 21] involving a single Dirac fermion interacting with a single pseudoscalar (axion) field which has two couplings \(g\) (the Yukawa coupling) and \(u\) the quartic pseudoscalar coupling, which can have values leading to Hermitian and non-Hermitian (but \(\mathcal{PT}\)-symmetric) couplings. This model is a modern relative of the Lee model, but it is not an exactly solvable field theory. * We give expressions for the beta functions for the couplings in the theory using the Mathematica package RGBeta [44], but modified in the standard way since we work in \(D=4-\epsilon\) dimensions5. We solve for the fixed points and determine their stability. Going from \(\epsilon=0\) to non-zero \(\epsilon\) leads to the trivial fixed point spawning three new \(\epsilon\)-dependent fixed points, whose magnitudes are directly controlled by \(\epsilon\). Furthermore, the flow in the neighbourhood of the fixed points is joined together to give a more global flow picture. From this picture, we can see how the Hermitian and non-Hermitian fixed points interact with each other, i.e. how the flow is organised around these fixed points. For one _non-Hermitian_ fixed point the \(\epsilon\) expansion is stable, i.e. the coefficients do not increase rapidly with order, so resummation techniques using Pade approximants [47] lead to a genuine fixed point in \(D=3\), which is not sensitive to variations in the form of Pade approximants used. This fixed point has the stability of a saddle point. Footnote 5: The beta functions are calculated to third order in the Yukawa coupling but second order in the scalar self-coupling. We have checked the beta functions against those found in [19; 45]. For field theories with more than one coupling constant, it is customary to consider loop expansions. The program RGBeta is a general-purpose program applicable to a wide range of field theories. The price for this flexibility is that we do not have the expression for the beta function to three loops in both the Yukawa and quartic scalar couplings [46]. ## 2 The role of renormalisation and \(\mathcal{PT}\) symmetry The Lee model (LM) is a class of soluble simplified field theories used to study non-perturbative renormalisation in a simple context. Because LM involves fermions and a pseudoscalar interacting through a Yukawa interaction, it can be loosely considered as a precursor of the model that we consider. However, it does not possess some important properties of relativistic field theories: the spin-statistics theorem and crossing symmetry do not hold; energy-momentum dispersion relations are not standard.
LM does have coupling constant renormalisation and is thus able to illustrate the emergence of a non-Hermitian but \(\mathcal{PT}\)-symmetric Hamiltonian due to renormalisation. The model has two species of spinless fermions \(N\) and \(V\) interacting with a neutral pseudoscalar \(\theta\). The lack of crossing symmetry is important for the solubility of the model and is manifest since the only allowed interactions are \[V\rightleftharpoons N+\theta. \tag{1}\] It is straightforward to note that such interactions ensure the conservation rules for \(B\) and \(Q\) where * \(B=n_{N}+n_{V}\) * \(Q=n_{V}-n_{\theta}\), and \(n_{N}\) is the number of \(N\) quanta, \(n_{V}\) is the number of \(V\) quanta and \(n_{\theta}\) is the number of \(\theta\) quanta. These conservation laws lead to superselection sectors labelled by \(B\) and \(Q\) and to solutions which become more involved as \(B\) and \(Q\) become larger. One variant of the Lee model [29] in a finite spatial volume \(\Omega\), which is sufficient for illustrative purposes, has, in momentum space, a Hamiltonian \(H\) \[H=H_{0}+H_{int}, \tag{2}\] where 6 Footnote 6: It is possible to have more general continuum models with dispersion in the energy-momentum relations for the fermions so that \[H_{0}=\int d^{3}p\ E_{V}\left(\vec{p}\right)\psi_{V}^{\dagger}\left(\vec{p}\right)\psi_{V}\left(\vec{p}\right)+\int d^{3}p\ E_{N}\left(\vec{p}\right)\psi_{N}^{\dagger}\left(\vec{p}\right)\psi_{N}\left(\vec{p}\right)+\int d^{3}k\ \omega(\vec{k})a^{\dagger}(\vec{k})a(\vec{k}). \tag{3}\] However such models do not reveal any new features which are not present in (4). \[H_{0}=m_{V}\psi_{V}^{\dagger}\psi_{V}+m_{N}\psi_{N}^{\dagger}\psi_{N}+\sum_{\vec{k}}\omega_{\vec{k}}\ a^{\dagger}(\vec{k})a(\vec{k}), \tag{4}\] \[H_{int}=\delta m_{V}\psi_{V}^{\dagger}\psi_{V}-g_{0}\Omega^{-\frac{1}{2}}\sum_{\vec{k}}\frac{u(\omega_{\vec{k}})}{\sqrt{2\omega_{\vec{k}}}}\left(\psi_{V}^{\dagger}\psi_{N}a(\vec{k})+\psi_{N}^{\dagger}\psi_{V}a^{\dagger}(\vec{k})\right), \tag{5}\] \(\omega_{\vec{k}}=\sqrt{\vec{k}^{2}+m^{2}}\) and \(u(\omega_{\vec{k}})\) is a dimensional cut-off function, which is chosen to tend to 0 for large \(\omega_{\vec{k}}\). \(\psi_{V}\), \(\psi_{N}\) and \(a(\vec{k})\) are annihilation operators for \(V\), \(N\) and \(\theta\) quanta. The non-vanishing (anti)commutation relations are \[\left\{\psi_{V}^{\dagger},\psi_{V}\right\}=1, \tag{6}\] \[\left[a_{\vec{k}},a_{\vec{k}^{\prime}}^{\dagger}\right]=\delta_{\vec{k}\vec{k}^{\prime}}. \tag{7}\] \(m_{V},m_{N},\mu\) are renormalised parameters; on analysis of divergences in the scattering amplitudes a mass counterterm \(\delta m_{V}\) (a function of \(g_{0}\)) is required. At the outset, _the theory is Hermitian since \(g_{0}\) is real_. The Hilbert space is spanned by the basis of states of the form \(|n_{V},n_{N},\left\{n_{\vec{k}}\right\}\rangle\). As noted earlier, the Hilbert space has independent sectors labelled by \(B\) and \(Q\) which makes the model soluble. The _case_ \(B=1\) and \(Q=0\), although simple, is adequate to illustrate the emergence of non-Hermiticity. We further simplify by restricting to _just one \(\vec{k}\) mode_, which still allows us to see the emergence of non-Hermiticity. This leads to a quantum mechanical Hamiltonian7 \({\cal H}={\cal H}_{0}+{\cal H}_{1}\) Footnote 7: It should be emphasised that the result, from the analysis of \({\cal H}\), which demonstrates the emergence of non-Hermiticity is present in the fully field-theoretic versions of the Lee model.
\[{\cal H}_{0}=m_{V}\psi_{V}^{\dagger}\psi_{V}+m_{N}\psi_{N}^{\dagger}\psi_{N}+ \mu a^{\dagger}a \tag{8}\] and \[{\cal H}_{1}=\delta m_{V}\psi_{V}^{\dagger}\psi_{V}+g_{0}\left(\psi_{V}^{ \dagger}\psi_{N}a+a^{\dagger}\psi_{N}^{\dagger}\psi_{V}\right). \tag{9}\] The sector with \(B=1\) and \(Q=0\) is spanned by the states \(|1,0,0\rangle\) and \(|0,1,1\rangle\). The eigenstates of \({\cal H}\) are denoted by \(|V\rangle\) and \(|N\theta\rangle\) with associated eigenvalues \(m_{V}\) and \(E_{N\theta}\) given by \[m_{V} ={\frac{1}{2}}\left(m_{N}+\mu+m_{V_{0}}-\sqrt{M_{0}^{2}+4g_{0}^{2}}\right)\] \[E_{N\theta} ={\frac{1}{2}}\left(m_{N}+\mu+m_{V_{0}}+\sqrt{M_{0}^{2}+4g_{0}^{2}}\right) \tag{10}\] where \(M_{0}\equiv m_{N}+\mu-m_{V_{0}}\) and \(m_{V_{0}}\equiv m_{V}+\delta m_{V}\). The wave-function renormalisation constant \(Z_{V}\) is determined [26] through the relation \[\sqrt{Z_{V}}=\langle 0|\psi_{V}|V\rangle \tag{11}\] which leads to [26] \[Z_{V}=\frac{2g_{0}^{2}}{\sqrt{M_{0}^{2}+4g_{0}^{2}}\left(\sqrt{M_{0}^{2}+4g_{ 0}^{2}}-M_{0}\right)}. \tag{12}\] The renormalised coupling constant \(g\) satisfies \[g^{2}=Z_{V}g_{0}^{2}. \tag{13}\] In terms of \(M\equiv m_{N}+\mu-m_{V}\), a renormalised quantity, it is straightforward to see that \[M_{0}=M-\frac{g_{0}^{2}}{M}. \tag{14}\] From (13) and (14) we can deduce the non-perturbative result that \[g_{0}^{2}=\frac{g^{2}}{\left(1-\frac{g^{2}}{M^{2}}\right)}. \tag{15}\] If \(g^{2}>M^{2}\), then the bare coupling can become imaginary and the Hamiltonian is non-Hermitian, but \(\mathcal{PT}\)-symmetric [26]. Explicitly the transformations due to \(\mathcal{P}\) are \[\begin{array}{ccc}\mathcal{P}V\mathcal{P}=-V&\mathcal{P}N\mathcal{P}=-N& \mathcal{P}a\mathcal{P}=-a\\ \mathcal{P}V^{\dagger}\mathcal{P}=-V^{\dagger}&\mathcal{P}N^{\dagger}\mathcal{ P}=-N^{\dagger}&\mathcal{P}a^{\dagger}\mathcal{P}=-a^{\dagger}\end{array} \tag{16}\] and due to \(\mathcal{T}\) are \[\begin{array}{ccc}\mathcal{T}V\mathcal{T}=V&\mathcal{T}N\mathcal{T}=N& \mathcal{T}a\mathcal{T}=a\\ \mathcal{T}V^{\dagger}\mathcal{T}=V^{\dagger}&\mathcal{T}N^{\dagger}\mathcal{ T}=N^{\dagger}&\mathcal{T}a^{\dagger}\mathcal{T}=a^{\dagger}.\end{array} \tag{17}\] We would like to note two things about this result: 1. The non-Hermiticity emerges for large coupling where perturbation theory is not valid. 2. The theory deals with pseudoscalar and fermions with two degrees of freedom. The model that we consider is related to the Lee model. It is a relativistic field theory with a Yukawa interaction, crossing symmetry and a quartic self-interaction of a pseudoscalar (required for perturbative renormalisability). ## 3 The Yukawa model The massive Yukawa model is given by the bare Lagrangian in 3-space and 1-time dimensions in terms of bare parameters with subscript 0:8 Footnote 8: Our Minkowski-metric signature convention is \((+,-,-,-)\). \[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi_{0}\partial^{\mu}\phi_{0}-\frac{M_{0 }^{2}}{2}\phi_{0}^{2}+\bar{\psi}_{0}\left(i\not{\partial}-m_{0}\right)\psi_{0} -ig_{0}\bar{\psi}_{0}\gamma^{5}\psi_{0}\phi_{0}-\frac{u_{0}}{4!}\phi_{0}^{4}. \tag{18}\] \(\mathcal{L}\) is renormalised in four dimensions through mass, coupling constant and wavefunction renormalisations; because we will use dimensional regularisation we will consider the spacetime dimensionality \(D\) to be \(4-\epsilon\), where \(\epsilon>0\) is a small parameter. It is the simplest non-trivial renormalisable model of a Dirac fermion field \(\psi_{0}\) interacting with a pseudoscalar field \(\phi_{0}\). 
In the Dirac representation of \(\gamma\) matrices the standard discrete transformations [48] on \(\psi_{0}\) are \[\mathcal{P}\psi_{0}(t,\vec{x})\mathcal{P}^{-1}=\gamma^{0}\psi_{0}(t,-\vec{x}),\quad\mathcal{T}\psi_{0}(t,\vec{x})\mathcal{T}^{-1}=i\gamma^{1}\gamma^{3}\psi_{0}(-t,\vec{x}), \tag{19}\] where \(\mathcal{T}\) is an antilinear operator. Moreover, under the action of \(\mathcal{P}\) and \(\mathcal{T}\), the pseudoscalar field \(\phi_{0}\left(t,\vec{x}\right)\) transforms as \[\mathcal{P}\phi_{0}\left(t,\vec{x}\right)\mathcal{P}^{-1}=-\phi_{0}\left(t,-\vec{x}\right),\quad\mathcal{T}\phi_{0}\left(t,\vec{x}\right)\mathcal{T}^{-1}=\phi_{0}\left(-t,\vec{x}\right). \tag{20}\] We now come to the pseudoscalar self-interaction term in (18). Its manifestly \(\mathcal{PT}\)-symmetric nature is best appreciated by considering the potential \[u\,\phi_{0}^{2}\left(i\phi_{0}\right)^{\delta}\,, \tag{21}\] for \(u,\delta>0\), in any spacetime dimension \(D\). The parameter being continued is \(\delta\) and not \(u\). As \(\delta\to 2\) the sign in front of \(u\) changes. When the "negative \(u\)" term arises in this way, the connection with \(\mathcal{PT}\) symmetry becomes manifest since we have (20) and \(i\rightarrow-i\) owing to the anti-linearity of \(\mathcal{T}\). Hence, in summary, if \(g_{0}\) is real then the Yukawa term is Hermitian and \(g_{0}^{2}>0\). If \(g_{0}\) is imaginary then the Yukawa term is non-Hermitian but is \({\cal PT}\)-symmetric and so \(g_{0}^{2}<0\). \(u_{0}\) is real but it can be positive or negative: if \(u_{0}>0\) the quartic term is Hermitian, while if \(u_{0}<0\) the quartic term is non-Hermitian. (See A for a discussion of \(D=1\) spacetime using a canonical formulation of quantum theory and construction of a modified inner product.) It should be stressed that in a path integral formulation, it has been argued that explicit construction of an inner product when calculating Green's functions is not necessary [49]. Thus both couplings, \(g\) and \(u\), allow the possibility of showing non-Hermitian but \({\cal PT}\)-symmetric behaviour. In a path integral setting, the interplay of trivial saddle points (where perturbation theory is valid) and non-perturbative saddle points (where instanton effects play a role) has been discussed in detail [20]. The perturbative renormalisation group is unaffected by the non-trivial saddle points [34]. We shall not consider the role of non-trivial saddle points in this paper since we will be considering effects where perturbation theory is valid.
Corresponding to \({\cal L}\), the associated renormalised Lagrangian (in terms of renormalised parameters without the subscript \(0\) and counterterms) is \[{\cal L} = \frac{1}{2}(1+\delta Z_{\phi})\partial_{\mu}\phi\partial^{\mu} \phi-\frac{M_{0}^{2}}{2}(1+\delta Z_{\phi})\phi^{2}+(1+\delta Z_{\psi})\bar{ \psi}\left(i\partial\!\!\!/-m_{0}\right)\psi \tag{22}\] \[-ig_{0}(1+\delta Z_{\psi})\sqrt{1+\delta Z_{\phi}}\bar{\psi} \gamma^{5}\psi\phi-\frac{u_{0}}{4!}(1+\delta Z_{\phi})^{2}\phi^{4},\] where we have introduced the multiplicative renormalisations \(Z_{\phi}\), \(Z_{\psi}\), \(Z_{g}\), \(Z_{u}\), \(Z_{m}\), and \(Z_{M}\) defined through \[\phi_{0} = \sqrt{Z_{\phi}}\phi\equiv\sqrt{1+\delta Z_{\phi}}\phi, \tag{23}\] \[\psi_{0} = \sqrt{Z_{\psi}}\psi\equiv\sqrt{1+\delta Z_{\psi}}\psi,\] (24) \[M_{0}^{2}Z_{\phi} = M^{2}+\delta M^{2}\equiv M^{2}Z_{M},\] (25) \[m_{0}Z_{\psi} = m+\delta m\equiv mZ_{m},\] (26) \[g_{0}Z_{\psi}\sqrt{Z_{\phi}} = g+\delta g\equiv gZ_{g},\] (27) \[u_{0}(Z_{\phi})^{2} = u+\delta u\equiv uZ_{u}. \tag{28}\] The conventional approach to renormalisation involves the regularisation of loop integrals in Feynman diagrams. We shall use dimensional regularisation to evaluate the counterterms, taking \(D=4-\epsilon\) and \(\mu\) as the renormalisation scale. This leads in the standard way to the perturbative renormalisation group (see, for example, [50]). The field theoretic action \(S\) generally depends on these \(\mu\) dependent couplings such that \[S\left[Z\left(\mu\right)^{1/2}\Phi;\mu,g_{i}\left(\mu\right)\right]=S\left[Z \left(\mu^{\prime}\right)^{1/2}\Phi;\mu^{\prime},g_{i}\left(\mu^{\prime} \right)\right] \tag{29}\] where \(Z\left(\mu\right)\) is the wave function renormalisation (generally a matrix) of the generic field \(\Phi\). As an example, for a scalar field theory we can write \[S\left[\phi;\mu,g_{i}\right]=\int d^{D}x\left(-\frac{1}{2}\partial_{\mu}\phi \partial^{\mu}\phi+\sum_{i}\mu^{D-d_{i}}g_{i}O_{i}\left(x\right)\right) \tag{30}\] where \(O_{i}\left(x\right)\) is a local operator of mass dimension \(d_{i}\) and \(g_{i}\) is dimensionless. The \(\mu\) dependence of \(g_{i}\) is determined through functions \(\beta_{i}\left(\left\{g_{j}\right\}\right)\) \[\mu\frac{dg_{i}\left(\mu\right)}{d\mu}=\beta_{i}\left(\left\{g_{j}\right\} \right), \tag{31}\] which are the renormalisation group equations. ### The renormalisation group analysis In terms of \(t=\log\mu\) and \(h=g^{2}\) the renormalisation group beta functions9 are Footnote 9: When \(g\) is pure imaginary, \(h\) is negative and so \(h\) positive or negative distinguishes between Hermitian and \(\mathcal{PT}\)-symmetric cases. The expressions for the beta functions given here are only applicable for \(h>0\) (\(g\) real). Our qualitative conclusions are unaffected by the sign of \(h\). \[\frac{dh}{dt}=\beta_{h}\left(h,u\right)\text{ and }\frac{du}{dt}=\beta_{u} \left(h,u\right) \tag{32}\] where \[\begin{split}\beta_{h}\left(h,u\right)=&-\epsilon h +\frac{1}{(4\pi)^{2}}10h^{2}+\frac{1}{(4\pi)^{4}}\left(-\frac{57}{2}h^{3}-4h^{ 2}u+\frac{1}{6}hu^{2}\right)\\ &+\frac{1}{(4\pi)^{6}}\left(\left[-\frac{339}{8}+222\;\zeta(3) \right]h^{4}+72h^{3}u+\frac{61}{24}h^{2}u^{2}-\frac{1}{8}hu^{3}\right)\end{split} \tag{33}\] and \[\beta_{u}\left(h,u\right)=-\epsilon u+\frac{1}{(4\pi)^{2}}\left(-48h^{2}+8hu+3 u^{2}\right)+\frac{1}{(4\pi)^{4}}\left(384h^{3}+28h^{2}u-12hu^{2}-\frac{17}{3}u^{ 3}\right). \tag{34}\] For clarity, \(\zeta\) is the Riemann zeta function. 
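For numerical experiments with these flows, (33) and (34) can be transcribed directly. The sketch below (function and variable names are ours) also recovers the non-trivial zero of \(\beta_{u}\) on the \(h=0\) axis at \(\epsilon=0\), namely \(u=9(4\pi)^{2}/17\approx 83.6\), which reappears in the next subsection:

```python
import numpy as np
from scipy.special import zeta

def beta_h(h, u, eps):
    # eq. (33): three-loop Yukawa beta function (written for h > 0)
    c2, c4, c6 = (4*np.pi)**-2, (4*np.pi)**-4, (4*np.pi)**-6
    return (-eps*h + 10*c2*h**2
            + c4*(-57/2*h**3 - 4*h**2*u + h*u**2/6)
            + c6*((-339/8 + 222*zeta(3))*h**4 + 72*h**3*u
                  + 61/24*h**2*u**2 - h*u**3/8))

def beta_u(h, u, eps):
    # eq. (34): two-loop quartic beta function
    c2, c4 = (4*np.pi)**-2, (4*np.pi)**-4
    return (-eps*u + c2*(-48*h**2 + 8*h*u + 3*u**2)
            + c4*(384*h**3 + 28*h**2*u - 12*h*u**2 - 17/3*u**3))

# On the h = 0 axis at eps = 0, beta_u = 3u^2/(4pi)^2 - (17/3)u^3/(4pi)^4,
# whose non-trivial zero is u = 9(4pi)^2/17
u_star = 9*(4*np.pi)**2/17
print(u_star, beta_u(0.0, u_star, 0.0))   # 83.60..., ~0
```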
The expressions (33) and (34) for the beta functions have been found from a perturbative calculation to three loops for the Yukawa coupling and two loops for the quartic coupling using the Mathematica package RGBeta [44] and are independent of \(m\) and \(M\)10.

Footnote 10: The flows for \(m\) and \(M\) are dependent on the flows for \(h\) and \(u\) however.

In the next subsections we shall consider:

1. The zeros of the beta functions \(\beta_{u}\) and \(\beta_{h}\), which determine the fixed points of the renormalisation group.
2. The stability of the fixed points, which can be determined from a linearised analysis around the fixed points (except for the trivial fixed point when \(\epsilon=0\)).
3. The full non-linear flows connecting the different fixed points. These flows are instructive, especially for the \(\epsilon\)-dependent fixed points emanating from the trivial fixed point.
4. Once we have an \(\epsilon\) expansion of the fixed points, it is natural to enquire about any possible resummation to determine information about fixed points and their stability at \(D=3\). We have used the method of Pade approximants and made checks on the pole structure [47] in the neighbourhood of \(\epsilon=1\) to determine the trustworthiness of any \(D=3\) fixed point determined this way.

#### 3.1.1 Fixed points for \(\epsilon=0\)

It is customary to denote the fixed point of \(h\) as \(h^{*}\) and the fixed point of \(u\) as \(u^{*}\). However, in the main text, for clarity we will use \(f_{i,h}\) (the fixed point value for \(h\)) and \(f_{i,u}\) (the fixed point value for \(u\)) for our numerical results for the fixed points, given to three significant figures. When \(\epsilon=0\), we have two fixed points:

1. The trivial (or Gaussian) fixed point: \(f_{1,h}=0\) and \(f_{1,u}=0\).
2. \(f_{2,h}=0\) and \(f_{2,u}\simeq 83.6\), which corresponds to a quartic coupling \(\simeq 3.48\) (rescaled by \(1/4!\)); since \(f_{2,h}\) and \(f_{2,u}\) are non-negative this is a Hermitian fixed point.

The trivial fixed point is the progenitor of the fixed points for \(\epsilon\neq 0\). We perform a linearised analysis first for the fixed point \(f_{2}\). A non-linear analysis is necessary for \(f_{1}\).

### Stability analysis

A linearised analysis around fixed points \(h^{*}\) and \(u^{*}\) consists of examining the evolution of \(\delta h=h-h^{*}\) and \(\delta u=u-u^{*}\). A linearised stability analysis [51] is determined by \[\frac{d}{dt}\begin{pmatrix}\delta h\\ \delta u\end{pmatrix}=M\left(h^{*},u^{*}\right)\begin{pmatrix}\delta h\\ \delta u\end{pmatrix} \tag{35}\] where \(M\) is a \(2\times 2\) matrix11. \(M\) is diagonalised to obtain eigenvalues \(\left(\lambda_{1}(h^{*},u^{*}),\lambda_{2}(h^{*},u^{*})\right)\) and corresponding eigenvectors \(\left(\vec{e}_{1}(h^{*},u^{*}),\vec{e}_{2}(h^{*},u^{*})\right)\).

Footnote 11: \(M\) will also have a dependence on \(\epsilon\) in \(D=4-\epsilon\).

Here, we summarise the eigenvectors and eigenvalues for \(f_{2}\):

* \(\lambda_{1}\left(f_{2,h},f_{2,u}\right)\approx-1.59\), and \(\vec{e}_{1}\left(f_{2,h},f_{2,u}\right)=\begin{pmatrix}0\\ 1\end{pmatrix}\)
* \(\lambda_{2}\left(f_{2,h},f_{2,u}\right)\approx 0.0282\) and \(\vec{e}_{2}\left(f_{2,h},f_{2,u}\right)=\begin{pmatrix}1.85\\ 1\end{pmatrix}\)
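These linearised data for \(f_{2}\) can be reproduced numerically. A short sketch, reusing beta_h and beta_u from the previous snippet and building the stability matrix by finite differences:

```python
import numpy as np

def stability_matrix(h, u, eps, d=1e-7):
    # forward-difference Jacobian of (beta_h, beta_u): M(h*, u*) in (35);
    # beta_h and beta_u are assumed defined as in the earlier sketch
    b = lambda x, y: np.array([beta_h(x, y, eps), beta_u(x, y, eps)])
    return np.column_stack([(b(h + d, u) - b(h, u))/d,
                            (b(h, u + d) - b(h, u))/d])

u_star = 9*(4*np.pi)**2/17                 # f_2 = (0, 83.6) at eps = 0
lam, vec = np.linalg.eig(stability_matrix(0.0, u_star, 0.0))
i = np.argsort(lam)
print(lam[i])                  # approx [-1.588, 0.0282]
v2 = vec[:, i[1]]
print(v2/v2[1])                # approx [1.85, 1]
```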
_Non-linear analysis around trivial fixed point_

The stability of the trivial fixed point requires a non-linear analysis, due to the vanishing of the eigenvalues of the linear stability matrix \(M\). For the study of renormalisation group flows in the neighbourhood of the trivial fixed point, \(\beta_{u}\left(h,u\right)\) and \(\beta_{h}\left(h,u\right)\) can be simplified to \[\beta_{u}\left(h,u\right)\simeq\frac{1}{\pi^{2}}\left[-3h^{2}+\frac{1}{2}hu+\frac{3}{16}u^{2}\right] \tag{36}\] and \[\beta_{h}\left(h,u\right)\simeq\frac{5}{8\pi^{2}}h^{2}. \tag{37}\] The family of flows for \(h\), parameterised with \(h_{0}\) and \(t_{0}\), is given by \[h\left(t\right)=\frac{8\pi^{2}h_{0}}{8\pi^{2}-5h_{0}\left(t-t_{0}\right)}. \tag{38}\] We define \(f(t)=8\pi^{2}-5h_{0}\left(t-t_{0}\right)\) for convenience. The accompanying flow for \(u\) is \[u(t)=-\frac{8\pi^{2}h_{0}}{3f(t)}\left[\frac{-p\;f(t)^{n}+q\;c}{f(t)^{n}+c}\right] \tag{39}\] where \(c\) is an integration constant, \(p=1+\sqrt{145}\approx 13\), \(q=-1+\sqrt{145}\approx 11\), and \(n=\sqrt{\frac{29}{5}}\approx 2.4\). The behaviour is complicated and, when \(h\) or \(u\) becomes large (which occurs due to the presence of a Landau pole), the perturbative analysis is not valid. We can write \(u(t)\) in terms of \(h(t)\) directly as \[u(t)=-\frac{1}{3}h(t)\left[\frac{-p\;h_{0}^{n}+q\;\tilde{c}\;h(t)^{n}}{h_{0}^{n}+\tilde{c}\;h(t)^{n}}\right] \tag{40}\] writing \(c=(8\pi^{2})^{n}\;\tilde{c}\). This allows us to relate \(\tilde{c}\) to \(h_{0}\) and \(u_{0}\) as \[u_{0}=-\frac{1}{3}h_{0}\left[\frac{-p+q\;\tilde{c}}{1+\tilde{c}}\right]. \tag{41}\] If we define \(k=\frac{u_{0}}{h_{0}}\), then we find \[\tilde{c}=\frac{p-3k}{3k+q}. \tag{42}\] This suggests that if \(h_{0}\) and \(u_{0}\) are sufficiently close to the origin, then any straight line through the origin is possible.
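The closed forms (38)-(40) can be validated against direct numerical integration of the simplified flow (36)-(37). A minimal sketch, with initial couplings chosen arbitrarily for illustration (the integration window stays below the Landau pole):

```python
import numpy as np
from scipy.integrate import solve_ivp

p, q, n = 1 + np.sqrt(145), -1 + np.sqrt(145), np.sqrt(29/5)

def rhs(t, y):
    h, u = y   # the simplified near-origin flow, eqs. (36)-(37)
    return [5*h**2/(8*np.pi**2),
            (-3*h**2 + h*u/2 + 3*u**2/16)/np.pi**2]

h0, u0 = 0.5, 0.2                          # illustrative initial couplings
ct = (p - 3*(u0/h0))/(3*(u0/h0) + q)       # tilde-c from eq. (42)

sol = solve_ivp(rhs, [0, 25], [h0, u0], rtol=1e-10, atol=1e-12)
h = sol.y[0]
u_closed = -h/3*(-p*h0**n + q*ct*h**n)/(h0**n + ct*h**n)    # eq. (40)
# small: the closed form agrees with the integration to solver tolerance
print(np.max(np.abs(sol.y[1] - u_closed)))
```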
### Renormalisation group flows

We shall examine the flow around the fixed points \((f_{i,h},f_{i,u})\), for \(i=1,2\). For \(\epsilon=0\) the dimensionless couplings are of \(O(1)\) and are not small in any controlled fashion; hence the flows derived from perturbation theory can only be indicative of possible features of renormalisation. Moreover, geometric methods are best suited to visualise the flows12.

Figure 1: Global flow for \(\epsilon=0\).

In the figures, the vertical axis is the \(u\)-axis and the horizontal axis is the \(h\)-axis. The \(h\)-axis (where present) is shown in red, and any fixed points are shown in blue (colour online). Some features to be noted are:

* There are no flows from positive to negative \(h\) and vice versa13.

Footnote 13: This has been verified by performing the analysis for \(h<0\).

* There are flows from positive \(u\) to negative \(u\), i.e. from a Hermitian to a \(\mathcal{PT}\)-symmetric region.
* The flows around the trivial fixed point \(f_{1}\) do not show a simple source, sink or saddle point behaviour, but rather a non-linear flow. This flow is complicated but an approximate solution is given in (40). In Figure 1, there are approximate lines of both positive and negative slope crossing the \(h\)-axis, which are an indication of this behaviour.

Given that the analysis is based on perturbation theory, flows in regions where the couplings are large compared to \(1\) can only be misleading. However, near the trivial fixed point, we can see evidence for flows from positive to negative \(u\), i.e. from Hermitian to \(\mathcal{PT}\)-symmetric behaviour. This type of behaviour is investigated in much more detail in a situation where there are four fixed points which occur at small values of \(u\) and \(h\). In our context, this arises since there is a separate parameter which controls the size of the couplings and makes perturbation theory possible. This parameter is \(\epsilon\).

#### 3.3.1 Fixed points for \(\epsilon\neq 0\)

We consider \(\epsilon>0\) and examine the flows of (32). We have fixed points which we denote by \(F_{i}\), \(i=0,1,\ldots,4\). \(F_{0}=f_{1}\) is the trivial fixed point. The remaining \(F_{i}\) are given in terms of series which are not typically convergent but asymptotic as \(\epsilon\to 0\). The expressions for the fixed points are given in B. These expressions allow tracking of fixed points as a function of \(\epsilon\) and also, in some circumstances, an extrapolation to \(\epsilon=1\) using the technique of Pade approximants.

Figure 2: The local flows around the fixed points for \(\epsilon=0\).

In the limit \(\epsilon\to 0\), the fixed point \(F_{4}\to f_{2}\), and the fixed points \(F_{i}\to f_{1}\) for \(i=1,2,3\). Hence the trivial fixed point becomes 4 fixed points for \(\epsilon\neq 0\): the trivial fixed point and 3 further fixed points (\(F_{i},i=1,2,3\)) which are \(O(\epsilon)\). For sufficiently small \(\epsilon\), \(F_{2}\) is a non-Hermitian (\(\mathcal{PT}\)-symmetric) fixed point whereas \(F_{1}\) and \(F_{3}\) are Hermitian. The renormalisation group flows in the neighbourhoods of \(F_{i},i=1,2,3\) and \(f_{1}\) are described through perturbative analysis and are our main focus. Although near \(F_{4}\) our analysis does indicate possible new behaviour (in terms of flows between Hermitian and \(\mathcal{PT}\)-symmetric regions in the \(h\) coupling), these latter findings can only remain conjectural since perturbation theory is unreliable for large couplings. As such, we ignore this point in most of our analysis below. However, it is worth noting that the emergence of \(\mathcal{PT}\) symmetry in the Lee model is in terms of \(h\) [26] and occurs at strong coupling.

### The stability of fixed points for \(\epsilon\neq 0\)

We follow the linear stability analysis of (35) for the fixed points \(F_{0}\equiv f_{1}\) and \(F_{j},(j=1,2,3)\). \(F_{\alpha}\) (\(\alpha=0,1,2,3\)) has two components: \(F_{\alpha,u}\), the fixed point value for \(u\), and \(F_{\alpha,h}\), the fixed point value for \(h\). The eigenvalues of the stability matrix around \(F_{\alpha}\) will be denoted by \(\Lambda_{\alpha,j},\;j=1,2\). The corresponding 2-component eigenvectors will be denoted by \(\vec{E}_{\alpha j},\;j=1,2\).

#### 3.4.1 The renormalisation group flow between fixed points for \(\epsilon\neq 0\)

The renormalisation group flows for \(0<\epsilon\lesssim 0.027\) are qualitatively the same and so we shall consider the case \(\epsilon=0.01\) as a representative flow. The flows are organised by the different fixed points \(F_{\alpha}\). We determine the flows numerically and non-perturbatively in \(\epsilon\).

Figure 3: Global flow for \(\epsilon=0.01\).

There is a group of four fixed points close to the origin, and one high-\(u\) fixed point that we ignore from concerns over its validity in perturbation theory. As expected, many of the features from the \(\epsilon=0\) case persist, particularly regarding flows across the coordinate axes. However, the non-zero \(\epsilon\) ensures that the behaviour of the flow near the origin can now be characterised using linear stability analysis [51]; we find an ultraviolet stable stellar node there (as shown in Figure 5a). Furthermore, three additional fixed points emanate from the origin as \(\epsilon\) increases.
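These four fixed points can be located numerically from the full beta functions. A sketch reusing the beta_h/beta_u transcription above, with leading-order seeds read off from the series in B (the printed values are what such a computation should give to three figures):

```python
import numpy as np
from scipy.optimize import fsolve

eps = 0.01
betas = lambda x: [beta_h(x[0], x[1], eps), beta_u(x[0], x[1], eps)]

# leading-order seeds, cf. the series in B:
# F1 ~ (0, 52.6 eps), F2 ~ (15.8 eps, -58.1 eps), F3 ~ (15.8 eps, 68.6 eps)
for seed in [(0.0, 0.53), (0.16, -0.58), (0.16, 0.69)]:
    print(fsolve(betas, seed))
# approx: [0, 0.530], [0.158, -0.580], [0.159, 0.689]; F0 is the origin
```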
If we focus on the non-Hermitian (and \(\mathcal{PT}\)-symmetric) saddle fixed point \(F_{2}\) (Figure 5c), we note that (by examining Figure 4):

* There is a flow that originates at the Hermitian infrared fixed point \(F_{3}\) (Figure 5d) in the IR (large negative \(t\)) limit, which can flow to the non-Hermitian saddle \(F_{2}\) in the UV (large positive \(t\)) limit.
* There is a flow that originates at the stellar node at the origin \(F_{0}\) (Figure 5a) in the UV (large positive \(t\)) limit, which can flow to the non-Hermitian saddle \(F_{2}\) in the IR (large negative \(t\)) limit.

As \(\epsilon\) continues to increase, we reach a critical value \(\epsilon_{c}\sim 0.027\) where the behaviour of the large-\(u\) fixed point changes (in terms of the eigenvalues of the linear stability analysis). However, this is not significant for our interests here, since we cannot be sure of the validity of the analysis for these fixed points in the perturbation theory of \(h\) and \(u\). We note that the character of the non-Hermitian saddle fixed point \(F_{2}\) seems to be preserved as we extend our analysis to \(D\to 3\) (\(\epsilon\to 1\)) with Pade approximants.

## 4 Pade approximants and the \(D=3\) fixed point

The \(\epsilon\) expansion is used in the study of critical phenomena [32, 53], but its convergence is not understood in any systematic way. Although series using the \(\epsilon\) expansion are readily generated, the series are generally divergent. Hence there is no radius of convergence \(\epsilon_{R}\) such that the series is convergent for \(|\epsilon|<\epsilon_{R}\). If the perturbation series is singular, it diverges for all non-zero \(\epsilon\). Pade approximants can sometimes offer a way of summing such a series.

Figure 4: Flows around the group of fixed points near the origin for \(\epsilon=0.01\).

The partial sums of the \(\epsilon\) series cannot be summed directly, since for fixed \(\epsilon\) the sequence of partial sums diverges. If we have a formal power series \(P(\epsilon)=\sum a_{n}\epsilon^{n}\) in \(\epsilon\) then the Pade approximant \(P_{M}^{N}\left(\epsilon\right)\) is defined by \[P_{M}^{N}\left(\epsilon\right)=\frac{\sum_{n=0}^{N}A_{n}\epsilon^{n}}{\sum_{n=0}^{M}B_{n}\epsilon^{n}}. \tag{43}\] Without loss of generality we take \(B_{0}=1\), and the first \(M+N+1\) coefficients of \(\sum a_{n}\epsilon^{n}\) are used to determine the coefficients \(A_{0},A_{1},\ldots,A_{N},B_{1},B_{2},\ldots,B_{M}\). \(P_{N}^{N}\left(\epsilon\right)\) is a diagonal Pade sequence. All Pade approximants have pole singularities from the denominator and zeros from the numerator. If there are poles in the neighbourhood of \(\epsilon=1\) then an extrapolation to \(\epsilon=1\) using Pade sequences is not viable. By checking for consistent predictions of fixed points and their stability as \(N\) and \(M\) are varied, we decide on the validity of our extrapolation [47] to \(\epsilon=1\). This is a necessary (but not sufficient) criterion for a valid extrapolation to \(D=3\).

Figure 5: The four trustworthy (in perturbation theory) fixed points for \(\epsilon=0.01\).

We consider the cases where \(P(\epsilon)\) is truncated to \(\epsilon^{2n}\), for \(n=4,\ 5,\ 6,\ 7\); then we examine the corresponding diagonal Pade approximants \(P_{N}^{N}\left(\epsilon\right)\) for \(N=4,\ 5,\ 6,\ 7\), as well as off-diagonal Pade sequences \(P_{N-1}^{N+1}\left(\epsilon\right)\) and \(P_{N+1}^{N-1}\left(\epsilon\right)\).
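The resummation step can be reproduced with standard tools. Below is a sketch using scipy's Pade routine on the \(F_{2}\) series from B (which are quoted to \(\epsilon^{12}\), so diagonal orders up to \(P_{6}^{6}\) are available here); as discussed above, it is the stability of the values across orders, after checking for poles near \(\epsilon=1\), that matters, not any single entry:

```python
import numpy as np
from scipy.interpolate import pade

# F_2 series coefficients from B (constant term 0, then eps, eps^2, ...)
F2h = [0, 15.791, 1.819, 1.646, -0.757, 0.405, -1.241, 0.643,
       -1.430, 1.411, -1.983, 2.625, -3.393]
F2u = [0, -58.121, 16.812, -8.154, 16.338, -9.360, 17.343, -16.587,
       23.178, -28.866, 37.721, -50.784, 67.832]

for m in (4, 5, 6):                     # diagonal P^m_m, eq. (43)
    ph, qh = pade(F2h[:2*m + 1], m)
    pu, qu = pade(F2u[:2*m + 1], m)
    print(m, ph(1.0)/qh(1.0), pu(1.0)/qu(1.0))
# values that are stable across m (and across off-diagonal sequences)
# should approach the D = 3 fixed point quoted below
```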
The convergence of the various Pade approximants for the fixed points \(F_{\alpha}\) is only consistent for \(F_{2}\), a non-Hermitian fixed point. The resultant fixed point at \(D=3\) is \[\left(h^{*},u^{*}\right)=\left(17.6,-32.3\right) \tag{44}\] whose linearised stability is characterised by eigenvalues \(\Lambda_{1}=-1.16\) and \(\Lambda_{2}=1.08\). Hence the fixed point has saddle-like stability. The eigenvectors \(\vec{E}_{j}\) associated with \(\Lambda_{j}\), for \(j=1,2\), are \[\vec{E}_{1}=\left(-0.0121,1\right) \tag{45}\] and \[\vec{E}_{2}=\left(-4.21,1\right). \tag{46}\] As \(\epsilon\) has increased from small values this fixed point has retained its non-Hermitian character and its Pade approximants have been stable for diagonal and off-diagonal sequences. Hence these computations provide some confidence that this is a genuine non-perturbative fixed point for \(D=3\).

## 5 Conclusions

The role of renormalisation in higher-dimensional \(\mathcal{PT}\)-symmetric field theories is only beginning to be explored. This work explores, within relativistic field theories, how \(\mathcal{PT}\)-symmetric features may arise from the renormalisation procedure itself. This is a feature which was first found only at strong coupling in the Lee model, a toy model which is oversimplified but soluble. More recently, the possible emergence of unstable potentials in the Standard Model has been considered within the framework of \(\mathcal{PT}\) symmetry. However, these treatments were not able to address the issues within the context of \(D=4\) because of a lack of appreciation of the role of non-trivial fixed points in \(\mathcal{PT}\)-symmetric theories. The role of non-trivial fixed points for higher dimensional \(\mathcal{PT}\)-symmetric theories has recently been much better understood [20]. Consequently, it is becoming clear that, provided couplings are controllably weak, perturbative renormalisation group calculations are adequate to examine the role of renormalisation in the emergence of non-Hermiticity. By considering the fixed points for \(D=4-\epsilon\), we have control over the strength of \(\epsilon\)-dependent fixed points and can examine the flow between non-Hermitian and Hermitian fixed points within perturbation theory.

Our model is richer than the Lee model, in that it is a fully relativistic field theory with crossing symmetry and two couplings: Yukawa and quartic self-couplings. The requirement of perturbative renormalisation demands the inclusion of both couplings. In our \(\epsilon\)-expansion discussion, Hermitian to non-Hermitian flows occur only in terms of the quartic self-coupling. The robustness of these findings is being investigated in other renormalisable field theories involving both more flavours of fermions and the inclusion of gauge fields.

## Acknowledgements

L.C. is supported by King's College London through an NMES funded studentship. The work of S.S. is supported in part by the UK Science and Technology Facilities Research Council (STFC) under the research grant ST/T000759/1 and EPSRC grant EP/V002821/1. We would like to thank Wen-Yuan Ai, Carl Bender, Nick Mavromatos, Alex Soto and Andy Stergiou for discussions.

## Appendix A Non-Hermiticity and the scalar quartic coupling

At various points in our discussion, we have used the sign of the quartic scalar coupling at a fixed point to label the coupling Hermitian or non-Hermitian.
From a conventional stance, this may seem puzzling since the terms \(\left|u\right|\phi^{4}\) and \(-\left|u\right|\phi^{4}\) look equally Hermitian. This distinction is based on Hilbert space and the associated inner product in quantum mechanics. More recently, a discussion of non-Hermiticity, based on a semiclassical analysis of path integrals, which generalises to \(D>1\), has also been given [20], but the discussion given here is more elementary.

The counterintuitive behaviour of quantum systems with \(\mathcal{PT}\) symmetry follows from a complex deformation of classical mechanics in the calculation of wave functions and partition functions. This deformation can be done explicitly in quantum mechanics, which represents systems with a finite number of degrees of freedom. The interaction term in the Hamiltonian evolves through a path in the space of Hamiltonians from a well-defined Hermitian theory to another one which is well-defined but obeys different boundary conditions. This deformation can be explicitly studied in quantum mechanics because we can deal with functions just of time. In higher spacetime dimensions the analysis of partition functions has not as yet been carried out (in terms of critical points in function space) and so the idea of Stokes sectors is much more complicated and relies on paths of steepest descents using Picard-Lefschetz theory [20].

A non-Hermitian \(\mathcal{PT}\)-symmetric theory requires a new inner product on the Hilbert space in order to give a unitary, physically acceptable theory. Indeed a Hermitian Hamiltonian is a self-adjoint operator in the Hilbert space with the Dirac inner product, and the concept of adjoint is defined in terms of this inner product on a Hilbert space. The concept of a \(\mathcal{PT}\) adjoint requires a new inner product and the role of Hermiticity of an operator is replaced by the concept of \(\mathcal{PT}\) self-adjointness. For \(D=1\), the case of quantum mechanics, the Hamiltonian \(H\) can be replaced by a Schrodinger operator acting on the space of wave functions. The spectrum of the Hamiltonian is determined by an eigenvalue problem which has the form \[-\Psi^{\prime\prime}\left(x\right)+Q(x)\Psi\left(x\right)=0\] (A.1) where \(Q(x)=V(x)-E\), \(V(x)\) being the potential and \(E\) the energy eigenvalue. An important \(V(x)\) that arises in discussions of \(\mathcal{PT}\) symmetry is \[V(x)=x^{2}(ix)^{\delta},\quad\delta\geq 0.\] (A.2) When \(\delta=0\), \(H\) reduces to the harmonic oscillator Hamiltonian. As \(\delta\) increases, \(H\) becomes complex until \(\delta=2\), when \(H\) becomes real with \(V(x)=-x^{4}\). This reality should not be identified with Hermiticity, since the solution of (A.1) cannot be found without knowledge of boundary conditions. For a non-perturbative solution of (A.1) we require that \(\left|Q(x)\right|\rightarrow\infty\) as \(\left|x\right|\rightarrow\infty\) so that we can apply the WKB approximation away from turning points [47]. Another important feature of the deformation is that _\(\mathcal{PT}\) symmetry is preserved_ for all values of \(\delta\). The leading WKB approximation gives, for the solution of (A.1), \[\Psi\left(x\right)\sim Q(x)^{-1/4}\exp\left[\pm\int\limits^{x}ds\sqrt{Q\left(s\right)}\right].\] (A.3) For \(\delta>0\), there is a logarithmic branch point singularity at \(x=0\) and so we choose a branch cut along the positive imaginary \(x\)-axis, a choice that respects \(\mathcal{PT}\) symmetry.
For _large_ \(\left|x\right|\), the exponential factor becomes \[\exp\left[\pm\frac{2}{4+\delta}i^{\delta/2}x^{2+\delta/2}\right].\] (A.4) Along a line parametrised by \(x=r\exp\left(i\theta\right)\) with \(r>0\), the exponential factor can be written as \(\exp\left(\pm a\exp\left(i\varphi\right)\right)\), with \(a>0\) and \(\varphi\) a linear function of \(\theta\). On taking the minus sign in the exponent and on requiring normalisable wave functions, we readily deduce that \[-\frac{\pi}{2}+2n\pi<\varphi<\frac{\pi}{2}+2n\pi,\qquad n\in\mathbb{Z},\] (A.5) which determines the Stokes sectors. For \(\delta=2\), on choosing \(n=0\) and \(n=-1\), we find two Stokes sectors given by \[-\frac{\pi}{3}<\theta<0,\quad\text{and}\;\;\pi<\theta<\frac{4\pi}{3}.\] (A.6) The contour for the integral in (A.3) is taken for large \(|x|\), in order to lie in these Stokes wedges. The non-Hermiticity of the negative quartic potential is reflected in the necessity of complex paths through Stokes wedges, in order to determine the solution of the Schrodinger equation. It can be shown that the spectrum is real and positive using this procedure.

It still remains to be shown that the theory is unitary and so we need a Hilbert space with a positive norm, which is also preserved under time evolution [54, 55]. It has been established that for the case of unbroken \(\mathcal{PT}\) symmetry, there is a hidden symmetry in terms of an operator \(\mathcal{C}\) which satisfies \[\left[\mathcal{C},H\right]=0\,,\] (A.7) \[\left[\mathcal{C},\mathcal{PT}\right]=0,\] (A.8) \[\mathcal{C}^{2}=I,\] (A.9) where \(I\) is the identity operator, and has a representation \(\mathcal{C}(x,y)\) in coordinate space. The new inner product (which leads to a positive norm) is \[\left\langle\psi\right|\left.\chi\right\rangle^{\mathcal{CPT}}\equiv\int dx\,\psi^{\mathcal{CPT}}\left(x\right)\chi\left(x\right)\] (A.10) for an integration contour lying in the Stokes wedges. \(\psi^{\mathcal{CPT}}\left(x\right)\) can be expressed in terms of \(\mathcal{C}(x,y)\) as \[\psi^{\mathcal{CPT}}\left(x\right)=\int dy\,\mathcal{C}\left(x,y\right)\psi^{\ast}\left(-y\right).\] (A.11) The WKB approximation can be used to calculate the \(\mathcal{C}\) operator [56], which can be expressed as \(\mathcal{C}=\exp(\mathcal{Q})\;\mathcal{P}\). In coordinate space, this can be written as \(\mathcal{C}(x,y)=\exp(\mathcal{Q}(x,-id/dx))\;\delta(x+y)\). It is convenient to work in momentum space, where \[\widetilde{C}\left(p,q\right)=\int dx\;e^{-ipx}\int dy\;e^{iqy}\;\mathcal{C}\left(x,y\right),\] (A.12) and this leads to a non-trivial \(\mathcal{Q}\) operator [56] with Fourier transform \[\widetilde{\mathcal{Q}}\left(p,q\right)=-\frac{\sqrt{\pi}\varGamma\left(\frac{3+\delta}{2+\delta}\right)}{\varGamma\left(\frac{8+3\delta}{4+2\delta}\right)}\sin\left(\frac{\pi\delta}{4+2\delta}\right)\left(\theta\left(p\right)p^{\frac{4+\delta}{2+\delta}}-\theta\left(-p\right)\left(-p\right)^{\frac{4+\delta}{2+\delta}}\right).\] (A.13) This calculation shows explicitly the existence of a \(\mathcal{PT}\) metric that differs from the Hermitian metric. The metric is determined by the particular Hamiltonian; in contrast, for Hermitian theories, the Dirac metric is independent of the Hamiltonian being considered. This semi-classical calculation for the \(\mathcal{C}\) operator does not generalise to field theory.
However, from the general conditions given in (A.7), (A.8) and (A.9), Jones and Rivers have argued [49] that the _calculation of Green's functions, for general \(D\),_ within a path integral formulation is not dependent on the detailed knowledge of the \(\mathcal{C}\) operator (unlike other quantities such as scattering matrix elements). The proof of the last statement has been generalised in [19] and discussed in [29]. ## Appendix B Data for fixed points and their stability for \(\epsilon\neq 0\) In this appendix, we give the series results in \(\epsilon\) for the fixed points and their linear stability eigenvalues and eigenvectors. Here, we provide these results to three decimal places (unless exact, or where this would give no significant figures). * \(F_{0h}=0\), \(F_{0u}=0\). This is the trivial Hermitian fixed point. The stability matrix has degenerate eigenvalues: \(\Lambda_{0,1}=\Lambda_{0,2}=-\epsilon\). For \(\epsilon\neq 0\) (and sufficiently small), this is a UV-stable stellar node (so that trajectories which begin near \(F_{0}\) approach \(F_{0}\) on straight lines). * \(F_{1h}=0\), \(F_{1u}=52.638\epsilon+33.142\epsilon^{2}+41.735\epsilon^{3}+65.694\epsilon^{ 4}+115.816\epsilon^{5}+218.763\epsilon^{6}+432.896\epsilon^{7}+885.833\epsilon^{ 8}+1859.156\epsilon^{9}+3979.970\epsilon^{10}+8656.771\epsilon^{11}+19076.958 \epsilon^{12}\). The stability matrix has eigenvalues \(\Lambda_{1,1}=\epsilon-0.630\epsilon^{2}-0.793\epsilon^{3}-1.248\epsilon^{4}-2.200\epsilon^{5}-4.156\epsilon^{6}-8.224\epsilon^{7}-16.829\epsilon^{8}-35.320 \epsilon^{9}-75.610\epsilon^{10}-164.459\epsilon^{11}-362.419\epsilon^{12}\) and \(\Lambda_{1,2}=-\epsilon+0.019\epsilon^{2}+0.019\epsilon^{3}+0.028\epsilon^{4} +0.048\epsilon^{5}+0.090\epsilon^{6}+0.176\epsilon^{7}+0.359\epsilon^{8}+0.75 0\epsilon^{9}+1.601\epsilon^{10}+3.472\epsilon^{11}+7.636\epsilon^{12}\), with corresponding eigenvectors \(\vec{E}_{1,1}=\begin{pmatrix}0\\ 1\end{pmatrix}\) and \(\vec{E}_{1,2}=\begin{pmatrix}A_{1,2}\\ 1\end{pmatrix}\), with \(A_{1,2}=-0.750+0.340\epsilon+0.383\epsilon^{2}+0.566\epsilon^{3}+0.960\epsilon ^{4}+1.765\epsilon^{5}+3.425\epsilon^{6}+6.905\epsilon^{7}+14.326\epsilon^{8} +30.386\epsilon^{9}+65.590\epsilon^{10}+143.623\epsilon^{11}+318.258\epsilon^{ 12}\). For \(\epsilon\neq 0\) (and sufficiently small), this is a Hermitian saddle fixed point. * \(F_{2h}=15.791\epsilon+1.819\epsilon^{2}+1.646\epsilon^{3}-0.757\epsilon^{4}+0.405\epsilon^{5}-1.241\epsilon^{6}+0.643\epsilon^{7}-1.430\epsilon^{8}+1.411 \epsilon^{9}-1.983\epsilon^{10}+2.625\epsilon^{11}-3.393\epsilon^{12}\), \(F_{2u}=-58.121\epsilon+16.812\epsilon^{2}-8.154\epsilon^{3}+16.338\epsilon^{4} -9.360\epsilon^{5}+17.343\epsilon^{6}-16.587\epsilon^{7}+23.178\epsilon^{8}-2 8.866\epsilon^{9}+37.721\epsilon^{10}-50.784\epsilon^{11}+67.832\epsilon^{12}\). 
The stability matrix has eigenvalues \(\Lambda_{2,1}=-2.408\epsilon-0.601\epsilon^{2}+1.301\epsilon^{3}-0.089\epsilon^{4}+1.006\epsilon^{5}-0.593\epsilon^{6}+0.986\epsilon^{7}-1.204\epsilon^{8}+1.462\epsilon^{9}-2.076\epsilon^{10}+2.641\epsilon^{11}-3.691\epsilon^{12}\) and \(\Lambda_{2,2}=\epsilon-0.115\epsilon^{2}-0.159\epsilon^{3}+0.186\epsilon^{4}-0.072\epsilon^{5}+0.255\epsilon^{6}-0.173\epsilon^{7}+0.336\epsilon^{8}-0.383\epsilon^{9}+0.546\epsilon^{10}-0.752\epsilon^{11}+1.030\epsilon^{12}\), with corresponding eigenvectors \(\vec{E}_{2,1}=\begin{pmatrix}A_{2,1}\\ 1\end{pmatrix}\) and \(\vec{E}_{2,2}=\begin{pmatrix}A_{2,2}\\ 1\end{pmatrix}\), with \(A_{2,1}=0.015\epsilon-0.013\epsilon^{2}+0.001\epsilon^{3}-0.013\epsilon^{4}+0.006\epsilon^{5}-0.015\epsilon^{6}+0.014\epsilon^{7}-0.022\epsilon^{8}+0.029\epsilon^{9}-0.039\epsilon^{10}+0.055\epsilon^{11}-0.077\epsilon^{12}\) and \(A_{2,2}=-0.272-0.188\epsilon-0.075\epsilon^{2}-0.159\epsilon^{3}-0.087\epsilon^{4}-0.178\epsilon^{5}-0.080\epsilon^{6}-0.209\epsilon^{7}+0.051\epsilon^{8}-0.262\epsilon^{9}+0.014\epsilon^{10}-0.362\epsilon^{11}+0.293\epsilon^{12}\). For \(\epsilon\neq 0\) (and sufficiently small), this is a non-Hermitian saddle fixed point.
* \(F_{3h}=15.791\epsilon+6.749\epsilon^{2}-3.314\epsilon^{3}-12.829\epsilon^{4}-11.
559\epsilon^{5}+9.263\epsilon^{6}+37.770\epsilon^{7}+28.770\epsilon^{8}-64.624 \epsilon^{9}-196.697\epsilon^{10}-156.077\epsilon^{11}+274.654\epsilon^{12}\), \(F_{3u}=68.648\epsilon+29.392\epsilon^{2}+2.112\epsilon^{3}-11.144\epsilon^{4}+26.493\epsilon^{5}+143.046\epsilon^{6}+300.979\epsilon^{7}+383.667\epsilon^{8}+3 47.310\epsilon^{9}+566.087\epsilon^{10}+2056.631\epsilon^{11}+5955.454\epsilon ^{12}\). The stability matrix has eigenvalues \(\Lambda_{3,1}=\epsilon-0.427\epsilon^{2}+0.785\epsilon^{3}+1.460\epsilon^{4}+0.700\epsilon^{5}-1.668\epsilon^{6}-2.969\epsilon^{7}+2.758\epsilon^{8}+20.656 \epsilon^{9}+48.759\epsilon^{10}+86.232\epsilon^{11}+188.086\epsilon^{12}\) and \(\Lambda_{3,2}=2.408\epsilon-2.406\epsilon^{2}-3.775\epsilon^{3}-2.340\epsilon ^{4}+1.815\epsilon^{5}+4.386\epsilon^{6}-3.621\epsilon^{7}-28.393\epsilon^{8}-5 9.880\epsilon^{9}-72.951\epsilon^{10}-78.896\epsilon^{11}-238.428\epsilon^{12}\), with corresponding eigenvectors \(\vec{E}_{3,1}=\begin{pmatrix}A_{3,1}\\ 1\end{pmatrix}\) and \(\vec{E}_{3,2}=\begin{pmatrix}A_{3,2}\\ 1\end{pmatrix}\), with \(A_{3,1}=0.230-0.000\epsilon-0.244\epsilon^{2}-0.768\epsilon^{3}-1.951\epsilon ^{4}-4.607\epsilon^{5}-10.748\epsilon^{6}-25.330\epsilon^{7}-60.213 \epsilon^{8}-143.193\epsilon^{9}-339.680\epsilon^{10}-806.636\epsilon^{11}-19 98.394\epsilon^{12}\) and \(A_{3,2}=-0.018\epsilon+0.019\epsilon^{2}+0.060\epsilon^{3}+0.141\epsilon^{4}+0.332\epsilon^{5}+0.867\epsilon^{6}+2.439\epsilon^{7}+6.966\epsilon^{8}+19.714 \epsilon^{9}+55.425\epsilon^{10}+156.092\epsilon^{11}+441.899\epsilon^{12}\). For \(\epsilon\neq 0\) (and sufficiently small), this is a Hermitian IR-stable fixed point. * \(F_{4h}=0\), \(F_{4u}=83.601-52.638\epsilon-33.142\epsilon^{2}-41.735\epsilon^{3}-65.694 \epsilon^{4}-115.816\epsilon^{5}-218.763\epsilon^{6}-432.896\epsilon^{7}-885.83 3\epsilon^{8}-1859.156\epsilon^{9}-3979.970\epsilon^{10}-8656.771\epsilon^{11}- 19076.958\epsilon^{12}\). The stability matrix has eigenvalues \(\Lambda_{4,1}=-1.588+3.000\epsilon+0.630\epsilon^{2}+0.793\epsilon^{3}+1.248 \epsilon^{4}+2.200\epsilon^{5}+4.156\epsilon^{6}+8.224\epsilon^{7}+16.829 \epsilon^{8}+35.320\epsilon^{9}+75.610\epsilon^{10}+164.459\epsilon^{11}+362.4 19\epsilon^{12}\) and \(\Lambda_{4,2}=0.028-1.024\epsilon-0.019\epsilon^{2}-0.019\epsilon^{3}-0.028 \epsilon^{4}-0.048\epsilon^{5}-0.090\epsilon^{6}-0.176\epsilon^{7}-0.359 \epsilon^{8}-0.750\epsilon^{9}-1.601\epsilon^{10}-3.472\epsilon^{11}-7.636 \epsilon^{12}\), with corresponding eigenvectors \(\vec{E}_{4,1}=\begin{pmatrix}0\\ 1\end{pmatrix}\) and \(\vec{E}_{4,2}=\begin{pmatrix}A_{4,2}\\ 1\end{pmatrix}\), with \(A_{4,2}=1.854-7.949\epsilon+14.292\epsilon^{2}-28.867\epsilon^{3}+53.621 \epsilon^{4}-107.027\epsilon^{5}+199.582\epsilon^{6}-398.417\epsilon^{7}+740.7 33\epsilon^{8}-1486.571\epsilon^{9}+2742.767\epsilon^{10}-5559.741\epsilon^{1 1}+10127.112\epsilon^{12}\). For \(\epsilon\neq 0\) (and sufficiently small), this is a Hermitian saddle fixed point.
2309.04839
Safe Control of Euler-Lagrange Systems with Limited Model Information
This paper presents a new safe control framework for Euler-Lagrange (EL) systems with limited model information, external disturbances, and measurement uncertainties. The EL system is decomposed into two subsystems called the proxy subsystem and the virtual tracking subsystem. An adaptive safe controller based on barrier Lyapunov functions is designed for the virtual tracking subsystem to ensure the boundedness of the safe velocity tracking error, and a safe controller based on control barrier functions is designed for the proxy subsystem to ensure controlled invariance of the safe set defined either in the joint space or task space. Theorems that guarantee the safety of the proposed controllers are provided. In contrast to existing safe control strategies for EL systems, the proposed method requires much less model information and can ensure safety rather than input-to-state safety. Simulation results are provided to illustrate the effectiveness of the proposed method.
Yujie Wang, Xiangru Xu
2023-09-09T16:57:31Z
http://arxiv.org/abs/2309.04839v1
# Safe Control of Euler-Lagrange Systems with Limited Model Information

###### Abstract

This paper presents a new safe control framework for Euler-Lagrange (EL) systems with limited model information, external disturbances, and measurement uncertainties. The EL system is decomposed into two subsystems called the proxy subsystem and the virtual tracking subsystem. An adaptive safe controller based on barrier Lyapunov functions is designed for the virtual tracking subsystem to ensure the boundedness of the safe velocity tracking error, and a safe controller based on control barrier functions is designed for the proxy subsystem to ensure controlled invariance of the safe set defined either in the joint space or task space. Theorems that guarantee the safety of the proposed controllers are provided. In contrast to existing safe control strategies for EL systems, the proposed method requires much less model information and can ensure safety rather than input-to-state safety. Simulation results are provided to illustrate the effectiveness of the proposed method.

## I Introduction

Safe-by-design control has received increasing interest because of its broad applications. Control Barrier Functions (CBFs) and Barrier Lyapunov Functions (BLFs) are two widely investigated barrier-type functions that can provably ensure _safety_ expressed as the controlled invariance of a given set [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. By integrating the CBF constraint into a convex quadratic program (QP), a CBF-QP-based controller is capable of serving as a safety filter that minimally alters possibly unsafe control inputs. In contrast, BLFs are Lyapunov-like functions defined in given open sets, such that they can ensure safety and stability simultaneously.

Euler-Lagrange (EL) systems, which represent a large number of mechanical systems including robot manipulators and vehicles, have been extensively investigated in the literature [12, 13, 14, 15]. Recently, the safe control of EL systems has attracted significant attention because of the broad application of robotic systems in safety-critical scenarios, such as human-robot interaction. Many CBF-based control strategies have been developed for EL systems [16, 17, 18]. Although these methods are demonstrated by both theoretical analysis and simulation/experimental results, they rely on model information of the EL system (i.e., the exact forms of the inertia matrix, the Coriolis/centripetal matrix, and the gravity term), which is hard to obtain precisely in practice. Little research has been devoted to the safe control of EL systems with limited model information [19, 20]. In [19], a novel CBF that integrates kinetic energy with the classical form is proposed, resulting in reduced model dependence and less conservatism; however, this method does not take account of external disturbances, which are ubiquitous in practical applications. In [20], a safe velocity is designed based on reduced-order kinematics and tracked by a velocity tracking controller; nevertheless, only _input-to-state safety_ [21, Definition 3] rather than safety is ensured when the model information is unavailable, and the safe velocity is required to be differentiable. On the other hand, various BLF-based controllers have been developed for EL systems [8, 9], whereas these approaches require the desired trajectory to stay inside the safe set and impose relatively strict structural requirements on safety constraints.
In this work, we propose a new control strategy for EL systems with limited model information, external disturbances, and measurement uncertainties. The original EL system is decomposed into two subsystems: the _proxy subsystem_, which is a double integrator with a mismatched bounded disturbance, and the _virtual tracking subsystem_, which corresponds to the dynamical model of the EL system. A CBF-based controller is designed for the proxy subsystem to generate the safe velocity, while an adaptive BLF-based controller is developed for the virtual tracking subsystem to track the safe velocity and ensure the boundedness of the tracking error. See Fig. 1 for illustration, where the symbols will be introduced in Section III.

Fig. 1: Illustration of the proposed proxy-CBF-BLF control design scheme for safe control design of EL systems in the joint space. The original EL system is decomposed into the proxy subsystem and the virtual tracking subsystem. The safe velocity for the virtual tracking subsystem is generated by the proxy subsystem. A CBF-QP-based controller is designed for the proxy subsystem to ensure safety, while an adaptive BLF-based control law is proposed for the virtual tracking subsystem to constrain the safe velocity tracking error.

Compared with existing results, the proposed method has four main advantages as shown in the following:

1. The proposed method does not rely on any model information except for the upper bound of the inertia matrix's norm, which implies that even the bounds of the Coriolis-centrifugal and gravity matrices are not required in control design because such bounds are estimated by adaptive laws online.
2. The closed-loop system is guaranteed to be safe, instead of input-to-state safe, in the presence of external disturbances and measurement uncertainties.
3. The safe velocity's differentiability, which is important for velocity tracking control design, is guaranteed, and calculating its derivative is straightforward.
4. The proposed method takes measurement uncertainties into account, allowing its use in robots where precise angular velocity measurements are not available.

The remainder of this paper is organized as follows. In Section II, preliminaries and the problem statement are introduced; in Section III, the joint space safe control strategy is presented; in Section IV, the task space safe control scheme is shown; in Section V, numerical simulation results are presented to validate the proposed method; and finally, the conclusion is drawn in Section VI.

## II Preliminaries and Problem Statement

Throughout the paper, we denote by \(\mathbb{R}_{>0}\) and \(\mathbb{R}_{\geq 0}\) the sets of positive and nonnegative real numbers, respectively. We denote \(\|\cdot\|\) the 2-norm for vectors and the induced 2-norm for matrices. We denote by \(\sigma_{\min}(A)\) the smallest eigenvalue of a square matrix \(A\). We consider the gradient \(\frac{\partial h}{\partial x}\in\mathbb{R}^{1\times n}\) as a row vector, where \(x\in\mathbb{R}^{n}\) and \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is a function with respect to \(x\).

### _Control Barrier Functions & Barrier Lyapunov Functions_

CBFs and BLFs are two types of barrier functions that are widely used to ensure the controlled invariance of a given set [1, 5]. Our approach aims to combine the advantages of both CBFs and BLFs, which are briefly reviewed below.
#### II-A1 Control Barrier Functions

Consider a control-affine system given as \(\dot{x}=f(x)+g(x)u\) where \(x\in\mathbb{R}^{n}\) is the state, \(u\in U\subset\mathbb{R}^{m}\) is the control input, and \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times m}\) are locally Lipschitz continuous functions. Define a _safe set_ \(\mathcal{C}=\{x\in\mathbb{R}^{n}:h(x)\geq 0\}\) where \(h\) is a continuously differentiable function. The function \(h\) is called a _(zeroing) CBF_ of relative degree 1, if there exists a constant \(\gamma>0\) such that \(\sup_{u\in U}\left[L_{f}h(x)+L_{g}h(x)u+\gamma h(x)\right]\geq 0\) where \(L_{f}h(x)=\frac{\partial h}{\partial x}f(x)\) and \(L_{g}h(x)=\frac{\partial h}{\partial x}g(x)\) are Lie derivatives [2]. In this paper, we assume there is no constraint on the input \(u\), i.e., \(U=\mathbb{R}^{m}\). The following result that guarantees the forward invariance of \(\mathcal{C}\) is given in [2].

**Lemma 1**: _[2, Corollary 7] If \(h\) is a (zeroing) CBF on \(\mathbb{R}^{n}\), then any Lipschitz continuous controller \(u:\mathbb{R}^{n}\to U\) such that \(u(x)\in K(x)\triangleq\{u\in U\mid L_{f}h(x)+L_{g}h(x)u+\gamma h(x)\geq 0\}\) will guarantee the forward invariance of \(\mathcal{C}\), i.e., the safety of the closed-loop system._

By including the CBF condition into a convex QP, the provably safe controller is obtained by solving a CBF-QP online. The time-varying CBF and its safety guarantee for a time-varying system are discussed in [22].

#### II-A2 Barrier Lyapunov Function

In contrast to CBFs, BLFs are positive definite functions that are more tightly connected with Lyapunov functions.

**Definition 1**: _[5, Definition 2] A barrier Lyapunov function is a scalar function \(V(x)\), defined with respect to the system \(\dot{x}=f(x)\) on an open region \(\mathcal{D}\) containing the origin, that is continuous, positive definite, has continuous first-order partial derivatives at every point of \(\mathcal{D}\), has the property \(V(x)\rightarrow\infty\) as \(x\) approaches the boundary of \(\mathcal{D}\), and satisfies \(V(x(t))\leq b\) for any \(t>0\) along the solution of \(\dot{x}=f(x)\) for \(x(0)\in\mathcal{D}\) and some positive constant \(b\)._

The following lemma is used for BLF control design to guarantee that constraints on the output or state are satisfied.

**Lemma 2**: _[5, Lemma 1] For any positive constants \(k_{a_{1}}\), \(k_{b_{1}}\), let \(\mathcal{Z}_{1}\triangleq\{z_{1}\in\mathbb{R}:-k_{a_{1}}<z_{1}<k_{b_{1}}\}\subset\mathbb{R}\) and \(\mathcal{N}\triangleq\mathbb{R}^{l}\times\mathcal{Z}_{1}\subset\mathbb{R}^{l+1}\) be open sets. Consider the system \(\dot{\eta}=h(\eta,t)\) where \(\eta\triangleq[w^{\top},z_{1}]^{\top}\in\mathcal{N}\), and \(h:\mathbb{R}_{\geq 0}\times\mathcal{N}\rightarrow\mathbb{R}^{l+1}\) is piecewise continuous in \(t\) and locally Lipschitz in \(\eta\), uniformly in \(t\), on \(\mathbb{R}_{\geq 0}\times\mathcal{N}\). Suppose that there exist functions \(U:\mathbb{R}^{l}\rightarrow\mathbb{R}_{\geq 0}\) and \(V_{1}:\mathcal{Z}_{1}\rightarrow\mathbb{R}_{\geq 0}\), continuously differentiable and positive definite in their respective domains, such that \(V_{1}(z_{1})\rightarrow\infty\) as \(z_{1}\rightarrow-k_{a_{1}}\) or \(z_{1}\to k_{b_{1}}\), and \(\gamma_{1}(\|w\|)\leq U(w)\leq\gamma_{2}(\|w\|)\), where \(\gamma_{1}\) and \(\gamma_{2}\) are class \(\mathcal{K}_{\infty}\) functions.
Let \(V(\eta)\triangleq V_{1}(z_{1})+U(w)\), and let \(z_{1}(0)\) belong to the set \(z_{1}\in(-k_{a_{1}},k_{b_{1}})\). If the inequality \(\dot{V}=\frac{\partial V}{\partial\eta}h\leq 0\) holds, then \(z_{1}(t)\) remains in the open set \(z_{1}\in(-k_{a_{1}},k_{b_{1}})\), \(\forall t\in[0,\infty)\)._

### _Euler-Lagrange Systems_

Consider an EL system given as follows [13, 23]: \[\dot{q}=\omega, \tag{1a}\] \[\dot{\omega}=M^{-1}(q)\left(\tau-C(q,\omega)\omega-G(q)+\tau_{d}\right), \tag{1b}\] where \(q\in\mathbb{R}^{n}\) is the generalized coordinate, \(\omega\in\mathbb{R}^{n}\) is the generalized velocity, \(\tau\in\mathbb{R}^{n}\) is the control input, \(\tau_{d}:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}^{n}\) is the external disturbance, \(M:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times n}\) is the inertia matrix, \(C:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n\times n}\) is the Coriolis/centripetal matrix, and \(G:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is the gravity term. We assume that the exact knowledge of the velocity \(\omega\) is not known, and denote the measured generalized velocity as \(\hat{\omega}\) (e.g., in some application scenarios, \(\omega\) is obtained by numerically differentiating \(q\), so that it may be contaminated by measurement noise); therefore the velocity measurement uncertainty can be defined as \[\xi=\omega-\hat{\omega}.\] Furthermore, we assume that \(\tau_{d}\), \(\xi\), and \(\dot{\xi}\) are all bounded.

**Assumption 1**: _The disturbance \(\tau_{d}\) satisfies \(\|\tau_{d}\|\leq D_{0}\) where \(D_{0}>0\) is a positive constant._

**Assumption 2**: _The measurement uncertainty \(\xi\) and its derivative \(\dot{\xi}\) are bounded as \(\|\xi\|\leq D_{1}\) and \(\|\dot{\xi}\|\leq D_{2}\), where \(D_{1}\) and \(D_{2}\) are positive constants._

Note that Assumption 1 is extensively used in the robust control literature, and numerous state estimation techniques have been developed to ensure that the state estimation error is bounded. The system given in (1) has the following properties that will be exploited in the subsequent control design [24].

**Property 1 (P1):** The matrix \(M\) is positive definite, symmetric, and satisfies \[\lambda_{1}\|x\|^{2}\leq x^{\top}M(q)x\leq\lambda_{2}\|x\|^{2},\quad\forall x,q\in\mathbb{R}^{n}, \tag{2}\] where \(\lambda_{1}\), \(\lambda_{2}\) are positive constants.

**Property 2 (P2):** The matrices \(C(q,\omega)\) and \(G(q)\) satisfy \[\|C(q,\omega)\|\leq\zeta_{c}\|\omega\|,\quad\|G(q)\|\leq\zeta_{g},\quad\forall q,\omega\in\mathbb{R}^{n}, \tag{3}\] where \(\zeta_{c}\) and \(\zeta_{g}\) are positive constants.

### _Problem Statement_

In this work, we consider provably safe control design for an EL system given in (1) with _limited information_. Specifically, we assume that the matrices \(M,C,G\) in (1) are unknown and satisfy inequalities (2) and (3) but only \(\lambda_{2}\) is known. With such an EL system, the first problem we aim to solve is to design a feedback controller based on the knowledge of \(q\) and \(\hat{\omega}\) to ensure the safety of the system in the joint space.

**Problem 1:** Consider an EL system described by (1) where the matrices \(M,C,G\) are unknown, and a joint space safe set \(\mathcal{C}_{q}\) defined as \[\mathcal{C}_{q}=\{q\in\mathbb{R}^{n}:h(q)\geq 0\}, \tag{4}\] where \(h\) is a twice differentiable function.
Suppose that Assumptions 1 and 2 hold with \(D_{0}\), \(D_{2}\) unknown, and \(M,C,G\) satisfy inequalities (2) and (3) with constant \(\lambda_{2}\) known and constants \(\lambda_{1},\zeta_{c},\zeta_{g}\) unknown. Design a feedback control law \(\tau(q(t),\hat{\omega}(t),t)\) such that the closed-loop system is always safe with respect to \(\mathcal{C}_{q}\), i.e., \(h(q(t))\geq 0,\forall t\geq 0\).

The second problem we aim to solve is about designing a safe controller in the task space.

**Problem 2:** Consider an EL system described by (1) where the matrices \(M,C,G\) are unknown, and the forward kinematics of the EL system: \[p=f(q), \tag{5}\] where \(p\in\mathbb{R}^{k}\) denotes the variable of the task space and \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}\) represents a continuously differentiable function with \(k\leq n\). Consider a task space safe set \(\mathcal{C}_{p}\) defined as \[\mathcal{C}_{p}=\{p\in\mathbb{R}^{k}:h(p)\geq 0\}, \tag{6}\] where \(h\) is a twice differentiable function. Suppose that Assumptions 1 and 2 hold with \(D_{0},D_{2}\) unknown, and \(M,C,G\) satisfy inequalities (2) and (3) with constant \(\lambda_{2}\) known and constants \(\lambda_{1},\zeta_{c},\zeta_{g}\) unknown. Design a feedback control law \(\tau(p(t),q(t),\hat{\omega}(t),t)\) such that the closed-loop system is safe with respect to \(\mathcal{C}_{p}\), i.e., \(h(p(t))\geq 0,\forall t\geq 0\).

The main difficulty of Problems 1 and 2 lies in the limited information of the EL system: \(\lambda_{1}\), \(D_{0}\), \(D_{2}\), \(\zeta_{c}\), \(\zeta_{g}\) are assumed to be unknown in control design. The proposed controller in this work is highly robust to model uncertainties and can be easily transferred between different EL systems without re-designing the control laws. Existing safe control design approaches for EL systems are not applicable to solve the problems in this work because they rely on the exact forms of \(M\), \(C\), \(G\) or the values of \(\lambda_{1}\), \(D_{0}\), \(D_{2}\), \(\zeta_{c}\), \(\zeta_{g}\); see [14, 15, 16, 17, 18, 19, 20] for more details.

## III Joint Space Safe Control

In this section, a novel proxy-CBF-BLF-based method will be presented to solve Problem 1 for the EL system with limited information, external disturbances, and measurement uncertainties. We will show the main idea of the method in Subsection III-A, propose an adaptive BLF-based control design approach for the virtual tracking subsystem in Subsection III-B, and present a CBF-based control design strategy for the proxy subsystem in Subsection III-C.

### _Method Overview_

The main idea of our method is to decompose an EL system into two subsystems, called the proxy1 subsystem and the virtual tracking subsystem, and use the CBF and BLF to design safe controllers for the two subsystems, respectively, such that the overall controller will ensure the safety of the EL system (see Fig. 1 for illustration).

Footnote 1: The term “proxy” is inspired by proxy-based sliding mode control [25] and haptic rendering [26].

The proxy subsystem is given as: \[\dot{q}=\mu+e_{q}+\xi, \tag{7a}\] \[\dot{\mu}=\nu, \tag{7b}\] where \(\mu\) is the virtual safe velocity with \(\mu(0)=\hat{\omega}(0)\), \(e_{q}\) is the virtual velocity tracking error defined as \[e_{q}=\hat{\omega}-\mu, \tag{8}\] and \(\nu\) is the virtual control input to be designed. Note that (7) is equivalent to (1a) augmented with an integrator.
The virtual tracking subsystem is given as: \[\dot{e}_{q}=M(q)^{-1}(\tau-C(q,\omega)\omega-G(q)+\tau_{d})-\dot{\xi}-\nu \tag{9}\] where \(\tau\) is the control input to be designed and \(\nu\) is from the proxy subsystem (7). With this decomposition, Problem 1 can be solved by accomplishing two tasks shown as follows.

**Task 1:** For the virtual tracking subsystem (9), design a controller \(\tau\) to guarantee \[\|e_{q}(t)\|<L,\forall t\geq 0, \tag{10}\] where \(L>0\) is an arbitrary positive constant.

**Task 2:** For the proxy subsystem (7), design a control law \(\nu\) to ensure \(h(q(t))\geq 0,\forall t\geq 0\), under the assumption that \(\|e_{q}(t)\|<L,\forall t\geq 0\).

**Remark 1:** In [20], a safe velocity is designed based on reduced-order kinematics, which is similar to (7a) in our proxy subsystem. However, including an additional integrator as shown in (7b) is important because \(\dot{\mu}\), which is equal to \(\nu\), is required in the virtual tracking subsystem (9), and \(L\) can be selected to be arbitrarily small, thereby reducing the potential conservatism of the safe controller (see Remark 4). Nevertheless, the added integrator will result in a system with a mismatched virtual disturbance, \(e_{q}+\xi\); a new CBF-based safe control scheme will be proposed for such a system in Section III-C.

### _BLF-based Control For the Virtual Tracking Subsystem_

In this subsection, an adaptive BLF-based controller will be presented to accomplish Task 1. The BLF-based method is suitable for this task because it does not rely on the bounds of the unknown parameters and the external disturbances. Inspired by our previous work [27], the following theorem presents a controller \(\tau\) for the virtual tracking subsystem to ensure \(\|e_{q}(t)\|<L,\forall t\geq 0\).

**Theorem 1**: _Consider the virtual tracking subsystem (9) where the matrices \(M,C,G\) are unknown. Suppose that Assumptions 1 and 2 hold with \(D_{0}\), \(D_{2}\) unknown, and \(M,C,G\) satisfy inequalities (2) and (3) with constant \(\lambda_{2}\) known and constants \(\lambda_{1},\zeta_{c},\zeta_{g}\) unknown. Suppose that the controller \(\tau\) is designed as_ \[\tau=-\lambda_{2}e_{q}\mathcal{N} \tag{11}\] _where_ \[\mathcal{N}=k_{1}+\frac{(\hat{\theta}_{1}\varphi)^{2}}{\hat{\theta}_{1}\varphi\|e_{q}\|+\epsilon_{1}}+\frac{\hat{\theta}_{2}^{2}}{\hat{\theta}_{2}\|e_{q}\|+\epsilon_{2}}+\frac{\|\nu\|^{2}}{\|e_{q}\|\|\nu\|+\epsilon}, \tag{12a}\] \[\dot{\hat{\theta}}_{1}=-\gamma_{\theta}\hat{\theta}_{1}+\frac{\|e_{q}\|\varphi}{L^{2}-\|e_{q}\|^{2}}, \tag{12b}\] \[\dot{\hat{\theta}}_{2}=-\gamma_{\theta}\hat{\theta}_{2}+\frac{\|e_{q}\|}{L^{2}-\|e_{q}\|^{2}}, \tag{12c}\] _and \(\varphi=(\|\hat{\omega}\|+D_{1})^{2}\), with positive constants \(\epsilon,\epsilon_{1},\epsilon_{2},\gamma_{\theta}>0\), \(k_{1}>\frac{\Lambda}{L^{2}}\), and \(\Lambda=\epsilon+\epsilon_{1}+\epsilon_{2}\). If \(\hat{\theta}_{1}(0),\hat{\theta}_{2}(0)>0\), then \(\|e_{q}(t)\|<L\) for any \(t\geq 0\)._

From (12b)-(12c), \(\dot{\hat{\theta}}_{1}\geq-\gamma_{\theta}\hat{\theta}_{1}\) and \(\dot{\hat{\theta}}_{2}\geq-\gamma_{\theta}\hat{\theta}_{2}\) hold in the open set \(\mathcal{Z}_{L}\triangleq\{e_{q}\in\mathbb{R}^{n}\ |\ \|e_{q}\|<L\}\). Since \(\hat{\theta}_{1}(0)>0,\hat{\theta}_{2}(0)>0\), it is easy to see that \(\hat{\theta}_{1}(t)\geq 0\) and \(\hat{\theta}_{2}(t)\geq 0\) for any \(t\geq 0\) by the Comparison Lemma [28, Lemma 2.5].
Define \(\theta_{1}=\zeta_{c}\lambda_{1}^{-1}\) and \(\theta_{2}=\lambda_{1}^{-1}(\zeta_{g}+D_{0})+D_{2}\), which are unknown parameters because \(\lambda_{1},\zeta_{c},\zeta_{g},D_{2}\) are unknown. Define a candidate BLF as \[V=\frac{1}{2}\log\left(\frac{L^{2}}{L^{2}-\|e_{q}\|^{2}}\right)+ \frac{1}{2}\tilde{\theta}_{1}^{2}+\frac{1}{2}\tilde{\theta}_{2}^{2}, \tag{13}\] where \(\tilde{\theta}_{1}=\theta_{1}-\hat{\theta}_{1}\), \(\tilde{\theta}_{2}=\theta_{2}-\hat{\theta}_{2}\). The derivative of \(V\) in the open set \(\mathcal{Z}_{L}\) can be expressed as \[\dot{V} = \frac{e_{q}^{\top}}{L^{2}-\|e_{q}\|^{2}}(M^{-1}(\tau-C(q,\omega) \omega-G(q)+\tau_{d}) \tag{14}\] \[-\dot{\xi}-\nu)-\tilde{\theta}_{1}\dot{\hat{\theta}}_{1}-\tilde{ \theta}_{2}\dot{\hat{\theta}}_{2}\] \[\leq \frac{e_{q}^{\top}M^{-1}\tau}{L^{2}-\|e_{q}\|^{2}}+\frac{\|e_{q} \|}{L^{2}-\|e_{q}\|^{2}}(\|M^{-1}\|(\|C(q,\omega)\omega\|\] \[+\|G\|+\|\tau_{d}\|)+\|\dot{\xi}\|+\|\nu\|)-\tilde{\theta}_{1} \dot{\hat{\theta}}_{1}-\tilde{\theta}_{2}\dot{\hat{\theta}}_{2}\] \[\leq \frac{e_{q}^{\top}M^{-1}\tau}{L^{2}-\|e_{q}\|^{2}}+\frac{\|e_{q} \|}{L^{2}-\|e_{q}\|^{2}}(\lambda_{1}^{-1}(\zeta_{c}(\|\hat{\omega}\|+D_{1})^{2}\] \[+\zeta_{g}+D_{0})+D_{2}+\|\nu\|)-\tilde{\theta}_{1}\dot{\hat{ \theta}}_{1}-\tilde{\theta}_{2}\dot{\hat{\theta}}_{2}\] \[= \frac{e_{q}^{\top}M^{-1}\tau}{L^{2}-\|e_{q}\|^{2}}+\frac{\|e_{q} \|}{L^{2}-\|e_{q}\|^{2}}(\hat{\theta}_{1}\varphi+\hat{\theta}_{2}+\|\nu\|)\] \[-\tilde{\theta}_{1}\bigg{(}\dot{\hat{\theta}}_{1}\!-\!\frac{\|e_{ q}\|\varphi}{L^{2}\!-\!\|e_{q}\|^{2}}\bigg{)}\!-\!\tilde{\theta}_{2}\bigg{(}\dot{ \hat{\theta}}_{2}\!-\!\frac{\|e_{q}\|}{L^{2}\!-\!\|e_{q}\|^{2}}\bigg{)}\,,\] where the second inequality comes from \[\|C(q,\omega)\omega\|\stackrel{{\rm(P1)}}{{\leq}}\zeta_{c}\|\omega \|^{2}=\zeta_{c}\|\hat{\omega}+\xi\|^{2}\leq\zeta_{c}(\|\hat{\omega}\|+D_{1})^{2},\] and the third inequality arises from the fact \(\lambda_{2}^{-1}\leq\|M(q)^{-1}\|\leq\lambda_{1}^{-1}\) for any \(q\in\mathbb{R}^{n}\), according to Property 1. 
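For completeness, the candidate BLF (13) can be transcribed directly; the helper below (our naming) makes the barrier mechanism visible: \(V\) is finite only on \(\mathcal{Z}_{L}\) and grows without bound as \(\|e_{q}\|\to L\), so boundedness of \(V\) along a trajectory forces \(\|e_{q}(t)\|<L\).

```python
import numpy as np

def blf_value(e_q, th1_tilde, th2_tilde, L):
    """Candidate BLF (13); th1_tilde, th2_tilde are the scalar
    parameter-estimation errors theta_i - theta_i_hat."""
    ne2 = float(e_q @ e_q)
    assert ne2 < L ** 2, "V is only defined on the open set Z_L"
    return (0.5 * np.log(L ** 2 / (L ** 2 - ne2))
            + 0.5 * th1_tilde ** 2 + 0.5 * th2_tilde ** 2)
```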
Substituting (12) into (14) yields \[\dot{V} \leq \frac{1}{L^{2}-\|e_{q}\|^{2}}\bigg{(}-\lambda_{2}\underbrace{(e_{q}^{\top}M^{-1}e_{q})}_{\geq\lambda_{2}^{-1}\|e_{q}\|^{2}}\bigg{(}k_{1}+\underbrace{\frac{(\hat{\theta}_{1}\varphi)^{2}}{\hat{\theta}_{1}\varphi\|e_{q}\|+\epsilon_{1}}}_{\geq 0} \tag{15}\] \[+\underbrace{\frac{\hat{\theta}_{2}^{2}}{\hat{\theta}_{2}\|e_{q}\|+\epsilon_{2}}}_{\geq 0}+\underbrace{\frac{\|\nu\|^{2}}{\|e_{q}\|\|\nu\|+\epsilon}}_{\geq 0}\bigg{)}+\|e_{q}\|(\hat{\theta}_{1}\varphi+\hat{\theta}_{2}+\|\nu\|)\bigg{)}\] \[+\gamma_{\theta}(\tilde{\theta}_{1}\hat{\theta}_{1}+\tilde{\theta}_{2}\hat{\theta}_{2})\] \[\leq \frac{1}{L^{2}-\|e_{q}\|^{2}}\bigg{(}-k_{1}\|e_{q}\|^{2}+\bigg{(}\hat{\theta}_{1}\varphi\|e_{q}\|-\frac{(\hat{\theta}_{1}\varphi\|e_{q}\|)^{2}}{\hat{\theta}_{1}\varphi\|e_{q}\|+\epsilon_{1}}\bigg{)}\] \[+\bigg{(}\hat{\theta}_{2}\|e_{q}\|-\frac{(\hat{\theta}_{2}\|e_{q}\|)^{2}}{\hat{\theta}_{2}\|e_{q}\|+\epsilon_{2}}\bigg{)}+\bigg{(}\|e_{q}\|\|\nu\|-\frac{(\|e_{q}\|\|\nu\|)^{2}}{\|e_{q}\|\|\nu\|+\epsilon}\bigg{)}\bigg{)}\] \[+\gamma_{\theta}(\tilde{\theta}_{1}\hat{\theta}_{1}+\tilde{\theta}_{2}\hat{\theta}_{2})\] \[\leq \frac{1}{L^{2}-\|e_{q}\|^{2}}\left(-k_{1}\|e_{q}\|^{2}+\Lambda\right)+\gamma_{\theta}(\tilde{\theta}_{1}\hat{\theta}_{1}+\tilde{\theta}_{2}\hat{\theta}_{2}),\] where the last inequality comes from the fact that for any \(A\geq 0,\epsilon>0\), \(A-\frac{A^{2}}{A+\epsilon}=\frac{A\epsilon}{A+\epsilon}\leq\epsilon\) holds true. Noting that \(\frac{\Lambda}{L^{2}-\|e_{q}\|^{2}}=\frac{\Lambda}{L^{2}}+\frac{\Lambda}{L^{2}}\frac{\|e_{q}\|^{2}}{L^{2}-\|e_{q}\|^{2}}\), that \(\log\frac{L^{2}}{L^{2}-\|e_{q}\|^{2}}\leq\frac{\|e_{q}\|^{2}}{L^{2}-\|e_{q}\|^{2}}\), and that \(\gamma_{\theta}(\tilde{\theta}_{1}\hat{\theta}_{1}+\tilde{\theta}_{2}\hat{\theta}_{2})\leq\frac{\gamma_{\theta}}{2}(\theta_{1}^{2}+\theta_{2}^{2})-\frac{\gamma_{\theta}}{2}(\tilde{\theta}_{1}^{2}+\tilde{\theta}_{2}^{2})\), the condition \(k_{1}>\frac{\Lambda}{L^{2}}\) implies \(\dot{V}\leq-cV+d\) on \(\mathcal{Z}_{L}\) for some constants \(c,d>0\). Hence \(V(t)\) remains bounded, so the barrier term in (13) remains bounded, and therefore \(\|e_{q}(t)\|<L\) for any \(t\geq 0\).

With Task 1 accomplished, the following theorem gives a CBF-based condition on the virtual control input \(\nu\) that accomplishes Task 2.

**Theorem 2**: _Consider the proxy subsystem (7) and the safe set \(\mathcal{C}_{q}\). Suppose that \(h(q(0))\geq 0\), \(\|e_{q}(t)\|<L,\forall t\geq 0\), and there exist constants \(\lambda,\gamma,\beta>0\) such that_

(i) \(\frac{\partial h}{\partial q}(q(0))\mu(0)-\frac{1}{2\beta}\left\|\frac{\partial h}{\partial q}(q(0))\right\|^{2}-\frac{\beta(D_{1}+L)^{2}}{2}+\lambda h(q(0))\geq 0\);

(ii) the set \(K_{BF}^{q}(q,\mu)=\{\mathbf{u}\in\mathbb{R}^{n}:\Psi_{0}+\Psi_{1}\mathbf{u}\geq 0\}\) is not empty for any \(q\in\mathcal{C}_{q}\) and \(\mu\in\mathbb{R}^{n}\), where \[\Psi_{0}=\mathcal{M}\mu-\|\mathcal{M}\|\,(D_{1}+L)+\gamma\bar{h}, \tag{17a}\] \[\Psi_{1}=\frac{\partial h}{\partial q}, \tag{17b}\] with \(\mathcal{M}=\mu^{\top}\mathrm{H}_{h}-\frac{1}{\beta}\frac{\partial h}{\partial q}\mathrm{H}_{h}+\lambda\frac{\partial h}{\partial q}\), \(\mathrm{H}_{h}=\frac{\partial^{2}h}{\partial q^{2}}\) denotes the Hessian, and \(\bar{h}=\frac{\partial h}{\partial q}\mu-\frac{1}{2\beta}\left\|\frac{\partial h}{\partial q}\right\|^{2}-\frac{\beta(D_{1}+L)^{2}}{2}+\lambda h\). Then, any Lipschitz continuous control input \(\nu\in K_{BF}^{q}(q,\mu)\) will make \(h(q(t))\geq 0\) for any \(t\geq 0\).

First, we show that \(\nu\in K_{BF}^{q}(q,\mu)\Longrightarrow\bar{h}(t)\geq 0\) for any \(t\geq 0\). Note that Condition (i) indicates that \(\bar{h}(q(0),\mu(0))\geq 0\). Meanwhile, the derivative of \(\bar{h}\) can be expressed as \[\dot{\bar{h}} = \frac{\partial h}{\partial q}\nu+\left(\mu^{\top}\mathrm{H}_{h}-\frac{1}{\beta}\frac{\partial h}{\partial q}\mathrm{H}_{h}+\lambda\frac{\partial h}{\partial q}\right)(\mu+e_{q}+\xi)\] \[= \frac{\partial h}{\partial q}\nu+\mathcal{M}\mu+\mathcal{M}(e_{q}+\xi)\] \[\geq \Psi_{1}\nu+\mathcal{M}\mu-\|\mathcal{M}\|\,(D_{1}+L),\] where the inequality uses \(\|e_{q}+\xi\|\leq\|e_{q}\|+\|\xi\|\leq L+D_{1}\). Selecting \(\nu\in K_{BF}^{q}(q,\mu)\) yields \(\dot{\bar{h}}\geq-\Psi_{0}+\mathcal{M}\mu-\|\mathcal{M}\|\,(D_{1}+L)=-\gamma\bar{h}\), which indicates \(\bar{h}(q(t),\mu(t))\geq 0,\forall t\geq 0\) because \(\bar{h}(q(0),\mu(0))\geq 0\).
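The safe input set in Condition (ii) is affine in \(\nu\), so it is straightforward to evaluate numerically. The sketch below is ours (`h`, `grad_h`, `hess_h` are user-supplied callables): it assembles \(\Psi_{0},\Psi_{1}\) from (17) and then keeps any nominal input safe by the standard minimum-deviation projection onto the half-space, which is exactly what the CBF-QP introduced below computes, since there is a single affine constraint.

```python
import numpy as np

def cbf_constraint(q, mu, h, grad_h, hess_h, D1, L, beta, lam, gamma):
    """Assemble Psi0, Psi1 of (17): the safe set is {nu: Psi0 + Psi1@nu >= 0}."""
    g, H = grad_h(q), hess_h(q)                   # dh/dq (n,), Hessian (n,n)
    M_cal = mu @ H - (g @ H) / beta + lam * g     # the row vector M in (17)
    h_bar = g @ mu - g @ g / (2*beta) - beta*(D1+L)**2/2 + lam*h(q)
    Psi0 = M_cal @ mu - np.linalg.norm(M_cal)*(D1+L) + gamma*h_bar
    return Psi0, g

def safety_filter(nu_d, Psi0, Psi1):
    """Projection of a nominal input nu_d onto {Psi0 + Psi1@nu >= 0};
    assumes Psi1 != 0 (Condition (ii) guarantees feasibility)."""
    slack = Psi0 + Psi1 @ nu_d
    if slack >= 0:
        return nu_d                                # already safe
    return nu_d - (slack / (Psi1 @ Psi1)) * Psi1   # closed-form QP solution
```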
Since \[\dot{h}+\lambda h = \frac{\partial h}{\partial q}(\mu+e_{q}+\xi)+\lambda h\] \[\geq \frac{\partial h}{\partial q}\mu-\frac{1}{2\beta}\left\|\frac{\partial h}{\partial q}\right\|^{2}-\frac{\beta}{2}\|e_{q}+\xi\|^{2}+\lambda h\] \[\geq \frac{\partial h}{\partial q}\mu-\frac{1}{2\beta}\left\|\frac{\partial h}{\partial q}\right\|^{2}-\frac{\beta(D_{1}+L)^{2}}{2}+\lambda h\] \[= \bar{h}(q,\mu)\geq 0,\] where the first inequality follows from Young's inequality and the second one from \(\|e_{q}+\xi\|\leq D_{1}+L\), one can conclude that \(h(q(t))\geq 0\) for all \(t\geq 0\), since \(h(q(0))\geq 0\).

The safe virtual controller proposed in Theorem 2 is obtained by solving the following CBF-QP: \[\min_{\nu}\ \|\nu-\nu_{d}\|^{2}\quad\text{s.t.}\quad\Psi_{0}+\Psi_{1}\nu\geq 0, \tag{18}\] where \(\Psi_{0},\Psi_{1}\) are given in (17) and \(\nu_{d}\) is any given nominal control law. The overall safe feedback control law \(\tau(q(t),\hat{\omega}(t),t)\) for the EL system (1) combines the control law \(\tau\) given in (11) with the virtual control law \(\nu\) given in (18). By Theorems 1 and 2, the safe controller will ensure that the closed-loop system is always safe with respect to \(\mathcal{C}_{q}\), i.e., \(h(q(t))\geq 0\) for all \(t\geq 0\).

**Remark 2**: _The nominal control law \(\nu_{d}\) can be designed as \(\nu_{d}=-\alpha_{1}E_{q}-\alpha_{2}E_{\mu}+\hat{q}_{d}\), where \(E_{q}\triangleq q-q_{d}\), \(E_{\mu}\triangleq\mu-\hat{q}_{d}\), \(q_{d}\) denotes the reference trajectory, and \(\alpha_{1},\alpha_{2}\in\mathbb{R}\) are selected such that_ \[\sigma_{\min}\left(\begin{bmatrix}0_{n\times n}&I_{n\times n}\\ -\alpha_{1}I_{n\times n}&-\alpha_{2}I_{n\times n}\end{bmatrix}\right)\triangleq-\alpha<-\frac{1}{2}.\] _Define a Lyapunov candidate function as \(V=\frac{1}{2}\varepsilon^{\top}\varepsilon\), where \(\varepsilon=[E_{q}^{\top}\ E_{\mu}^{\top}]^{\top}\). Since \(\dot{V}\) satisfies_ \[\dot{V} =\varepsilon^{\top}\begin{bmatrix}0_{n\times n}&I_{n\times n}\\ -\alpha_{1}I_{n\times n}&-\alpha_{2}I_{n\times n}\end{bmatrix}\varepsilon+E_{q}^{\top}(e_{q}+\xi)\] \[\leq-\,(2\alpha-1)\,V+\frac{(D_{1}+L)^{2}}{2},\] _the tracking error is uniformly ultimately bounded [28]._

**Remark 3**: _Suppose that \(q^{*}\) is the unique zero of \(\Psi_{1}\) in \(\mathcal{C}_{q}\). We claim that if_ \[\mathrm{H}_{h}^{*}\triangleq\mathrm{H}_{h}(q^{*})\succ aI_{n\times n},\ h^{*}\triangleq h(q^{*})>0, \tag{19}\] _where \(a\) is an arbitrary positive constant, then one can always find \(\gamma,\beta,\lambda>0\) such that Conditions (i) and (ii) in Theorem 2 hold true. Indeed, one can easily select \(\beta\) and \(\lambda\) such that Condition (i) is fulfilled. Meanwhile, from (17) one can observe that \(\Psi_{0}^{*}\triangleq\Psi_{0}(q^{*},\mu)=\mu^{\top}\mathrm{H}_{h}^{*}\mu-\|\mu^{\top}\mathrm{H}_{h}^{*}\|(D_{1}+L)+\gamma\lambda h^{*}-\frac{\beta(D_{1}+L)^{2}}{2}\) satisfies_ \[\Psi_{0}^{*} \geq a\|\mu\|^{2}-\|\mathrm{H}_{h}^{*}\|(D_{1}+L)\|\mu\|+\gamma\lambda h^{*}-\frac{\beta(D_{1}+L)^{2}}{2}\] \[=a\left(\|\mu\|-\frac{\|\mathrm{H}_{h}^{*}\|(D_{1}+L)}{2a}\right)^{2}+\gamma\lambda h^{*}-\Xi\] \[\geq\gamma\lambda h^{*}-\Xi,\] _where \(\Xi=\frac{\|\mathrm{H}_{h}^{*}\|^{2}(D_{1}+L)^{2}}{4a}+\frac{\beta(D_{1}+L)^{2}}{2}\). It is obvious that selecting \(\gamma\geq\frac{\Xi}{\lambda h^{*}}\) will yield \(\Psi_{0}\geq 0\), such that \(K_{BF}^{q}\) is not empty when \(q=q^{*}\), which shows the correctness of the claim.
Furthermore, it is obvious that if \(\Psi_{1}\) has finitely many zeros in \(\mathcal{C}_{q}\) and each zero satisfies (19), then one can always select appropriate \(\gamma,\lambda,\beta\) such that Conditions (i) and (ii) of Theorem 2 are satisfied. Nevertheless, it should be noticed that (19) is not the only criterion for verifying the conditions in Theorem 2. Developing systematic methods to design \(h\) satisfying these conditions will be our future work._

**Remark 4**: _The bound \(L\) for \(\|e_{q}\|\) as given in (10) should be carefully selected to achieve a trade-off between the control performance and the maximum magnitude of the control input. If \(L\) is selected to be very small, the control input tends to be significant because the state is more likely to approach the boundary of the output constraint; if \(L\) is chosen to be large, unnecessary conservatism (i.e., the system only operates in a subset of the original safety set) may be introduced because in Theorem 2 the worst-case of \(e_{q}+\xi\) is considered._

If the proxy subsystem were not augmented with an additional integrator, the requirement on \(L\) would be more restrictive, i.e., \(L\geq|\hat{\omega}(0)-\mu(q(0))|\) would be required. In practice, this may necessitate the selection of a larger \(L\), which could result in unnecessary conservatism. Meanwhile, \(L\) is used in the design of \(\mu\) to guarantee safety, which implies that \(\mu(q(0))\) implicitly relies on \(L\). Thus, in some cases, it may be difficult to find an appropriate \(L\) that satisfies \(L\geq|\hat{\omega}(0)-\mu(q(0))|\).

## IV Task Space Safe Control

In this section, we will utilize the idea presented in the preceding section to solve the task space safe control problem for the EL system with limited information, external disturbances, and measurement uncertainties. The proxy subsystem in task space is more complicated to control than that in joint space; therefore, a different CBF-based control scheme is proposed. Differentiating (5) yields \[\dot{p}=J(q)\omega=J(q)(\hat{\omega}+\xi), \tag{20}\] where \(J=\frac{\partial f}{\partial q}\) denotes the Jacobian [23]. Substituting (20) into (1) yields \[\dot{p}=J(q)(\hat{\omega}+\xi), \tag{21a}\] \[\dot{\hat{\omega}}=M^{-1}(q)(\tau-C(q,\omega)\omega-G(q)+\tau_{d})-\dot{\xi}. \tag{21b}\] System (21) can be decomposed into the proxy subsystem and the virtual tracking subsystem similar to Section III. The proxy subsystem is given as: \[\dot{p} =J(q)\eta+J(q)(e_{p}+\xi), \tag{22a}\] \[\dot{\eta} =\upsilon, \tag{22b}\] where \(\eta\) is the virtual state with \(\eta(0)=\hat{\omega}(0)\), \(e_{p}\triangleq\hat{\omega}-\eta\), and \(\upsilon\) denotes the virtual control input to be designed. The virtual tracking subsystem is given as: \[\dot{e}_{p}=M^{-1}(\tau-C(q,\omega)\omega-G(q)+\tau_{d})-\dot{\xi}-\upsilon. \tag{23}\] Note that system (23) corresponds to system (9), for which the adaptive BLF-based controller developed in Theorem 1 is still applicable. On the other hand, the CBF-based controller presented in Theorem 2 is inapplicable to the proxy subsystem given in (22) because (22) is different from (7). We will design a new CBF-based safe control law for (22) to ensure the forward invariance of \(\mathcal{C}_{p}\). To that end, we first design a nominal tracking controller for the proxy subsystem (22) based on backstepping [29], as shown in the following proposition.

**Proposition 1**: _Consider the proxy subsystem (22) and a desired trajectory \(p_{d}\).
Suppose that \(\|e_{p}\|<L\) and the Jacobian \(J\) has full row rank, i.e., there exists \(J^{\dagger}\) such that \(JJ^{\dagger}=I_{k\times k}\). If the desired control input \(\upsilon_{d}\) is designed as_ \[\delta =J^{\dagger}\left(-l_{1}\epsilon_{d}+\dot{p}_{d}-\frac{\|J\|^{2}}{2}\epsilon_{d}\right), \tag{24a}\] \[\upsilon_{d} =-l_{2}\epsilon_{\eta}+\frac{\partial\delta}{\partial p}J\eta+\frac{\partial\delta}{\partial t}-\frac{1}{2}\left\|\frac{\partial\delta}{\partial p}J\right\|^{2}\epsilon_{\eta}-J^{\top}\epsilon_{d}, \tag{24b}\] _where \(\epsilon_{d}=p-p_{d}\), \(\epsilon_{\eta}=\eta-\delta\), and \(l_{1},l_{2}>0\) are arbitrary positive constants, then the tracking error \(\epsilon_{d}\) is uniformly ultimately bounded._

Define a Lyapunov candidate function as \(V_{1}=\frac{1}{2}\epsilon_{d}^{\top}\epsilon_{d}\). The derivative of \(V_{1}\) satisfies \[\dot{V}_{1} = \epsilon_{d}^{\top}(J\delta+J\epsilon_{\eta}+J(e_{p}+\xi)-\dot{p}_{d})\] \[\leq \epsilon_{d}^{\top}(J\delta+J\epsilon_{\eta}-\dot{p}_{d})+\frac{\|\epsilon_{d}\|^{2}\|J\|^{2}}{2}+\frac{(D_{1}+L)^{2}}{2}\] \[\stackrel{{\text{(24a)}}}{{\leq}}-l_{1}\|\epsilon_{d}\|^{2}+\epsilon_{d}^{\top}J\epsilon_{\eta}+\frac{(D_{1}+L)^{2}}{2}.\] Then, an augmented Lyapunov candidate function is designed as \(V_{2}=V_{1}+\frac{1}{2}\epsilon_{\eta}^{\top}\epsilon_{\eta}\), whose derivative can be expressed as \[\dot{V}_{2} \leq -l_{1}\|\epsilon_{d}\|^{2}+\epsilon_{\eta}^{\top}J^{\top}\epsilon_{d}+\frac{(D_{1}+L)^{2}}{2}+\epsilon_{\eta}^{\top}\left(\upsilon_{d}-\frac{\partial\delta}{\partial p}J(\eta+e_{p}+\xi)-\frac{\partial\delta}{\partial t}\right)\] \[\leq -l_{1}\|\epsilon_{d}\|^{2}+\epsilon_{\eta}^{\top}J^{\top}\epsilon_{d}+(D_{1}+L)^{2}+\epsilon_{\eta}^{\top}\left(\upsilon_{d}-\frac{\partial\delta}{\partial p}J\eta-\frac{\partial\delta}{\partial t}\right)+\frac{1}{2}\|\epsilon_{\eta}\|^{2}\left\|\frac{\partial\delta}{\partial p}J\right\|^{2}\] \[\stackrel{{\text{(24b)}}}{{\leq}} -l_{1}\|\epsilon_{d}\|^{2}-l_{2}\|\epsilon_{\eta}\|^{2}+(D_{1}+L)^{2}.\] Therefore, the tracking error \(\epsilon_{d}\) is uniformly ultimately bounded [28].

A CBF-based safe control law is proposed for the proxy subsystem (22) in the following theorem.

**Theorem 3**: _Consider the proxy subsystem (22) and the set \(\mathcal{C}_{p}\) defined in (6). Suppose that \(h(p(0))\geq 0\), \(\|e_{p}(t)\|<L,\forall t\geq 0\), and there exist constants \(\lambda,\gamma,\beta>0\) such that_

(i) \(\frac{\partial h}{\partial p}(p(0))J(q(0))\eta(0)\) \(-\) \(\frac{1}{2\beta}\left\|\frac{\partial h}{\partial p}(p(0))J(q(0))\right\|^{2}\) \(-\) \(\frac{\beta(D_{1}+L)^{2}}{2}+\lambda h(p(0))\geq 0\);

(ii) _the set \(K_{BF}^{p}(p,q,\eta)=\{\mathbf{u}\in\mathbb{R}^{n}:\Phi_{0}+\Phi_{1}\mathbf{u}\geq 0\}\) is not empty for any \(p\in\mathcal{C}_{p}\) and \(\eta\in\mathbb{R}^{n}\), where_ \[\Phi_{0} =\frac{\partial\bar{h}}{\partial q}(J\eta+\hat{\omega})-\left\|\frac{\partial\bar{h}}{\partial q}J\right\|(D_{1}+L)-\left\|\frac{\partial\bar{h}}{\partial q}\right\|D_{1}+\gamma\bar{h}, \tag{25a}\] \[\Phi_{1} =\frac{\partial h}{\partial p}J, \tag{25b}\] _with \(\bar{h}=\frac{\partial h}{\partial p}J\eta-\frac{1}{2\beta}\left\|\frac{\partial h}{\partial p}J\right\|^{2}-\frac{\beta(D_{1}+L)^{2}}{2}+\lambda h\)._ Then, any Lipschitz continuous control input \(\upsilon\in K_{BF}^{p}\) will make \(h(p(t))\geq 0\) for any \(t\geq 0\). We only sketch the proof, due to space limitations and its similarity to the proof of Theorem 2.
One can see that selecting \(\upsilon\in K_{BF}^{p}\) ensures \(\dot{\bar{h}}\geq-\gamma\bar{h}\); therefore, \(\bar{h}(t)\geq 0\) for any \(t\geq 0\) since Condition (i) implies \(\bar{h}(p(0),q(0),\eta(0))\geq 0\). Then, it can be proved that \(\bar{h}(t)\geq 0\implies h(t)\geq 0\) for any \(t\geq 0\).

Based on Proposition 1 and Theorem 3, the safe virtual controller \(\upsilon\) can be obtained by solving a CBF-QP: \[\min_{\upsilon}\ \|\upsilon-\upsilon_{d}\|^{2}\quad\text{s.t.}\quad\Phi_{0}+\Phi_{1}\upsilon\geq 0, \tag{26}\] where \(\Phi_{0},\Phi_{1}\) are given in (25) and \(\upsilon_{d}\) is presented in (24). The safe feedback control law \(\tau(p(t),q(t),\hat{\omega}(t),t)\) for the EL system (1) combines the control law \(\tau\) given in (11) with the virtual control law \(\upsilon\) given in (26). By Theorems 1 and 3, the control law \(\tau(p(t),q(t),\hat{\omega}(t),t)\) will ensure the safety of the closed-loop system with respect to \(\mathcal{C}_{p}\), i.e., \(h(p(t))\geq 0\) for all \(t\geq 0\).

## V Simulation

In this section, numerical simulation results are presented to demonstrate the effectiveness of the proposed method. Consider a two-link robot manipulator, whose dynamics can be described by (1) with \[M(q)=\begin{bmatrix}\frac{m_{1}l^{2}}{3}+\frac{4m_{2}l^{2}}{3}+m_{2}l^{2}\cos q_{2}&\frac{m_{2}l^{2}}{3}+\frac{m_{2}l^{2}}{2}\cos q_{2}\\ \frac{m_{2}l^{2}}{3}+\frac{m_{2}l^{2}}{2}\cos q_{2}&\frac{m_{2}l^{2}}{3}\end{bmatrix},\] \[C(q,\omega)=\begin{bmatrix}-\frac{m_{2}l^{2}}{2}\dot{q}_{2}\sin q_{2}&-\frac{m_{2}l^{2}}{2}(\dot{q}_{1}+\dot{q}_{2})\sin q_{2}\\ \frac{m_{2}l^{2}}{2}\dot{q}_{1}\sin q_{2}&0\end{bmatrix},\] \[G(q)=\begin{bmatrix}\frac{m_{1}gl}{2}\cos q_{1}+\frac{m_{2}gl}{2}\cos(q_{1}+q_{2})+m_{2}gl\cos q_{1}\\ \frac{m_{2}gl}{2}\cos(q_{1}+q_{2})\end{bmatrix},\] where \(m_{1}=m_{2}=1\) kg, \(l=1\) m, and \(q=[q_{1}\ q_{2}]\in\mathbb{R}^{2}\) denotes the joint angle vector.

### _Joint Space Safe Control_

In this subsection, simulation results of the joint space safe control are presented. The reference trajectories are \(q_{1d}=q_{2d}=3\sin(t)\); four CBFs are selected as \(h_{1}=2.5-q_{1}\), \(h_{2}=q_{1}+2.5\), \(h_{3}=2-q_{2}\), and \(h_{4}=q_{2}+1\), which aim to ensure \(-2.5\leq q_{1}\leq 2.5\) and \(-1\leq q_{2}\leq 2\); the control parameters are selected as \(\beta=2\), \(\gamma=10\), \(\lambda=16\), \(\epsilon=\epsilon_{1}=\epsilon_{2}=0.01\), \(L=0.3\), \(\gamma_{\theta}=1\), and \(k_{1}=0.1\); the initial conditions are \(q_{1}(0)=q_{2}(0)=1\) and \(\dot{q}_{1}(0)=\dot{q}_{2}(0)=0\); the measurement uncertainty and disturbance are selected as \(\xi=[0.2\sin(2t)\ 0.2\sin(2t)]^{\top}\) and \(\tau_{d}=10\sin(t)\), from which one can see that Assumptions 1 and 2 are satisfied. It is easy to check that Conditions (i) and (ii) of Theorem 2 are fulfilled with the given parameters and CBFs. The simulation results are presented in Fig. 2. From the simulation results one can see that the safety of \(\mathcal{C}_{q}\) is guaranteed as the trajectories of \(q_{1}\) and \(q_{2}\) always stay inside the safe region whose boundaries are represented by the dashed red lines, and the reference trajectory is well-tracked within the safe set. Moreover, from Fig. 2(c) one can observe that \(\|e_{q}(t)\|<L\) is satisfied for any \(t\geq 0\), which indicates that the adaptive BLF-based controller proposed in Theorem 1 is effective.
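For reference, the manipulator model used in this section can be transcribed directly into code. The snippet below is our transcription of the matrices above (with \(\omega=\dot{q}\) and an assumed gravitational acceleration of 9.81 m/s\(^{2}\)); it is a sketch for reproducing the simulation setup, not the authors' code.

```python
import numpy as np

g0, m1, m2, l = 9.81, 1.0, 1.0, 1.0   # g0 assumed; masses/length from the text

def M_mat(q):
    c2 = np.cos(q[1])
    return np.array([[m1*l**2/3 + 4*m2*l**2/3 + m2*l**2*c2,
                      m2*l**2/3 + m2*l**2/2*c2],
                     [m2*l**2/3 + m2*l**2/2*c2, m2*l**2/3]])

def C_mat(q, w):                       # w = q_dot
    s2 = np.sin(q[1])
    return np.array([[-m2*l**2/2*w[1]*s2, -m2*l**2/2*(w[0]+w[1])*s2],
                     [ m2*l**2/2*w[0]*s2, 0.0]])

def G_vec(q):
    return np.array([m1*g0*l/2*np.cos(q[0])
                     + m2*g0*l/2*np.cos(q[0]+q[1]) + m2*g0*l*np.cos(q[0]),
                     m2*g0*l/2*np.cos(q[0]+q[1])])
```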
Fig. 2: Simulation results of the joint space safe control. From (a) and (b) it can be seen that the proposed controller can ensure safety of \(\mathcal{C}_{q}\) as the trajectories of \(q_{1}\) and \(q_{2}\) never cross the boundary of the safe region represented by the dashed red lines, with good tracking performance inside the safe region. Moreover, from (c) one can conclude that the adaptive BLF-based controller developed in Theorem 1 is effective since the constraint on \(e_{q}\) is not violated.

### _Task Space Safe Control_

In this subsection, simulation results for task space safe control are presented. The forward kinematics can be expressed as \[\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}l_{1}\cos(q_{1})+l_{2}\cos(q_{1}+q_{2})\\ l_{1}\sin(q_{1})+l_{2}\sin(q_{1}+q_{2})\end{bmatrix},\] and the Jacobian is \[J(q)=\begin{bmatrix}-l_{1}\sin(q_{1})-l_{2}\sin(q_{1}+q_{2})&-l_{2}\sin(q_{1}+q_{2})\\ l_{1}\cos(q_{1})+l_{2}\cos(q_{1}+q_{2})&l_{2}\cos(q_{1}+q_{2})\end{bmatrix}.\] Note that the measurement uncertainties and disturbance are the same as those in Section V-A, such that Assumptions 1 and 2 are satisfied. To demonstrate the effectiveness of the proposed method, three cases are considered.

* Case 1: The CBF is \(h=x^{2}+y^{2}-0.25\); the initial conditions are \(x(0)=1.59\), \(y(0)=0.11\); the reference trajectories are \(x_{d}(t)=1.5-0.3t\), \(y_{d}(t)=0\); and the control parameters are chosen as \(\epsilon=\epsilon_{1}=\epsilon_{2}=0.01\), \(L=0.05\), \(\gamma_{\theta}=1\), \(\beta=2\), \(\lambda=100\), \(l_{1}=l_{2}=20\), \(\gamma=1000\), and \(k_{1}=3\).
* Case 2: The CBF is \(h=1+x-y^{2}\); the initial conditions are \(x(0)=1.8\), \(y(0)=0\); the reference trajectories are \(x_{d}(t)=1.5\cos(t)\), \(y_{d}(t)=1.5\sin(t)\); and the control parameters are the same as those in Case 1 except for \(\gamma=300\).
* Case 3: The CBF is \(h=1+x+y\); the initial conditions are \(x(0)=1.8\), \(y(0)=0\); the reference trajectories are \(x_{d}(t)=1.5\cos(t)\), \(y_{d}(t)=1.5\sin(t)\); and the control parameters are the same as those in Case 1 except for \(l_{1}=l_{2}=40\), \(\lambda=100\), and \(\gamma=500\).

The simulation results are presented in Fig. 3, from which one can see that in all three cases the safety of \(\mathcal{C}_{p}\) is ensured by the proposed controller as the trajectories of \(x\) and \(y\) always stay inside the safe region whose boundary is represented by the dashed red lines, and the tracking performance inside the safe region is satisfactory.

## VI Conclusion

In this paper, a novel proxy CBF-BLF-based control design approach is proposed for EL systems with limited information by decomposing an EL system into the proxy subsystem and the virtual tracking subsystem. A BLF-based controller is designed for the virtual tracking subsystem to ensure the boundedness of the safe velocity tracking error. Based on that, a CBF-based controller is designed for the proxy subsystem to ensure safety in the joint space or task space. Simulation results are given to verify the effectiveness of the proposed method. Future work includes conducting experimental studies and generalizing the results to ensure safety and stability simultaneously for EL systems.
2310.00282
High gas pressure and high-temperature synthesis (HP-HTS) technique and its impact on iron-based superconductors
The high-pressure growth technique generally plays an important role in the improvement of the sample quality and the enhancement of various physical and magnetic properties of materials. The high gas pressure technique provides a large sample space (10-15 cm) to grow various kinds of materials. In this paper, we introduce the high gas pressure and high-temperature synthesis (HP-HTS) technique that is present at our institute and is applied to the growth process of different kinds of superconducting materials, particularly iron-based superconductors. More details and the working principle of this HP-HTS technique are discussed. We have also demonstrated the current results based on the iron-based superconductors by using this unique HP-HTS technique. These results demonstrate the enhancement of the superconducting properties with the improved sample quality compared to the conventional synthesis process at ambient pressure.
Mohammad Azam, Manasa Manasa, Andrzej Morawski, Tomasz Cetner, Shiv J. Singh
2023-09-30T07:15:34Z
http://arxiv.org/abs/2310.00282v1
High gas pressure and high-temperature synthesis (HP-HTS) technique and its impact on iron-based superconductors ###### Abstract The high-pressure growth technique generally plays an important role in the improvement of the sample quality and the enhancement of various physical and magnetic properties of materials. The high gas pressure technique provides a large sample space (10-15 cm) to grow various kinds of materials. In this paper, we introduce the high gas pressure and high-temperature synthesis (HP-HTS) technique that is present at our institute and is applied to the growth process of different kinds of superconducting materials, particularly iron-based superconductors. More details and the working principle of this HP-HTS technique are discussed. We have also demonstrated the current results based on the iron-based superconductors by using this unique HP-HTS technique. These results demonstrate the enhancement of the superconducting properties with the improved sample quality compared to the conventional synthesis process at ambient pressure. Keywords: high gas pressure, superconducting properties, pressure synthesis, iron-based superconductor, critical transition temperature

## 1 Introduction

High-pressure synthesis of materials is an important area for physics, chemistry, and materials science [1; 2; 3]. This method reduces the chemical reaction time, controls the evaporation of lighter elements [4], and can also be used to grow new materials that cannot be prepared at ambient pressure, such as superhard materials like diamond and cubic boron nitride [5]. Generally, the sample space is very tiny in high-pressure growth processes, and as a result only small samples are obtained [6]. This issue has been clearly observed in the growth process of iron-based superconductors (FBS) [7]. To overcome this problem, we need to find a good high-pressure growth method that can provide large crystals and large amounts of bulk samples with high superconducting properties. However, the question is: which technique is more suitable to resolve these problems? [8; 9]. There are two kinds of pressure techniques:

_a) Solid-medium pressure techniques_ such as Hot Isostatic Pressing (HIP), the Diamond Anvil Cell (DAC) technique, the Multi-Anvil High-Pressure Apparatus, and cold synthesis [10]. The properties of these techniques are as follows: (i) they have a limited sample space of up to 0.5 cm\({}^{3}\); (ii) the pressure and temperature distributions are not homogeneous, which leads to poorly defined preparation conditions; (iii) because the pressure medium touches the sample, there is a high risk of introducing impurity phases; (iv) the temperature gradient is difficult to control in a multizone furnace; (v) the sample size is always small owing to the limited sample space.

_b) Gas pressure technique_: its properties are as follows: (i) it has several cm\({}^{3}\) of sample space; (ii) homogeneous temperature and stable pressure are easy to maintain over practically long growth times; (iii) the spatial temperature profile can be controlled, and it is easy to control the partial gas pressure; (iv) crystal growth is a comparatively easy process; (v) there is no possibility of introducing impurities from the pressure medium; (vi) the sample chamber can host an internal three-zone furnace with pressures up to 2-3 GPa and temperatures of \(\sim\)2000\({}^{\circ}\)C.
In solid-medium pressure growth processes, interactions between the material and instrument parts can introduce contamination and other issues due to the applied physical force, potentially leading to morphology problems like cracks and pores [1]. Therefore, it is crucial to explore alternative techniques to address these challenges. These comparative considerations suggest that the gas pressure technique can be a unique and attractive way to grow high-quality samples in large amounts and to improve the superconducting properties of FBS [11; 12]. These reasons motivated us to use high-pressure techniques for the growth of iron-based superconductors. In the case of FBS, a few studies have been reported based on solid-pressure-medium techniques, in which the sample size was enhanced from 10 micrometers to 300 micrometers with improved superconducting properties [6; 7]. These reports suggest that more studies are needed in this direction using different pressure techniques, such as the high-gas-pressure growth method, so that large amounts of samples or large samples with high superconducting properties can also be prepared. In this paper, we introduce the principle and further details of our high gas pressure and high-temperature synthesis (HP-HTS) technique that is available at our institute, 'UNIPRESS'. Also, the current results on iron-based superconductors obtained using this HP-HTS method will be presented and discussed.

## 2 Principle of high-pressure technique

The HP-HTS system has a reciprocating compressor based on oil-gas pistons. The main working principle of the reciprocating compressor is based on Boyle's law [13], which states that, at constant temperature, the absolute pressure exerted by a given mass of an ideal gas is inversely proportional to the occupied volume. The piston compresses the gas and increases the gas pressure inside the chamber. In our HP-HTS system, three piston cavities (piston chambers) are connected in series to achieve the required high pressure. The systematic block diagram of the working principle of these pistons is shown below in various stages:

_(a) First-stage piston (\(S_{1}\))_: First, the gas bottle is opened, and the gas enters through the intake valve (\(V_{i}\)) into all piston cavities and the high-pressure chamber. Figure 1(i) demonstrates the gas refilling process in the first piston cavity (\(S_{1}\)) and chamber (C). During this process, gas is filled into the upper part of \(S_{1}\) by moving the piston down to the oil tank. Basically, the piston moves down under the gas bottle pressure of ~200 bar. After 4-5 minutes, we close the intake valve (\(V_{i}\)) and move the piston slowly upward to pressurize the gas inside the chamber. Figure 1(ii) depicts the upward movement of the piston in the \(S_{1}\) cavity. Through this first stage, we can reach a maximum pressure of 800 bar, _i.e._, the gas pressure can increase from 200 bar to 800 bar.

Figure 1: The block diagram of the first-stage gas compression process: **(i)** gas flow into the \(S_{1}\) cavity and chamber **(C)**; **(ii)** \(S_{1}\) piston compressing the gas into chamber **(C)** up to 800 bar.

_(b) Second-stage piston (\(S_{2}\))_: This stage is connected to the first-stage piston (\(S_{1}\)) and has a pressure of around 800 bar. There is again an oil-gas piston that moves down under the first-stage pressure of 800 bar, which is shown in Figure 2(i).
Now, the intake valve (\(V_{i}\)) between \(S_{1}\) and \(S_{2}\) is closed, and the piston of \(S_{2}\) moves upward slowly, which increases the gas pressure inside the chamber up to 4000 bar. This process is similar to the first stage and is shown in Figure 2(ii).

Figure 2: The block diagram of the second-stage gas compression process: **(i)** gas flow into the \(S_{2}\) cavity and chamber (C); **(ii)** \(S_{2}\) piston compressing the gas into chamber (C) up to 4000 bar.

_(c) Third-stage piston (\(S_{3}\))_: This stage piston is bigger than the first- and second-stage pistons. Due to the second-stage pressure, this piston is moved to the down position, which is depicted in Figure 3(i). Once all the gas pressure moves into the \(S_{3}\) cavity and chamber, the intake valve (\(V_{i}\)) between \(S_{2}\) and \(S_{3}\) is closed. In the next step, the \(S_{3}\) piston moves slowly in the upward direction, driven by the oil-based pump, and enhances the pressure inside the chamber (C). The maximum pressure achieved in \(S_{3}\) can reach up to 1.8 GPa (18000 bar). Figure 3(ii) displays the block diagram of the third-stage piston movement to achieve the highest pressure.

Figure 3: The block diagram of the third-stage gas compression process: **(i)** gas flow into the \(S_{3}\) cavity and chamber (C); **(ii)** \(S_{3}\) piston compressing the gas into chamber (C) up to 1.8 GPa (18000 bar).
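As a rough illustration of the staged compression described above, the arithmetic below assumes an idealized isothermal ideal gas with no dead volume (Boyle's law, \(p_{1}V_{1}=p_{2}V_{2}\)); real stages deviate from this, so the numbers are indicative only:

```python
# Idealized Boyle's-law estimate of the volume reduction each stage
# must provide, using the inlet/outlet pressures quoted in Section 2.
stages = [(200, 800), (800, 4000), (4000, 18000)]  # (inlet, outlet) in bar
for i, (p_in, p_out) in enumerate(stages, start=1):
    ratio = p_out / p_in  # V_in / V_out under isothermal compression
    print(f"Stage {i}: {p_in} -> {p_out} bar, volume reduction x{ratio:.1f}")
```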
## 3 HP-HTS technique at UNIPRESS

The HP-HTS facility designed at UNIPRESS is capable of producing pressures up to 1.8 GPa and temperatures up to 1700\({}^{\circ}\)C. One can use a one- or multi-zone furnace to create different temperatures, especially for single crystal growth processes [9; 8; 11; 14]. Our current system is based on a three-stage oil-gas compressor to create high pressure, as discussed above. More advanced designs can produce pressures up to 3 GPa, which requires three pistons and a small diameter of the pressure chamber (C). Our HP-HTS technique is vital for synthesizing high-quality materials [9; 8; 11; 14]. With the capability to generate high gas pressure, this system has multi-zone furnaces, real-time temperature measurement, and adaptability for various sample types and positions inside the pressure chamber. It is a versatile tool for growing single- and polycrystalline samples across diverse material categories [8]. The block diagram of our HP-HTS system is depicted in Figure 4. The system has four main components: a high-pressure chamber, a compressor, a sample holder with furnace, and a controller unit and monitor, which work cohesively to ensure precise and controlled pressurization, leak-free operation, and real-time temperature and pressure monitoring. These features empower high-pressure synthesis and sintering research, providing a platform for advancements in superconductivity research and applications. The HP-HTS system, which is currently used for the growth of FBS, is based on three pistons, as depicted in Figure 4. These pistons are connected to each other and also to the pressure chamber. The first stage (\(S_{1}\)) supplies pressure to the second stage (\(S_{2}\)), the third stage (\(S_{3}\)), and the pressure chamber (C). In the next step, the first stage is disconnected, and the second-stage piston supplies pressure to the third stage (\(S_{3}\)) and the pressure chamber. Finally, the third stage (\(S_{3}\)) starts to work and generates the maximum pressure inside the chamber, up to 1.8 GPa. More details about each part are given below:

Figure 4: The block diagram of the HP-HTS technique, present at our institute, consisting of a three-stage oil-based compressor, high-pressure chamber (C), and a control unit and monitor.

_i) High-Pressure chamber._ The high-pressure chamber is a central component, depicted in Figure 5, and is constructed from robust steel to withstand extremely high pressures. Its intricate design ensures that this chamber remains impervious to leaks even under high-pressure conditions. To regulate temperature efficiently, the outer jacket of the chamber is integrated with a water cooling system. One side of the chamber is precisely connected to a capillary using a large-diameter screw, supported with O-rings and additional seals, ensuring a completely leak-free chamber. This capillary tube serves a dual purpose, functioning as both the gas inlet and outlet, and is connected to the compressor and gas bottle. On the opposite side, a sample holder with a high-quality O-ring and metal seal facilitates the insertion of samples into the pressure chamber, as shown in Figure 5. A solder plug is employed to route wires while safely eliminating the risk of leakage. These wires are connected to the control unit and monitor, forming a reliable and secure connection for the entire system. The chamber's adaptability for single-, double-, or triple-zone furnaces is a noteworthy feature, enhancing its utility for growing high-quality single crystals and facilitating a wide range of experiments in materials science.

Figure 5: Schematic diagram of the high-pressure chamber.

_ii) Compressor._ The compressor is an oil-based system with a three-stage piston configuration and a pump, as shown in Figure 6. The piston moves upward to create the gas pressure. Generally, this pump is connected to all the stages and moves the pistons upward to create high pressure. The systematic block diagram is shown in Figure 6, and the maximum pressures of the different stages are listed in Table 1. The first stage (\(S_{1}\)) can generate pressures up to 800 bar. The second piston (\(S_{2}\)) can achieve pressures of up to 4000 bar, and the final stage (\(S_{3}\)) can reach an impressive pressure of up to 1.8 GPa (18000 bar). This three-stage piston setup allows for the precise control and adjustment of pressure from the low to the very high range, which is one of the notable strengths of our compressor. The presence of 12 key valves, shown in Figure 6, is the most intricate and critical aspect of the compressor; they play a pivotal role during the gas compression process and ensure precise and controlled pressurization. The basic principle of the pump is already explained above.

\begin{table} \begin{tabular}{c c} \hline \hline **Stage of Piston** & **Maximum Pressure** \\ \hline **First stage (S\({}_{1}\))** & 800 bar / 80 MPa \\ **Second stage (S\({}_{2}\))** & 4000 bar / 400 MPa \\ **Third stage (S\({}_{3}\))** & 18000 bar / 1.8 GPa \\ \hline \hline \end{tabular} \end{table} Table 1: List of the maximum pressures created by the different stages of the compressor.

Figure 6: Schematic diagram of the compressor based on the oil pump with three-stage pistons.

_iii) Sample holder with furnace:_ This component comprises the sample container, furnace, thermocouples, and pressure gauge. A systematic block diagram is shown in Figure 7. The sample container, designed either in a boat or cylindrical form with a secure cap, acts as the vessel for keeping the samples inside the high-pressure chamber. The furnace is equipped with a Kanthal
(FeCrAl alloy) heater wire capable of reaching temperatures up to 1300\({}^{\circ}\)C. When higher temperatures are needed, molybdenum (Mo) and tungsten (W) wires can be utilized, with which the maximum temperature inside this high-pressure chamber can reach approximately 1700\({}^{\circ}\)C [3], as listed in Table 2. Furthermore, either a single- or multi-zone furnace can be used, as shown in Figure 7, to create the high temperature. These thermocouples and heaters are interfaced to the computer through software.

\begin{table} \begin{tabular}{l c} \hline \hline **Wire material** & **Description** \\ \hline **Tungsten (W)** & High-resistivity metal, up to 2600 \({}^{\circ}\)C \\ **Molybdenum (Mo)** & High-resistivity metal, up to 2200 \({}^{\circ}\)C \\ **Kanthal (FeCrAl alloy)** & High resistivity, up to 1400 \({}^{\circ}\)C \\ **Nifethal (NiFe alloy)** & Low resistivity, up to 600 \({}^{\circ}\)C \\ \hline \hline \end{tabular} \end{table} Table 2: Heater wires for the furnace and their maximum operating temperatures.

For accurate temperature monitoring, three thermocouples are employed, in addition to a reference thermocouple that provides precise temperature readings across all zones. We can use various kinds of thermocouples, as listed in Table 3. These are typically K-type thermocouples made of Chromel and Nickel-Aluminium, suitable for temperatures up to 1100\({}^{\circ}\)C. However, various thermocouples can be selected based on specific requirements [15; 3]. The actual pressure inside the chamber is regularly monitored using pressure gauges. These vital components, including the thermocouples, heaters, and gauges, are seamlessly connected to the controller unit and monitor, ensuring precise control and monitoring of the high-pressure chamber's conditions. Modifying this chamber according to specific material growth conditions adds to its versatility and suitability for diverse research and experimentation needs in materials science.

\begin{table} \begin{tabular}{c c c} \hline \hline **Thermocouples** & **Temperature Range (\({}^{\circ}\)C)** & **Composition** \\ \hline **B-Type** & 200 to 1700 & (+) Platinum-6\% Rhodium; (–) Platinum-30\% Rhodium \\ **K-Type** & 0 to 1100 & (+) Nickel-Chromium; (–) Nickel-Aluminum \\ **T-Type** & -200 to 370 & (+) Copper; (–) Constantan \\ \hline \hline \end{tabular} \end{table} Table 3: Types of thermocouples that can be used for the HP-HTS technique.

Figure 7: Block diagram of the sample holder with furnace.

_iv) Controller unit and monitor:_ The final component of HP-HTS comprises a temperature controller and a pressure gauge, both of which can be connected to a computer for enhanced functionality. This capability allows for real-time monitoring of temperature and pressure within the system. The temperature controller facilitates monitoring of all four thermocouples, each positioned at a different location within the system. One of these thermocouples also serves as a reference temperature point, enabling precise temperature measurements near the sample. This feature provides valuable insights into the actual temperature conditions during experiments. Through the interface with the computer, the pressure gauge provides real-time data on the pressure inside the chamber. This allows for continuous monitoring and control, ensuring that pressure conditions are maintained as required for specific experiments.

## 4 Current results using this HP-HTS facility

Currently, we are applying this HP-HTS technique to FBS [16; 17], which have the highest \(T_{c}\) of 58 K, a high upper critical field (\(H_{c2}\)) of 100 T, and a high critical current density (\(J_{c}\)) of 10\({}^{5}\)-10\({}^{6}\) A/cm\({}^{2}\) at 5 K [18]. These properties make them a strong contender for practical applications [19].
The many compounds belonging to this class of high-\(T_{c}\) superconductors can be categorized into six families, in which 1111 (\(RE\)FeAs(O,F), \(RE\) = La, Ce, Pr, Gd) as a doped family and 1144 (\(AeA\)Fe\({}_{4}\)As\({}_{4}\); \(Ae\) = Ca; \(A\) = K) as a stoichiometric family provide the highest \(T_{c}\) of 58 K [20] and 36 K [21], respectively, for FBS. Hence, our current focus is the growth process of the 1111 and 1144 families using the HP-HTS technique [14]. Before using the high-pressure technique for these complicated families, the HP-HTS technique was applied to the simplest FBS family, _i.e._, the 11 (FeSe) family. Tellurium (Te) doping at the Se site, _i.e._, Fe(Se,Te), provides the highest transition temperature \(T_{c}\) of 15 K for FeSe\({}_{0.5}\)Te\({}_{0.5}\) [22]. First, we synthesized a high-quality Fe(Se,Te) sample by the solid-state reaction method at ambient pressure (0 GPa). The selected composition FeSe\({}_{0.5}\)Te\({}_{0.5}\) is prepared at 600\({}^{\circ}\)C for 21 h in the first step; in the second step, the sample is ground and heated again at 600\({}^{\circ}\)C for 4 hours, as discussed in more detail in our previous studies [22; 11]. This FeSe\({}_{0.5}\)Te\({}_{0.5}\) sample shows a transition temperature of up to 15 K for the conventional synthesis method at ambient pressure, in agreement with the previous report [22] and as depicted in Figure 8(a). To understand the effect of high-pressure growth, Fe(Se,Te) bulks were prepared over a very broad pressure range from 0 GPa to 1 GPa at 600\({}^{\circ}\)C for 1 h and 11 hours, as reported in our previous study [11]. These samples were also prepared by _in-situ_ and _ex-situ_ processes, in which the samples were sealed in a Ta-tube or placed in an open Ta-tube. These various conditions were used to optimize the growth conditions so that high-quality samples with high superconducting properties could be produced. Interestingly, the optimized conditions were obtained as 600\({}^{\circ}\)C, 1 h, and 500 MPa, where the samples show the highest superconducting properties [11]. The comparative graph is shown in Figure 8(a). Our studies also confirm that grain connectivity can be improved when the samples are sealed into a Ta-tube under an argon gas atmosphere using an ARC melter, whereas the samples placed in an open Ta-tube have a pure superconducting phase but poor grain connectivity due to the high-pressure gas passing through the micro- or nanopores. Fe(Se,Te) bulks prepared under the optimized conditions are depicted in Figure 8(a) and show higher superconducting properties (\(T_{c}=17\) K) compared to the samples prepared by the ambient-pressure method (\(T_{c}=15\) K). These optimization studies confirm that the high gas pressure technique can be an effective method to enhance the superconducting properties, and that the synthesis process can be completed in a very short reaction time under the optimized growth pressure [11].
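A common way to quantify the transitions compared in Figure 8 is to read \(T_{c}\) off the \(\rho(T)\) curve with fixed onset/offset criteria. The sketch below is our illustration (not the analysis code used for this work); it assumes \(\rho(T)\) is sampled on a grid and applies the 90%/10% criterion relative to a normal-state reference:

```python
import numpy as np

def tc_from_resistivity(T, rho, T_norm=20.0):
    """Estimate Tc(onset)/Tc(offset) from a rho(T) curve using the
    90%/10% criterion relative to the normal-state resistivity at T_norm.
    Assumes rho(T) is (approximately) monotonic across the transition."""
    rho_n = rho[np.argmin(np.abs(T - T_norm))]   # normal-state reference
    order = np.argsort(rho)                      # sort so np.interp is valid
    tc_onset = np.interp(0.9 * rho_n, rho[order], T[order])
    tc_offset = np.interp(0.1 * rho_n, rho[order], T[order])
    return tc_onset, tc_offset
```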
After gaining good experience with this 11 family of FBS, we started to work with CaKFe\({}_{4}\)As\({}_{4}\) (1144), which is a stoichiometric compound of FBS [14; 23]. On the basis of the optimization of the 11 (FeSe\({}_{0.5}\)Te\({}_{0.5}\)) family, we prepared 1144 samples under the optimized conditions (500 MPa, 1 hour) at 500\({}^{\circ}\)C. Interestingly, CaKFe\({}_{4}\)As\({}_{4}\) prepared by HP-HTS shows a superconducting transition enhanced by ~2 K, with improved sample quality compared to the 1144 bulks prepared by the conventional synthesis method at ambient pressure [14]. The resistivity measurements of these samples are depicted in Figure 8(b), which clearly shows the enhanced \(T_{c}\) value with a sharp transition. This suggests that the 1144 bulks prepared by HP-HTS are homogeneous, with well-connected grain boundaries. In a similar way, we are also applying this HP-HTS technique to other members of the 1144 and 1111 families.

Figure 8: The temperature dependence of the resistivity (\(\rho\)) for **(a)** FeSe\({}_{0.5}\)Te\({}_{0.5}\) [11] and **(b)** CaKFe\({}_{4}\)As\({}_{4}\) (1144) bulk [14] prepared by the HP-HTS technique present at our institute.

## 5 Conclusions

The high gas pressure technique can be a unique way to improve sample quality, sample size, and material properties. The HP-HTS facility available at our institute provides a large sample space and high growth temperatures and pressures. We can also create different gas atmospheres according to our requirements. This HP-HTS technique is currently being applied to high-temperature iron-based superconductors. Interestingly, the observed results demonstrate the enhancement of the superconducting properties and also the improved sample quality. Our studies suggest that high-pressure synthesis works well for high-\(T_{c}\) materials and can be useful for other kinds of materials to improve their properties.

Conceptualization, Supervision and Formal analysis, S.J.S.; methodology, S.J.S., M.A. and M.M.; data collection, M.M. and M.A.; high-pressure experiments, M.A., M.M., A.M. and T.C.; investigation and writing\(-\)original draft preparation, S.J.S., M.A., M.M.; writing\(-\)review and editing, S.J.S., M.A. and M.M.; comments and suggestions, M.A., M.M., A.M., S.J.S. All authors have read and agreed to the published version of the manuscript. This research was funded by the National Science Centre (NCN), Poland, grant number "2021/42/E/ST5/00262" (SONATA-BIS 11). S.J.S. acknowledges financial support from the National Science Centre (NCN), Poland, through research project number 2021/42/E/ST5/00262.
2302.00093
Large Language Models Can Be Easily Distracted by Irrelevant Context
Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the distractibility of large language models, i.e., how the model problem-solving accuracy can be influenced by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou
2023-01-31T20:48:57Z
http://arxiv.org/abs/2302.00093v3
# Large Language Models Can Be Easily Distracted by Irrelevant Context

###### Abstract

Large language models have achieved impressive performance on various natural language processing tasks. However, so far they have been evaluated primarily on benchmarks where all information in the input context is relevant for solving the task. In this work, we investigate the _distractibility_ of large language models, i.e., how the model problem-solving accuracy can be influenced by irrelevant context. In particular, we introduce Grade-School Math with Irrelevant Context (GSM-IC), an arithmetic reasoning dataset with irrelevant information in the problem description. We use this benchmark to measure the distractibility of cutting-edge prompting techniques for large language models, and find that the model performance is dramatically decreased when irrelevant information is included. We also identify several approaches for mitigating this deficiency, such as decoding with self-consistency and adding to the prompt an instruction that tells the language model to ignore the irrelevant information.1 Footnote 1: Dataset is available at [https://github.com/google-research-datasets/GSM-IC](https://github.com/google-research-datasets/GSM-IC).

## 1 Introduction

Prompting large language models works decently well in a variety of domains (Brown et al., 2020; Chowdhery et al., 2022, _inter alia_). However, for most of these evaluation benchmarks, all the information provided in the problem description is relevant to the problem solution, just as in exam problems. This is different from real-world situations, where problems usually come with several pieces of contextually related information, which may or may not be relevant to the problem we want to solve. We have to identify which information is actually necessary while solving those problems. Studies in psychology have shown that irrelevant information may significantly decrease the problem-solving accuracy of children and even adults (Hoyer et al., 1979; Pasolunghi et al., 1999; Marzocchi et al., 2002, _inter alia_). In this work, we study the _distractibility_ of large language models for various prompting techniques; i.e., how is large language model prompting affected by irrelevant context, and what strategies can be used to improve performance? To measure distractibility, we construct the GSM-IC dataset, a grade-school math problem dataset derived from GSM8K (Cobbe et al., 2021), and introduce two different metrics. In contrast to prior work that derives benchmark variations by substituting sentences of the base problems with variations (Patel et al., 2021; Kumar et al., 2021, _inter alia_), we keep the base problem description and add to it one irrelevant sentence, while making sure that it does not affect the solution of the problem (Figure 1). We use code-davinci-002 in the GPT3 model family to evaluate state-of-the-art prompting techniques on GSM-IC,2 including chain-of-thought prompting (CoT; Wei et al., 2022), zero-shot chain-of-thought prompting (0-CoT; Kojima et al., 2022), least-to-most prompting (LtM; Zhou et al., 2022), and prompting with programs (Program; Chowdhery et al., 2022).

Figure 1: An example problem from GSM-IC. An irrelevant sentence (_italic and highlighted_) that does not affect the standard answer is added immediately before the question.
We then investigate several approaches to mitigate this weakness, including self-consistency (Wang et al., 2022) and adding irrelevant information to the exemplars in the prompt. In addition to demonstrating how to handle irrelevant information via exemplars, we also investigate the usage of task-specific instructions (Wei et al., 2021; Sanh et al., 2021; Ouyang et al., 2022; Suzgun et al., 2022; Chung et al., 2022), where we prepend an instruction sentence _"feel free to ignore irrelevant information in the problem description"_ to the exemplars. We summarize our key findings below: 1. All investigated prompting techniques are sensitive to irrelevant information in the problem description. In particular, among the original problems that can be solved by baseline prompts with greedy decoding, no more than \(18\%\) of them can be consistently solved for all types of irrelevant information, showing that the large language model is easily distracted and produces inconsistent predictions when adding a small amount of irrelevant information to the problem description. 2. Self-consistency improves the performance of all prompting techniques on GSM-IC. In particular, the recall rate of the correct answer for GSM-IC is as high as 99.7% with 20 samples per problem, i.e., at least one of the 20 solutions result in the correct final answer, which means that using multiple samples allows the model to almost always retrieve the correct answer. 3. Adding irrelevant information to the exemplars shown in the prompt consistently boosts the performance, and the same holds for adding an instruction to ignore irrelevant context. This suggests that language models are--to some extent--able to learn to ignore irrelevant information by following examples or instructions. 4. We identify different factors of the irrelevant information that affect the model's sensitivity to irrelevant context. Our breakdown analysis shows that varying the numbers in the irrelevant information does not notably change the model performance, while the degree of lexical overlap with the original problem description matters. Filtering out irrelevant information is essential for handling real-world tasks. Our evaluation indicates that despite the strong performance on challenging reasoning problems, state-of-the-art language models still have fundamental weaknesses in context understanding and identifying the relevant information from the input. Our findings suggest that in order to gain a more holistic understanding of the reasoning capability of language models, future work should also consider the model sensitivity to irrelevant context, in addition to solving more challenging problems. ## 2 Related Work **Few-shot prompting.** Few-shot prompting (Brown et al., 2020; Chowdhery et al., 2022, _inter alia_) has been significantly boosted with various techniques, including generating intermediate steps (Ling et al., 2017; Cobbe et al., 2021; Nye et al., 2021; Wei et al., 2022; Suzgun et al., 2022; Shi et al., 2022b, _inter alia_), problem decomposition (Zhou et al., 2022; Drozdov et al., 2022; Dohan et al., 2022; Khot et al., 2022; Press et al., 2022, _inter alia_), generating programs (Austin et al., 2021; Chowdhery et al., 2022; Gao et al., 2022; Chen et al., 2022, _inter alia_), marginalizing intermediate steps that share the same result (Wang et al., 2022; Shi et al., 2022), and ensemble (Wang et al., 2022; Drozdov et al., 2022). In addition, Kojima et al. 
In addition, Kojima et al. (2022) demonstrate that an appropriate hint in the prompt also leads to decent performance, even without any exemplar. In this work, we examine these cutting-edge prompting techniques (Wei et al., 2022; Zhou et al., 2022; Kojima et al., 2022; Wang et al., 2022) on our benchmark, and demonstrate that they are sensitive to irrelevant input context.

**Natural language benchmarks with input perturbations.** There has been a long line of work on adding input perturbations for natural language tasks, including model-agnostic input transformations (Liang et al., 2022; Ravichander et al., 2022, _inter alia_) and adversarial example generation against individual models (Jia and Liang, 2017; Shi et al., 2018; Morris et al., 2020; Wang et al., 2021). In particular, prior work has constructed arithmetic reasoning benchmarks through paraphrasing or rewriting sentences in the base problems from clean datasets (Patel et al., 2021; Kumar et al., 2021). Meanwhile, Liang et al. (2022) evaluate various large language models under several metrics, including accuracy, robustness, fairness, etc. Specifically, the input transformations in their robustness evaluation include semantics-preserving and semantics-altering perturbations, such as injecting typos and modifying sentences to change the ground-truth classification labels. In contrast to the above work, where the meaning of problem descriptions may be changed with perturbations, we keep all sentences in the original problem description, and introduce an irrelevant sentence that is ensured not to affect the standard answer.

**Natural language benchmarks with irrelevant input context.** Jia and Liang (2017) have shown that neural question answering systems are largely affected by adversarial distracting sentences, whereas follow-up work (Khashabi et al., 2017; Ni et al., 2019) proposes learning strategies that mitigate the problem. Similar issues have been found for general-purpose pretrained language models, on the tasks of factual reasoning (Kassner and Schutze, 2020; Pandia and Ettinger, 2021; Misra et al., 2022; Li et al., 2022) and syntactic generalization (Chaves and Richter, 2021). In particular, Li et al. (2022) evaluated T5 (Raffel et al., 2020) and PaLM (Chowdhery et al., 2022) with few-shot prompts, and proposed knowledge-aware finetuning, which finetunes the model on problems with counterfactual and irrelevant context and strengthens the model robustness to noisy context. In our evaluation, we show that without training or finetuning, adding irrelevant context into demonstrations in the prompt also mitigates the distractibility of the underlying language model and significantly improves the model performance on our GSM-IC benchmark. There exist some logical reasoning benchmarks that contain irrelevant content in task descriptions (Weston et al., 2015; Sinha et al., 2019; Clark et al., 2021; Han et al., 2022; Tafjord et al., 2020, _inter alia_). However, previous work largely focuses on designing models that require extra training, and prompting alone still hardly achieves the same level of performance as finetuned models for these tasks (Han et al., 2022; Creswell et al., 2022). In our work, we focus on arithmetic reasoning, where prompting techniques have achieved state-of-the-art results, e.g., on GSM8K, while we show that adding a single irrelevant sentence into the problem description significantly degrades the performance.
**Prompting with noisy ground truth.** A line of work studies the model performance with incorrect prompting exemplars, i.e., the example problems are paired with wrong answers (Min et al., 2022; Kim et al., 2022). In addition, prior work has investigated the model sensitivity to other parts of the prompt, such as instruction tuning with misleading and irrelevant instructions (Webson and Pavlick, 2021) and wrong reasoning steps in the examples (Madaan and Yazdanbakhsh, 2022; Wang et al., 2022). In particular, Madaan and Yazdanbakhsh (2022) conclude that the correctness of numbers and equations in chain-of-thought prompts does not play a key role in model performance, but using wrong entities and removing either the equations or the text explanation in the reasoning steps drastically hamper the performance. Different from this line of work, we always include correct answers to example problems in the prompt, and ensure that the irrelevant context added to the problem description does not change the ground-truth answer. We show that the model performance drops significantly when the model is presented with irrelevant context in problem descriptions, and that different distributions of numbers and entities in the irrelevant context also lead to different levels of performance degradation.

## 3 The GSM-IC Dataset

In this section, we introduce the creation process of the GSM-IC dataset (§3.1) and the evaluation metrics (§3.2).

### Dataset Creation

We randomly choose 1,000 problems from the GSM8K training set as a development set. To construct our base dataset, we then choose 100 problems from this development set that can be correctly solved by at least one of the prompting techniques mentioned in this paper,3 that is, our base dataset is an "easy" subset of GSM8K (Table 1). Each base problem requires two to seven reasoning steps to solve.4 Among the 100 base problems, 60 of them can be solved with two reasoning steps. The full dataset statistics can be found in Appendix A.

Footnote 3: We do not generate new examples or perform analysis on the test set to avoid potential tuning-on-test-set issues.

Footnote 4: The number of reasoning steps of a problem is given by the number of sentences in its standard answer (Cobbe et al., 2021).

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & **CoT** & **LtM** & **Program** & **0-CoT** \\
\hline
 & 95.0 & 94.0 & 83.0 & 44.0 \\
+ SC & 96.0 & 99.0 & 91.0 & 76.0 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Accuracy (\(\times 100\)) on the base 100-example dataset.

We then generate the examples of our new dataset by adding to each base problem one sentence containing irrelevant information. We use a template-based method (Figure 2) to generate these sentences, which can be characterized by the following three factors:

* **Topic of the inserted sentence.** We write templates for both in-topic and off-topic sentences. In-topic sentences are closely related to the topic of the original problem, whereas off-topic sentences are about a different topic.
* **Role name overlap.** Most sentence templates contain some role name blanks, which can be filled with names that may or may not overlap with the role names that occur in the problem. For blank fillers that overlap with original role names, we (1) randomly pick a role name A from the original problem description and (2) create the blank fillers with templates such as A's father and A's sister.
* **Range of numbers.** Since we focus on arithmetic reasoning, most sentence templates also contain a number blank. We can choose to fill in the number blank with a number of similar or different magnitude to those in the original problem description. Concretely, for a number \(a\), if there exists a number \(b\) in the original problem description or solution such that \(\frac{1}{10}\leq\frac{a}{b}\leq 10\), we consider \(a\) an in-range number, and otherwise an out-of-range number. Since the standard answers to GSM8K problems are all positive integers, we only consider positive integers as the number blank fillers.

Figure 2: Illustration of the considered factors when creating the GSM-IC dataset. Best viewed in color.

We manually verify that (1) all the generated sentences are acceptable in English and that (2) adding them does not affect the standard solution of the base problem. Because the above factors are orthogonal, we generate for each base example a set of derived examples with different factor combinations. The full GSM-IC benchmark consists of 58,052 examples. More details about the dataset creation process can be found in Appendix A.

### Evaluation Metrics

For a problem \(p\), we denote its standard solution by \(s(p)\), and the solution of method \(\mathcal{M}\) by \(\mathcal{M}(p)\). To evaluate the distractibility of \(\mathcal{M}\), we consider the following metrics:

* **Micro accuracy** \(Acc_{micro}(\mathcal{M};\mathcal{P})\) is the average accuracy of method \(\mathcal{M}\) over all the test problems \(\mathcal{P}\): \[Acc_{micro}(\mathcal{M};\mathcal{P})=\frac{\sum_{p\in\mathcal{P}}\mathbbm{1}\left[\mathcal{M}(p)=s(p)\right]}{|\mathcal{P}|}\] This means that the micro accuracy weighs all the individual test problems equally.
* **Macro accuracy** \(Acc_{macro}(\mathcal{M};\mathcal{B})\) is the average accuracy of method \(\mathcal{M}\) over classes of test problems, where each class \(\mathcal{P}(b)\) consists of the set of test examples derived from the base example \(b\in\mathcal{B}\). We define \(\mathcal{M}\)'s prediction for a class \(\mathcal{P}(b)\) to be correct if and only if \(\mathcal{M}\)'s predictions for all problems in this class are correct: \[Acc_{macro}(\mathcal{M};\mathcal{B})=\frac{\sum_{b\in\mathcal{B}}\mathbbm{1}\left[\bigwedge_{p\in\mathcal{P}(b)}\left[\mathcal{M}(p)=s(p)\right]\right]}{|\mathcal{B}|}\] This means that the macro accuracy is the fraction of base problems that are consistently solved no matter what irrelevant sentence is added.
* **Normalized accuracy** measures how a method is affected by the distractors, taking into account its accuracy on the base problems. For a micro or macro accuracy \(a_{\mathcal{M}}\) achieved by method \(\mathcal{M}\), we calculate its corresponding normalized accuracy as \[norm(a_{\mathcal{M}};\mathcal{M})=\frac{a_{\mathcal{M}}}{n_{\mathcal{M}}},\] where \(n_{\mathcal{M}}\) denotes the base problem accuracy of method \(\mathcal{M}\) (Table 1).

## 4 Investigated Solutions

In the following section, we review the investigated prompting techniques (§4.1), present the formats of our prompts (§4.2), and introduce instructed prompting (§4.3).

### Base Techniques

**Chain-of-thought prompting (CoT; Wei et al., 2022)** is a prompting technique that guides language models to solve a problem in a step-by-step manner. By presenting exemplars that solve the corresponding problems with intermediate reasoning steps in the prompts, CoT significantly improves the reasoning performance over direct answer prediction without such intermediate reasoning steps.
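As a concrete illustration of the prompt structure described above, the following is a minimal Python sketch of how a one-exemplar CoT prompt could be assembled; it is our own illustration rather than the authors' code, the exemplar text is a hypothetical placeholder (not a GSM-IC item), and the optional instruction sentence anticipates the instructed prompting introduced in §4.3.

```python
# Minimal sketch of one-exemplar chain-of-thought (CoT) prompt assembly.
# The exemplar below is a hypothetical placeholder, not taken from GSM-IC.

INSTRUCTION = ("Solve grade school math problems. "
               "Feel free to ignore irrelevant information given in the questions.")

EXEMPLAR = (
    "Q: Liam has 3 boxes with 4 pens each. How many pens does he have?\n"
    "A: Liam has 3 boxes with 4 pens each. So he has 3 * 4 = 12 pens. "
    "The answer is 12."
)

def build_cot_prompt(problem_of_interest: str, instructed: bool = False) -> str:
    """Concatenate (optional instruction) + exemplar + problem of interest."""
    parts = ([INSTRUCTION] if instructed else []) + [
        EXEMPLAR,
        f"Q: {problem_of_interest}",
        "A:",
    ]
    return "\n\n".join(parts)

print(build_cot_prompt("Anna buys 2 apples every day. "
                       "How many apples does she buy in a week?",
                       instructed=True))
```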
**Zero-shot chain-of-thought prompting (0-CoT; Kojima et al., 2022)** is a variation of CoT where the prompt does not contain any exemplar. Instead, the model is prompted directly with the problem of interest followed by the instruction "_Let's think step by step:_".

**Least-to-most prompting (LtM; Zhou et al., 2022)** teaches language models to (1) break down a problem into subproblems, and (2) solve those subproblems sequentially using CoT. The final answer is the answer to the last subproblem.

**Program prompts (Program; Chowdhery et al., 2022)** represent the arithmetic reasoning process as a program. Following prior work on solving GSM8K problems with code (Chowdhery et al., 2022; Gao et al., 2022; Chen et al., 2022), we include a Python program as the problem solution in the prompt, and execute the generated Python code using an external Python interpreter to obtain the final answer.

**Self-consistency (SC; Wang et al., 2022; Shi et al., 2022)** may further boost the reasoning performance by marginalizing over intermediate reasoning steps that share the same final result. In practice, SC can be implemented by (1) sampling several solutions from the large language model and (2) taking the majority vote. Note that SC is orthogonal to the above techniques and can be combined with any of them.

### Prompt Design

We present some example prompts used in our experiments (Figure 3). For few-shot prompting techniques (i.e., CoT, LtM and Program), the input prompt includes exemplar problems and their solutions before the problem of interest. In order to keep things simple and avoid overfitting in prompt engineering, we follow Zhou et al. (2022) on exemplar creation; that is, we only use one simple exemplar for our main experiments. This exemplar is based either on the [Original Problem] or on the [Problem with Irrelevant Context], which allows us to investigate the effect of irrelevant information in the prompt exemplar. For 0-CoT, we adhere to Kojima et al. (2022) and directly present the problem of interest followed by "_A: Let's think step by step:_".

### Instructed Prompting

In addition to presenting irrelevant information in the exemplars, we also investigate whether natural language instructions help language models ignore irrelevant context and become less distracted. Extending the line of work (Suzgun et al., 2022; Sanh et al., 2021; Ouyang et al., 2022) that includes a general task description before exemplars, we add the sentence _"Solve grade school math problems. Feel free to ignore irrelevant information given in the questions."_ before our exemplars in the prompt (Figure 3), which explicitly _instructs_ the language model to ignore irrelevant information in the problem description.

## 5 Experiments

Being mindful of the experiment costs, we uniformly sample 4,000 examples from the GSM-IC dataset (denoted by GSM-IC-4K)5 for evaluation and analysis purposes throughout this paper. We use code-davinci-002 for all our experiments. For experiments without self-consistency decoding, we use greedy decoding (i.e., temperature \(\tau=0\)); for self-consistency experiments that require multiple samples per problem, we sample 20 responses with temperature \(\tau=0.7\), following Wang et al. (2022c).

Footnote 5: Our sampled GSM-IC-4K covers all 100 base problems.

### Main Results on GSM-IC

We compare the performance of different prompting techniques on GSM-IC-4K (Table 2), in terms of both micro and macro accuracies, as well as their corresponding normalized accuracies.
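To make the evaluation protocol concrete, the sketch below shows how the self-consistency vote (§4.1) and the micro, macro, and normalized accuracies (§3.2) can be computed. This is our own minimal illustration, not the authors' evaluation code, and the toy data at the bottom are hypothetical; the final line reproduces the normalization of CoT's 76.8 overall micro accuracy by its 95.0 base accuracy.

```python
from collections import Counter

def majority_vote(sampled_answers):
    """Self-consistency: return the most frequent final answer among samples
    (ties are broken by taking an arbitrary top-tier answer)."""
    return Counter(sampled_answers).most_common(1)[0][0]

def micro_accuracy(pred, gold):
    """Average accuracy over all individual test problems."""
    return sum(pred[p] == gold[p] for p in gold) / len(gold)

def macro_accuracy(pred, gold, base_of):
    """Fraction of base problems whose *every* derived variant is solved."""
    per_class = {}
    for p in gold:
        per_class.setdefault(base_of[p], []).append(pred[p] == gold[p])
    return sum(all(flags) for flags in per_class.values()) / len(per_class)

def normalized(acc, base_acc):
    """Accuracy divided by the method's accuracy on the base problems."""
    return acc / base_acc

# Hypothetical toy data: two base problems ("b1", "b2"), two variants each.
gold = {"b1_v1": 12, "b1_v2": 12, "b2_v1": 7, "b2_v2": 7}
base_of = {p: p.split("_")[0] for p in gold}
pred = {p: majority_vote([gold[p], gold[p], 0]) for p in gold}  # 2-of-3 correct

print(micro_accuracy(pred, gold))             # 1.0
print(macro_accuracy(pred, gold, base_of))    # 1.0
print(round(normalized(0.768, 0.95), 3))      # 0.808, as in Table 2 (CoT, Norm)
```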
Overall, we observe a significant performance drop for all prompting techniques. The drop in macro accuracy is especially large, showing that fewer than 30% of the base problems are consistently solved after adding distractors. In Figure 4, we present a GSM-IC-4K example where a single irrelevant sentence causes different types of errors for all the investigated prompting techniques.

Figure 3: Prompt formats for the investigated techniques on the right, which are constructed from building blocks on the left (best viewed in color). The [Problem with Irrelevant Context] is obtained by adding an irrelevant sentence (_italic and underlined_) to the original problem description, and it can be used as an alternative to the [Original Problem] in the prompts on the right. In these prompts, identifiers highlighted and wrapped by brackets (e.g., [Problem of Interest]) are replaced by the contents of the corresponding building blocks. The prompts for all settings can be found in Appendix C.

Figure 4: Example problem and corresponding outputs by different prompting techniques (best viewed in color). The CoT answer to the original problem is highlighted in green. The added irrelevant sentence is in _italic and highlighted in red_, and causes different errors (highlighted in yellow) for all prompting techniques. More examples of model predictions can be found in Appendix B.

\begin{table}
\begin{tabular}{l c|c c c c|c c c c}
\hline \hline
\multirow{2}{*}{**Method**} & **Exemplar** & \multicolumn{4}{c|}{**Micro Accuracy**} & \multicolumn{4}{c}{**Macro Accuracy**} \\
 & w/ IrrCtx? & 2 Steps & \(>\)2 Steps & Overall & _Norm_ & 2 Steps & \(>\)2 Steps & Overall & _Norm_ \\
\hline
CoT & ✓ & 79.8 & 72.4 & 76.8 & _80.8_ & 16.7 & 10.0 & 14.0 & _14.7_ \\
 & ✗ & 73.5 & 70.8 & 72.4 & _76.2_ & 8.3 & 2.5 & 6.0 & _6.3_ \\
CoT + Inst. & ✓ & 80.5 & 74.4 & 78.1 & _82.2_ & 20.0 & 12.0 & 17.0 & _17.9_ \\
 & ✗ & 79.0 & 76.0 & 77.8 & _81.8_ & 20.0 & 7.0 & 15.0 & _15.8_ \\
\hline
0-CoT & N/A & 29.0 & 29.1 & 29.0 & _65.9_ & 1.7 & 0.0 & 1.0 & _2.3_ \\
0-CoT + Inst. & N/A & 31.6 & 28.8 & 30.5 & _69.3_ & 1.7 & 0.0 & 1.0 & _2.3_ \\
\hline
LtM & ✓ & 78.1 & 84.6 & 80.7 & _85.9_ & **23.3** & **35.0** & **28.0** & _29.8_ \\
 & ✗ & 74.9 & 81.5 & 77.5 & _82.4_ & 16.7 & 20.0 & 18.0 & _19.1_ \\
LtM + Inst. & ✓ & **81.0** & **85.4** & **82.8** & **88.1** & **23.3** & **35.0** & **28.0** & _29.8_ \\
 & ✗ & 80.1 & 81.3 & 80.6 & _85.7_ & 18.3 & 35.0 & 25.0 & _26.6_ \\
\hline
Program & ✓ & 67.0 & 55.0 & 62.2 & _74.9_ & 11.7 & 5.0 & 9.0 & _10.8_ \\
 & ✗ & 59.1 & 47.4 & 54.4 & _65.5_ & 6.7 & 2.5 & 5.0 & _6.0_ \\
Program + Inst. & ✓ & 68.8 & 54.8 & 63.2 & _76.1_ & 15.0 & 7.5 & 12.0 & _14.5_ \\
 & ✗ & 60.6 & 50.9 & 56.7 & _68.3_ & 6.7 & 5.0 & 6.0 & _7.2_ \\
\hline \hline
CoT + SC & ✗ & 87.6 & 90.1 & 88.1 & _91.8_ & 29.0 & 28.3 & 30.0 & _31.3_ \\
0-CoT + SC & N/A & 61.6 & 68.4 & 64.3 & _84.6_ & 0.0 & 2.5 & 1.0 & _1.3_ \\
LtM + SC & ✗ & **92.4** & **94.8** & **93.4** & **94.3** & **51.6** & **35.0** & **45.0** & **45.5** \\
Program + SC & ✗ & 73.5 & 76.1 & 74.6 & _82.0_ & 16.7 & 7.5 & 13.0 & _14.3_ \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Micro and macro accuracies (\(\times 100\)) on the GSM-IC-4K dataset. IrrCtx denotes irrelevant contexts, and SC denotes self-consistency. _Norm_ is the overall accuracy normalized by the fraction of solved base problems (Table 1), which is a measure of robustness w.r.t. irrelevant information. The best number in each column for each section (with or without self-consistency) is in **boldface**.

**Self-consistency significantly reduces the distractibility.** Taking the majority vote from 20 samples,6 SC improves the overall micro accuracy by more than 11 percentage points. This means that, in addition to improving model performance on clean arithmetic reasoning tasks (Wang et al., 2022c), SC also substantially reduces the distractibility of large language models by irrelevant context. The gain in micro accuracy is notably large for 0-CoT (35.5 percentage points). Furthermore, the correct answer for 99.7% of the problems is among the 20 sampled answers for both CoT and LtM. Even for 0-CoT, the recall of correct solutions within 20 samples is 96.5%. Despite these improvements, the best macro accuracy among all prompting techniques is only \(45\%\), suggesting that for more than half of the base problems, SC fails to prevent the model from being distracted by different variants of irrelevant information. These results imply that a better algorithm may be developed to further reduce the distractibility based on a few sampled solutions.

Footnote 6: If there is a tie, we take a random top-tier result for evaluation, following Wang et al. (2022c) and Shi et al. (2022a).

### Break-Down Analysis

#### 5.2.1 Factors of the Irrelevant Contexts

We analyze the performance of CoT, LtM and Program with respect to the considered factors (§3.1) of the irrelevant sentences (Table 3). We find that (1) in-topic sentences with (2) role name overlap and (3) in-range numbers are generally more challenging, which is exemplified by Figure 4. For LtM, the latter two factors do not have a large effect on the micro accuracy. The difference is more significant for the macro accuracy and, as an anomaly, using distractors with in-range numbers turns out to be less challenging than out-of-range numbers when using irrelevant context in the exemplar. In addition, LtM outperforms CoT and Program on all investigated sub-categories.

#### 5.2.2 Break-Down Accuracies w.r.t. # Steps

We analyze the break-down accuracies for problems with respect to the number of reasoning steps (Figure 5).
While we see a significant drop for CoT and Program on problems that require four or more steps in the reasoning process, the performance of LtM is fairly consistent across difficulty levels. In addition to the advantage of LtM on clean problems that require complicated reasoning (Zhou et al., 2022), our results show that LtM is also less sensitive to irrelevant context on complicated problems that require more steps to solve.

Figure 5: Micro accuracies on GSM-IC-4K with respect to the number of required reasoning steps.

### Instructed Prompting Improves Robustness to Irrelevant Contexts

We have shown that using exemplars with distractors improves robustness to irrelevant context. We also compare the performance of instructed prompting with that of the prompts without instructions in Table 2. Adding instructions to CoT, LtM, and Program consistently improves their performance. Surprisingly, instructed prompting with original exemplars reaches comparable or even better performance than uninstructed prompting that uses exemplars with distractors, for both CoT and LtM. Note that adding the instruction _"Solve grade school math problems."_ alone does not significantly improve the performance; it is the instruction _"Feel free to ignore irrelevant information given in the questions."_ that makes the difference. Similar to the instruction _"Let's think step by step."_ employed by 0-CoT, this shows that language models are, to some extent, able to follow natural language instructions in a way that dramatically changes their problem-solving behavior, suggesting that such instructions may be useful for guiding the behavior of language models on more tasks.

On the original GSM8K development set (Cobbe et al., 2021; Zhou et al., 2022), we do not observe a drop in accuracy when using exemplars with irrelevant information, adding natural language instructions, or both (Table 4). The same holds for SVAMP (Patel et al., 2021), an arithmetic reasoning benchmark constructed by applying different types of variations to math problems from existing clean datasets, e.g., changing sentence structures, asking different questions with the same information, etc. This is impressive because the results on GSM-IC show that prompt exemplars with irrelevant information and instructed prompting both improve robustness. For the Program prompt, we find that using exemplars with distractors even increases the performance on SVAMP.

### Complicated Prompts May Hurt the Robustness to Irrelevant Contexts

We compare our 1-exemplar CoT prompt (Figure 3) to a 4-exemplar prompt (Appendix D of Zhou et al., 2022), which is reported as the best-performing CoT prompt on GSM8K, on GSM-IC (Table 5). Note that the 1-exemplar CoT prompt only includes a problem with a 2-step solution, while the 4-exemplar prompt includes problems that require more reasoning steps. While the 4-exemplar prompt leads to better performance on the original GSM8K development set, it is surprisingly more susceptible to the distraction caused by the irrelevant context. In particular, the 4-exemplar prompt is consistently worse than the 1-exemplar prompt on problems with more than 2 intermediate steps. Even on 2-step problems, the accuracy improvement from adding more exemplars is almost negligible when using instructions (79.0 vs 79.2). Overall, this finding indicates that adding more exemplars can make the prompt less robust, as it leads to some overfitting.
## 6 Conclusion and Discussion

In this work, we introduce GSM-IC, a dataset that supports a comprehensive study of the distractibility of large language models when performing arithmetic reasoning in the presence of irrelevant context. We examine a variety of prompting techniques on GSM-IC, and demonstrate that they are all sensitive to irrelevant information in the problems. Among the studied techniques, self-consistency (Wang et al., 2022c) leads to a substantial improvement in robustness to irrelevant context across the board, and presenting example problems with irrelevant context in the prompt also consistently improves the performance. Similarly, we find that simply adding an instruction to ignore irrelevant information brings notable performance gains on our benchmark. Despite the improvement achieved by these methods, the fundamental issue remains: a single piece of irrelevant information can distract the models and substantially degrade their performance, even on problems whose clean versions they correctly solve. We encourage researchers to also prioritize improving on this fundamental limitation when developing new training and prompting techniques. We leave further investigation of distractibility on other tasks and with different language models to future work.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
\multirow{2}{*}{**Method**} & **Exemplar** & \multicolumn{2}{c}{**Accuracy**} \\
 & w/ IrrCtx? & GSM8K Dev. & SVAMP Test \\
\hline
CoT & ✓ & 59.3 & 79.1 \\
 & ✗ & 60.3 & 77.6 \\
CoT + Inst. & ✓ & 59.3 & 79.1 \\
 & ✗ & 58.8 & 78.7 \\
\hline
LtM & ✓ & 61.9 & 76.9 \\
 & ✗ & 59.8 & 76.6 \\
LtM + Inst. & ✓ & 60.9 & 76.2 \\
 & ✗ & 60.3 & 76.3 \\
\hline
Program & ✓ & 58.6 & 80.0 \\
 & ✗ & 59.8 & 77.3 \\
Program + Inst. & ✓ & 59.2 & 77.9 \\
 & ✗ & 61.1 & 77.8 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Accuracies (\(\times 100\)) on the GSM8K development set and the SVAMP test set. IrrCtx denotes irrelevant contexts, and +Inst. denotes instructed prompting. The baseline results (i.e., those using the simplest exemplars without irrelevant context and without instructions) are underlined.

\begin{table}
\begin{tabular}{l|c c c c c c|c c c c c c}
\hline \hline
\multirow{3}{*}{**Method**} & \multicolumn{6}{c|}{**Micro Accuracy**} & \multicolumn{6}{c}{**Macro Accuracy**} \\
 & \multicolumn{2}{c}{**Topic**} & \multicolumn{2}{c}{**Role Overlap**} & \multicolumn{2}{c|}{**Num. Range**} & \multicolumn{2}{c}{**Topic**} & \multicolumn{2}{c}{**Role Overlap**} & \multicolumn{2}{c}{**Num. Range**} \\
 & In & Off & Yes & No & In & Out & In & Off & Yes & No & In & Out \\
\hline
\multicolumn{13}{l}{_Prompting Exemplar w/o Irrelevant Context_} \\
CoT & 63.1 & 80.7 & 68.3 & 76.6 & 70.2 & 74.6 & 10.2 & 33.0 & 10.3 & 22.2 & 11.0 & 19.0 \\
LtM & **70.8** & **83.4** & **77.0** & **78.2** & & **77.2** & **23.5** & **45.0** & **25.8** & **35.4** & **27.0** & **29.0** \\
Program & 44.1 & 63.5 & 50.7 & 58.4 & 54.3 & 54.5 & 4.1 & 24.0 & 9.3 & 16.2 & 7.0 & 11.0 \\
\hline
\multicolumn{13}{l}{_Prompting Exemplar w/ Irrelevant Context_} \\
CoT & 70.2 & 82.7 & 73.6 & 80.2 & 76.1 & 77.7 & 18.4 & 43.0 & 21.6 & 32.3 & 22.0 & 26.0 \\
LtM & **73.0** & **87.5** & **81.4** & **80.2** & **80.0** & **81.4** & **28.6** & **58.0** & **37.1** & **42.4** & **41.0** & **35.0** \\
Program & 52.9 & 70.5 & 60.2 & 64.5 & 61.5 & 62.8 & 10.2 & 37.0 & 14.4 & 23.2 & 15.0 & 17.0 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Breakdown accuracies (\(\times 100\)) w.r.t. the factors of the added irrelevant sentence. Lower accuracy indicates that the model is more fragile to the corresponding type of irrelevant context. Note that the macro average accuracies are higher than the corresponding ones reported in Table 2, as we only include a subset of the created problems (i.e., those corresponding to the appropriate factor) here to compute the metric. The best result in each column is in **boldface**.

\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
\multirow{2}{*}{**Method**} & **\# Prompting** & **GSM8K** & \multicolumn{2}{c}{**GSM-IC**} \\
 & **Exemplars** & Dev. & 2 Steps & \(>\)2 Steps \\
\hline
CoT & 1 & 60.3 & 73.6 & 70.8 \\
 & 4 & 66.3 & 78.0 & 69.4 \\
CoT + Inst. & 1 & 58.8 & 79.0 & **76.0** \\
 & 4 & **66.5** & **79.2** & 70.6 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Micro accuracies (\(\times 100\)) on the GSM8K development set and GSM-IC-4K. \# Prompting exemplars denotes the number of exemplars used in the prompt. The best number in each column is in **boldface**.
2308.16872
Constraining the geometry of the reflection nebula NGC 2023 with [O I]: Emission & Absorption
We have mapped the NGC 2023 reflection nebula in the 63 and 145 micron transitions of [O I] and the 158 micron [C II] spectral lines using the heterodyne receiver upGREAT on SOFIA. The observations were used to identify the diffuse and dense components of the PDR traced by the [C II] and [O I] emission, respectively. The velocity-resolved observations reveal the presence of a significant column of low-excitation atomic oxygen, seen in absorption in the [O I] 63 micron spectra, amounting to about 20-60% of the oxygen column seen in emission in the [O I] 145 micron spectra. Some self-absorption is also seen in [C II], but for the most part it is hardly noticeable. The [C II] and [O I] 63 micron spectra show strong red- and blue-shifted wings due to photo-evaporation flows, especially in the southeastern and southern parts of the reflection nebula, where comparison with the mid- and high-J CO emission indicates that the C+ region is expanding into a dense molecular cloud. Using a two-slab toy model, the large-scale self-absorption seen in [O I] 63 micron is readily explained as originating in foreground low-excitation gas associated with the source. Similar columns have also been observed recently in other Galactic photon-dominated regions (PDRs). These results have two implications: first, for velocity-unresolved extragalactic observations this could impact the use of [O I] 63 micron as a tracer of massive star formation; second, the widespread self-absorption in [O I] 63 micron leads to an underestimate of the column density of atomic oxygen derived from this tracer and necessitates the use of alternative indirect methods.
Bhaswati Mookerjea, Goeran Sandell, Rolf Guesten, Helmut Wiesemeyer, Yoko Okada, Karl Jacobs
2023-08-31T17:21:07Z
http://arxiv.org/abs/2308.16872v1
# Constraining the geometry of the reflection nebula NGC 2023 with [O i]: Emission & Absorption

###### Abstract

We have mapped the NGC 2023 reflection nebula in the 63 and 145 \(\mu\)m transitions of [O i] and the 158 \(\mu\)m [C ii] spectral lines using the heterodyne receiver upGREAT on SOFIA. The observations were used to identify the diffuse and dense components of the PDR traced by the [C ii] and [O i] emission, respectively. The velocity-resolved observations reveal the presence of a significant column of low-excitation atomic oxygen, seen in absorption in the [O i] 63 \(\mu\)m spectra, amounting to about 20-60% of the oxygen column seen in emission in the [O i] 145 \(\mu\)m spectra. Some self-absorption is also seen in [C ii], but for the most part it is hardly noticeable. The [C ii] and [O i] 63 \(\mu\)m spectra show strong red- and blue-shifted wings due to photo-evaporation flows, especially in the southeastern and southern parts of the reflection nebula, where comparison with the mid- and high-\(J\) CO emission indicates that the C\({}^{+}\) region is expanding into a dense molecular cloud. Using a two-slab toy model, the large-scale self-absorption seen in [O i] 63 \(\mu\)m is readily explained as originating in foreground low-excitation gas associated with the source. Similar columns have also been observed recently in other Galactic photon-dominated regions (PDRs). These results have two implications: first, for velocity-unresolved extragalactic observations this could impact the use of [O i] 63 \(\mu\)m as a tracer of massive star formation; second, the widespread self-absorption in [O i] 63 \(\mu\)m leads to an underestimate of the column density of atomic oxygen derived from this tracer and necessitates the use of alternative indirect methods.

keywords: ISM: clouds - ISM: kinematics and dynamics - submillimetre: ISM - ISM: structure - stars: formation - ISM: individual (NGC 2023)

## 1 Introduction

The fine-structure lines of [C ii] at 158 \(\mu\)m and [O i] at 63 and 145 \(\mu\)m are the most important cooling lines in the far-infrared (FIR) and have long been used to study photon-dominated regions (PDRs). In the rest of the paper we refer to the [O i] 63 \(\mu\)m and [O i] 145 \(\mu\)m fine-structure lines simply as [O i] 63 and [O i] 145. PDRs are regions where far-ultraviolet (FUV; 6 eV \(<h\nu<\) 13.6 eV) radiation from young massive stars dominates the physics and the chemistry of the interstellar medium; they correspond to the transition from ionized to molecular gas (Hollenbach & Tielens, 1997). PDRs were first studied extensively with the Infrared Space Observatory, though the lack of spectral and spatial resolution was a severe limitation. This changed with the Kuiper Airborne Observatory (in limited regions) and with the Heterodyne Instrument for the Far Infrared (HIFI) on Herschel, which had both velocity resolution and sensitivity, even enabling determination of the optical depth of [C ii] by also observing \({}^{13}\)[C ii] (Ossenkopf et al., 2013). After Herschel this work was taken over first by GREAT on SOFIA and later by the array receiver upGREAT. These studies have found that the [C ii] emission can often be moderately optically thick and sometimes significantly self-absorbed (Mookerjea et al., 2018; Guevara et al., 2020; Graf et al., 2012), which may lead to [C ii] column densities being underestimated by a factor of two to three. The [O i] 63 \(\mu\)m line can also be optically thick and self-absorbed, sometimes even more strongly than [C ii] (see e.g., Mookerjea et al., 2021).
NGC 2023, illuminated by the B2 star HD 37903, is one of the best-studied reflection nebulae. It is nearby, \(\sim\)400 pc, and exhibits a strong, nearly edge-on PDR in which the cavity expands into the surrounding dense molecular cloud. It has served as a test bed for exploring PDR models (e.g. Kaufman et al., 2006). It was mapped with GREAT on SOFIA in [C ii] and CO(11-10) with the single-pixel L1/L2 mixers by Sandell et al. (2015), who also mapped the nebula in \({}^{13}\)CO(3-2) and various transitions of CO from J=3-2 up to J=7-6. Here we have additionally mapped the NGC 2023 PDR in the 63 and 145 \(\mu\)m fine-structure transitions of [O i] and at the same time obtained a deeper map of [C ii] using upGREAT on SOFIA. The [O i] observations enable us for the first time to study the physical conditions in the southeastern and southern parts of the nebula, where the C\({}^{+}\) region is expanding into the dense surrounding molecular cloud, as well as to estimate the relative contributions of the dense and diffuse PDR gas to the [C ii] emission. The newly obtained observations are analyzed in combination with the previously published maps of (i) \({}^{13}\)CO(3-2), CO(6-5) and CO(7-6), observed with APEX with beam sizes of 18″.5, 9″.1 and 7″.7, respectively, and (ii) CO(11-10), observed with SOFIA/GREAT with a beam size of 23″ (Sandell et al., 2015).

## 2 Observations

The reflection nebula NGC 2023 was observed on a 90 minute leg with the SOFIA upGREAT1 receiver (Risacher et al., 2016) in GREAT consortium time on SOFIA flight #668 (Project ID: 83_0731) out of Palmdale, CA, on March 6, 2020. Although NGC 2023 was observed at a low flight altitude, 11.6 km (38,000 feet), the observing conditions were very good, with a precipitable water vapor (PWV) around 5-10 \(\mu\)m. The upGREAT was in the Low Frequency Array/High Frequency Array (LFA/HFA) configuration with both arrays operating in parallel. The V polarization of the LFA was tuned to [C ii] at 1.9005369 THz, while the H polarization was tuned to the [O i] 145 \(\mu\)m line at 2.06006886 THz. The HFA was tuned to the [O i] 63 \(\mu\)m line at 4.7447749 THz. We made total power on-the-fly (OTF) maps in classical position-switched mode. The reference position was at (+390″, \(-\)70″) relative to HD 37903 (\(\alpha_{J2000}\): 05\({}^{h}\)41\({}^{m}\)38.39\({}^{s}\), \(\delta_{J2000}\): \(-\)02\({}^{\circ}\)15\({}^{\prime}\)32″.5). The OTF map was done in two coverages, scanning in both x and y. The map was rotated by 45\({}^{\circ}\). The sampling was done every 3″ with a sampling time of 0.3 second per dump. This resulted in maps of \(\sim\)4′.9 \(\times\) 3′.9 for the LFA array, while the map size for the HFA array was 3′.0 \(\times\) 2′.1, which was enough to cover the SE quadrant of the reflection nebula, where [O i] was expected to be strong.

Footnote 1: The German REceiver for Astronomy at Terahertz frequencies (upGREAT) is a development by the MPI für Radioastronomie and the KOSMA/Universität zu Köln, in cooperation with the DLR Institut für Optische Sensorsysteme.

The observations were reduced and calibrated by the GREAT team. The GREAT team also provided beam sizes (14″.1 for [C ii], 13″.0 for [O i] 145 \(\mu\)m, and 6″.3 for [O i] 63 \(\mu\)m) and beam efficiencies derived from planet observations. The data were corrected for atmospheric extinction, flat-fielded and calibrated in \(T_{\rm mb}\). Further processing of the data was done with the CLASS2 software.
Footnote 2: CLASS is part of the Grenoble Image and Line Data Analysis Software (GILDAS), which is provided and actively developed by IRAM, and is available at [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS)

We also retrieved some high-quality [C ii] data from the SOFIA archive, which extended the [C ii] map to the north. This data set was from project 02_0090 (PI: Els Peeters) and fully calibrated in the SOFIA archive. All data in this paper are presented on the main-beam temperature scale.

## 3 Results

### Morphology & Kinematics of the Region

Sandell et al. (2015) used maps of multiple CO rotational transitions combined with the [C ii] fine-structure line at 158 \(\mu\)m to probe the overall morphology of the reflection nebula NGC 2023. These observations showed that the [C ii] emission traces an expanding, ellipsoidal, shell-like PDR, which is surrounded by a hot molecular shell. In the southeast, where the [C ii] region expands into a dense, clumpy molecular cloud ridge, they also detected emission from high-\(J\) CO lines, apparently originating in a thin, hot molecular shell surrounding the [C ii] emission. These authors found that there was a clear velocity gradient across the nebula, with the emission being more blue-shifted in the south and southeast and more red-shifted in the north and northwest, some of which appeared to be due to an expansion of the nebula. Sandell et al. also noted that high angular resolution images of vibrationally excited H\({}_{2}\) and PAH emission suggested that the PDR was far from smooth, exhibiting lumpy ridges and bright filamentary structures. The [C ii] and high-\(J\) CO maps looked smoother, partly because they had insufficient spatial resolution to see such details. Here we have additionally mapped the fine-structure transitions of [O i] at 63 and 145 \(\mu\)m, which are good tracers of the warm and dense regions of the PDR. The upper energy levels of the 63 \(\mu\)m and 145 \(\mu\)m [O i] transitions are 227.7 K and 326.6 K, respectively, while formally the critical densities for these transitions are 5 \(\times\) 10\({}^{5}\) cm\({}^{-3}\) and 5 \(\times\) 10\({}^{6}\) cm\({}^{-3}\), respectively, for collisions with H\({}_{2}\) (Goldsmith, 2019). The [O i] 63 emission, however, can be very optically thick and self-absorbed, as was seen toward the S1 PDR in \(\rho\) Oph (Mookerjea et al., 2021), and the same is true for NGC 2023. However, here we have also observed [O i] 145, which is unaffected by self-absorption. The [O i] 145 emission is largely optically thin and shows that the dense PDR has a lumpy and filamentary structure. The [O i] 145 map looks qualitatively similar to the [C ii] map (Fig. 1), but the [O i] emission is concentrated in the dense southeastern and southern parts of the PDR and is more structured, while the [C ii] emission is rather smooth. In the north and northwest the gas densities are too low to collisionally excite [O i]. The [O i] 63 map, which has a spatial resolution of 6″.3, shows that the emission is extremely patchy over the area where it is detected (Fig. 2). The emission is essentially completely self-absorbed in the southeast and south, while it is seen in emission in the east and northeast, where the absorption by the foreground cloud is less severe. Even here it is strongly affected by absorption from colder foreground gas, giving the integrated line emission a very patchy and lumpy appearance.
We also compare the [O i] 145 map with maps of CO(7-6) and CO(11-10) (Sandell et al., 2015), which also trace the hot, dense PDR, as well as with C91\(\alpha\), which was mapped by Wyrowski et al. (2000) (Fig. 3). The overall morphology of [O i] 145 matches CO(7-6) and CO(11-10) quite well. Both appear to trace hot gas at the leading edge of the expanding PDR shell. That CO(7-6) emission is not seen from the entire PDR where [O i] is detected is likely due to the fact that the emission is strong enough only from a narrow shell, which needs to be viewed almost tangentially to have enough column density to be detected. The emission from the remaining hot gas is likely below the sensitivity of our observations. C91\(\alpha\) shows a different morphology than CO(11-10), although they both appear to originate in approximately the same region. However, where C91\(\alpha\) is strong, CO(11-10) is faint, as in the southern part of the PDR shell, where C91\(\alpha\) shows two strong peaks. Both of these peaks are seen in [O i] 145, which appears to trace the same gas as C91\(\alpha\). In the east, where C91\(\alpha\) is faint, there is no clear peak in [O i] 145 either. We note that although the [O i] 145 traces the high-density ridge and the clumps in the ridge, the emission is slightly offset compared to CO(11-10) and C91\(\alpha\), which appear to lie further out in the PDR shell, i.e., closer to the cold molecular cloud (Fig. 3). Based on a detailed analysis of the high-\(J\) CO lines, Sandell et al. (2015) estimated the hot molecular shell to be between 90-120 K and to have densities \(n\sim 10^{5}\)-\(10^{6}\) cm\({}^{-3}\). The velocity-channel maps of the [O i] 145 emission clearly show that the dense PDR clumps along the ridge are also at different velocities, with the southwestern clumps being more red-shifted than the northeastern clump (Fig. 4). At v = 9 km s\({}^{-1}\) the region northeast of HD 37903 dominates the [O i] 145 and [C ii] emission. This emission peak, at \(\sim\)(+27″, +18″), coincides with the strongest [C ii] emission peak, \(\sim\)55 K, in the whole map. It is also seen as a distinct peak in CO(6-5), CO(7-6) and CO(11-10), but it is relatively faint compared to the emission at v \(\sim\) 10.5 km s\({}^{-1}\), which dominates the emission in the southeastern part of the PDR. This suggests that the northeastern emission peak may have somewhat lower gas densities than the PDR emission in the ridge. Optical images show that the NGC 2023 reflection nebula is quite large, with a size of \(\sim 10^{\prime}\times 10^{\prime}\). In the near- and mid-IR it is only about half the size, \(\sim\)5′.2 \(\times\) 5′.3 (Mookerjea et al., 2009). This is the regime where the emission is dominated by the PDR. In the near-IR one can see that HD 37903 illuminates a bright ridge \(\sim 120^{\prime\prime}-150^{\prime\prime}\) to the north, which intercepts some of the FUV radiation. This ridge is dominated by fluorescent H\({}_{2}\) emission, see e.g. the narrowband H\({}_{2}\) imaging by Field et al. (1998). The feature Field et al. call the IR triangle is quite bright in [C ii] (Fig. 1). The peak of the [C ii] emission (+27″, +18″) coincides with the head of what Field et al. refer to as the Seahorse. It is clear that the PDR shell opens up to the north, as well as in a region east of the IR triangle (Fig. 1).
Even in the southeast, where the C\({}^{+}\) region is surrounded by the dense molecular cloud, one can see that FUV radiation escapes, like the tongue of [C ii] emission sticking out at a position angle (measured counter-clockwise from north) of \(\sim 110^{\circ}\) (Fig. 1). The position-velocity diagrams (Fig. 5) provide us with an alternative way to examine the morphology of the PDR and what the different PDR tracers show us. There is a clear velocity gradient from south to north, with the emission in the northern part of the PDR being more red-shifted, seen in all the tracers, even [O i] 145, although the [O i] 145 map only partially covers the PDR ridge. This suggests that the IR triangle and the northern ridge are on the backside of the PDR. Near HD 37903 the [C ii] emission shows both blue- and red-shifted emission, suggesting that the whole C\({}^{+}\) region is embedded in the cloud, because one sees photo-evaporation flows from both the front and the back side of the PDR. The CO(4-3) and CO(6-5) show an outflow just south of HD 37903 and strong blue- and red-shifted emission where the position-velocity diagram crosses the outflows from Sellgren C and D. Although [C ii] also shows strong blue-shifted emission in the south, it is almost certainly due to strong photo-evaporation, as one can see from the velocity-channel maps (Fig. 4). These maps show strong blue-shifted emission over the whole PDR in the southeast. The east-west position-velocity diagram shows the strong blue-shifted photo-evaporation flow inside the PDR shell, and it appears that some FUV radiation "leaks" through the shell, i.e., one can see [C ii] emission at distances \(>\) 100″ from HD 37903 at roughly the same velocity, \(\sim\) 9.9 km s\({}^{-1}\). Closer to the star the [C ii] emission appears to be somewhat self-absorbed. [O i] 145 shows blue-shifted emission east of HD 37903, indicating that the photo-evaporating gas is so dense that it is seen even in [O i] 145.

Figure 1: Integrated intensity images (in both color and contours) of [C ii] at 158 \(\mu\)m (left) and [O i] at 145 \(\mu\)m (right). The color scale in K km s\({}^{-1}\) for each panel is shown at the top. Contour levels (in K km s\({}^{-1}\)) for [C ii] are 40, 46, 65, 70, 76, 83, 90, 98, 109, 131, 141, 150, 160, 169 and 173. Contour levels (in K km s\({}^{-1}\)) for [O i] 145 are 3, and 5 to 50 in steps of 5. The coordinates are shown as offsets relative to the center RA: 05\({}^{h}\)41\({}^{m}\)38.39\({}^{s}\), Dec: \(-\)02\({}^{\circ}\)15\({}^{\prime}\)32″.5, which is also the location of the illuminating star HD 37903, marked by a star symbol. Positions which are studied in detail are marked with '+'. The dashed lines drawn in the left panel show the directions along which position-velocity diagrams have been studied in Fig. 5. The beam sizes of the [C ii] and [O i] 145 maps are 14″.1 and 13″, respectively, and are shown as hatched circles in each panel.

Figure 2: Integrated intensity image (color) of [O i] 63 \(\mu\)m overlaid with contours of [O i] 145 \(\mu\)m intensities. The color scale is in K km s\({}^{-1}\), and the beam size of the [O i] 63 \(\mu\)m map is shown as a hatched circle in the bottom left corner of the figure. Contour levels (in K km s\({}^{-1}\)) for [O i] 145 \(\mu\)m are 3, and 5 to 50 in steps of 5. The rest of the details are the same as in Fig. 1.
To the west, the [O i] 145 becomes more red-shifted, which is not really noticeable in [C ii]. The CO lines pick up the blue-shifted outflow slightly west of HD 37903. The position-velocity diagram at PA 110\({}^{\circ}\) shows one of the regions where [C ii] breaks through the PDR shell. However, it looks about the same as the east-west cut. Towards the west-northwest the [C ii] splits up into two velocity components. However, both [O i] 145 and \({}^{13}\)CO(3-2) show only a single velocity component roughly halfway between the two [C ii] peaks, indicating that [C ii] must be self-absorbed. The position-velocity diagram along the ridge cuts through the two strong C91\(\alpha\) peaks. The southern one is faint in [C ii], while it is quite prominent in [O i] 145. Here the emission is shifted toward \(\sim\) 10.7 km s\({}^{-1}\), whereas the clump to the northeast of it is at \(\sim\) 10.2 km s\({}^{-1}\) (Fig. 5). The velocity of the [O i] 145 emission appears to follow that of C91\(\alpha\) (Wyrowski et al., 2000), while these emission clumps are unnoticeable in [C ii]. The CO lines show strong blue-shifted emission from a molecular outflow powered by the young star Sellgren D.

### Spectral Profiles of [O i] lines

Based on the model proposed by Sandell et al. (2015), the spectral profiles of CO and [C ii] detected in NGC 2023 consist of several velocity components: the quiescent cloud seen in low-\(J\) CO lines and \({}^{13}\)CO, the PDR, and line wings from high-velocity molecular outflows seen in the outskirts of the reflection nebula. The [C ii] spectra also show prominent blue- or red-shifted wings from photo-evaporation flows. In order to see which of these components contribute to the [O i] emission, we have selected six positions, some of which were already analyzed by Sandell et al. (2015). All spectra are smoothed to a common angular resolution of 15″ (Fig. 6). We have also fitted the [O i] 145 spectra using Gaussian profiles consisting of one or two velocity components, as appropriate, and compared the fitted parameters with those derived for [C ii] (this work) and the CO(6-5), (7-6) and (11-10) transitions (Table 1). We have not used [O i] 63 \(\mu\)m, because the line is strongly self-absorbed over the whole area that we have mapped. The [O i] 145 spectra are all single-peaked, although we do see clear blue-shifted line wings at several positions, like at (40″, \(-\)40″) and (23″, \(-\)60″), where [C ii] shows very strong blue-shifted wings. The position (23″, \(-\)60″), which coincides with the C91\(\alpha\) emission peak 2, also shows blue-shifted emission in [O i] 145, and even in C91\(\alpha\). At position (60″, \(-\)60″), the [C ii] line is strongly self-absorbed at the velocity of [O i] 145, making the blue-shifted line wing appear stronger than the emission from the PDR (Table 1). Sandell et al. (2015) interpreted the double-peaked [C ii] spectra that they saw in their [C ii] map as emission coming from the front and the backside of the PDR shell, but the optically thin [O i] 145 spectra show that the dip in the [C ii] spectra is due to self-absorption, not two separate velocity components. Neither can we distinguish more than one velocity component in the CO spectra, although there are areas (see Fig. 6 and Table 1) where the lines appear broadened, most likely due to contributions from both the front and the back wall of the PDR.
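The one- and two-component Gaussian decomposition behind the fitted parameters in Table 1 can be sketched as follows. This is our own minimal Python illustration (the actual reduction used CLASS), fitted here to a synthetic spectrum with hypothetical initial guesses rather than to the SOFIA data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(v, *p):
    """Sum of Gaussian components; p = (T1, v1, fwhm1, T2, v2, fwhm2, ...)."""
    model = np.zeros_like(v)
    for T0, v0, fwhm in zip(p[0::3], p[1::3], p[2::3]):
        model += T0 * np.exp(-4 * np.log(2) * (v - v0) ** 2 / fwhm ** 2)
    return model

# Synthetic spectrum: velocity axis (km/s) and main-beam temperature (K).
v = np.linspace(0.0, 20.0, 400)
rng = np.random.default_rng(1)
spec = gaussians(v, 20.0, 10.4, 2.2) + 0.3 * rng.standard_normal(v.size)

# One-component fit with initial guesses (peak in K, velocity and FWHM in km/s).
popt, _ = curve_fit(gaussians, v, spec, p0=[15.0, 10.0, 2.0])
T_fit, v_fit, dv_fit = popt

# Integrated intensity of a Gaussian line: sqrt(pi / (4 ln 2)) ~ 1.064.
area = 1.064 * T_fit * dv_fit
print(f"T_mb = {T_fit:.1f} K, V_LSR = {v_fit:.2f} km/s, "
      f"dV = {dv_fit:.2f} km/s, Int = {area:.1f} K km/s")
```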
In most of the positions the [O i] 63 spectra show wings and line widths comparable to those of the [C ii] spectra, although they are often completely self-absorbed at the center of the line. The [O i] 145 spectra are much narrower than the [O i] 63 \(\mu\)m spectra and show profiles which are almost identical to the CO(7-6) emission, except at position (\(-\)2″, \(-\)78″), where the CO(7-6) shows strong blue-shifted emission from the outflow powered by Sellgren's star C (Sandell, priv. communication).

Figure 3: Integrated intensity images (color) of CO(7-6), CO(11-10) and C91\(\alpha\) compared with contours of [O i] 145 \(\mu\)m intensities. The color scale is in K km s\({}^{-1}\). The beam sizes of the CO(7-6), CO(11-10) and C91\(\alpha\) maps (in color) are 7″.7, 23″ and 20″, respectively, and are shown in the top right corner of each panel as a filled white circle. The data for the CO(7-6) and CO(11-10) emission, observed with APEX and GREAT/SOFIA respectively, are from Sandell et al. (2015), and the C91\(\alpha\) map is taken from Wyrowski et al. (2000). Contour levels (in K km s\({}^{-1}\)) for [O i] 145 are 3, and 5 to 50 in steps of 5. The rest of the details are the same as in Fig. 1.

Figure 4: Top panels: velocity channel maps for [O i] 145 \(\mu\)m in color, overlaid with contours at 3 to 30 K in steps of 2 K. Bottom panels: velocity channel maps (in color) for [C ii] overlaid with the [O i] 145 \(\mu\)m map in contours from 3 K to 30 K in steps of 2 K. The color scale for the channel maps is shown at the top of each set. The position of HD 37903 is shown as a "star". The coordinates in both panels are offsets in RA and Dec relative to the position of HD 37903, as in Fig. 1.

### Column Density of Atomic Oxygen

The [O i] 63 \(\mu\)m emission is strongly self-absorbed over most of the reflection nebula, whereas the [O i] 145 is always observed in emission with no evidence for self-absorption. Even though [C ii] and [O i] 63 \(\mu\)m show bright wings due to photo-evaporation flows, the photo-evaporating gas is not dense enough to be seen in [O i] 145, except in a few areas; see e.g. the spectra at positions (40″, \(-\)40″) and (23″, \(-\)60″), where [O i] 145 shows a clear blue-shifted line wing. In order to derive the distribution of atomic oxygen based on the 63 and 145 \(\mu\)m spectra, we fit the [O i] 63 spectra using a two-slab toy model: the hot PDR shell (lying on either side of HD 37903 along the line of sight) is assumed to be the background layer, while the colder oxygen on the front side is taken to be the foreground layer. While the gas in this layer, where the PDR merges into the cold molecular cloud, is largely molecular, most of the oxygen is still atomic. We further assume that the emission from the background layer is captured by the [O i] 145 emission. In this case the colder foreground oxygen primarily attenuates the background spectral emission at 63 \(\mu\)m by the factor \(\exp\left[-\tau_{0}\exp\left(-4\ln 2\left(v-v_{0}\right)^{2}/\Delta v^{2}\right)\right]\). Here \(\tau_{0}\), \(v_{0}\), and \(\Delta v\) denote the peak optical depth of the foreground cloud, the velocity at which the peak optical depth occurs, and the full width at half maximum (FWHM) of the absorption profile due to the foreground material. As pointed out earlier, the [O i] 63 emission, arising from a transition involving lower energy levels, also traces the blue- and red-shifted gas in photo-evaporation flows.
The [O i] 145 emission, which is only excited in the warm and dense gas, does not show these broad line wings. The background [O i] 63 emission is modeled by Gaussian profiles generated from fits to the observed [O i] 145 spectrum, scaled by a factor of 2, which corresponds to the typical intensity ratio between the two [O i] lines in temperature units and is equal to \(I_{63\mu m}/I_{145\mu m}\) = 25 in energy units. Calculations for warm, medium-density PDRs with \(T_{\rm kin}\) = 100 K and \(n\) = 10\({}^{5}\) cm\({}^{-3}\) suggest an \(I_{63\mu m}/I_{145\mu m}\) ratio between 20 and 33 (Fig. 10 of Goldsmith, 2019). Assuming the higher ratio of 33 would increase the derived oxygen column densities by 10-20% relative to the values obtained in this work. Figure 7 shows an example of the results of the fits to the [O i] 63 spectra using the two-slab toy model at the selected positions, and Table 2 presents the corresponding fitted parameters. At the selected positions the central velocity of the absorbing foreground cloud lies between 10.1 and 10.7 km s\({}^{-1}\), which matches well the velocity of the [O i] 145 emission spectra, with FWHMs between 1.8 and 2.4 km s\({}^{-1}\). This suggests that the [O i] 63 absorption occurs in the colder foreground layers of the same PDR gas that accounts for the emission of [O i] 145.

Figure 5: Position-velocity diagrams for [C ii], \({}^{13}\)CO(3-2), CO(4-3), CO(6-5) and [O i] 145 (from top to bottom in each column) along directions given by position angles PA of 0\({}^{\circ}\), 90\({}^{\circ}\) and 110\({}^{\circ}\), as well as along the ridge. The position angles are measured counterclockwise from north. The color scales (in K) are shown next to each panel. All the plots are at their respective original resolutions.

We have performed the fitting of the [O i] 63 profile over a larger number of positions in the map and obtained the distribution of the foreground cold absorbing component (Fig. 8). The fits using the [O i] 145 spectra as the model for the background component do not reproduce the red- and blue-shifted wings observed in the [O i] 63 spectra, since the gas in the photo-evaporation flows causing these wings is not dense enough to excite [O i] 145. We estimate the column density of the absorbing foreground layer by assuming that all the oxygen atoms reside in the lowest level of their ground state. The column density of all O atoms in the foreground layer can then be estimated from the center opacity \(\tau_{0}\) and width \(\Delta v\) (in km s\({}^{-1}\)) of the [O i] 63 line by

\[N(\mathrm{O}^{0})=2\times 10^{17}\,\tau_{0}\,\Delta v\ \ \ \ \mathrm{cm}^{-2} \tag{1}\]

The estimated column densities for the foreground gas range from 8 \(\times\) 10\({}^{17}\) to 2 \(\times\) 10\({}^{18}\) cm\({}^{-2}\). We can estimate a lower limit to the extinction if we assume that all the oxygen is atomic. With a typical gas-phase abundance of 3.2 \(\times\) 10\({}^{-4}\) for oxygen, and assuming most of the hydrogen to be molecular, we estimate \(N\)(H\({}_{2}\)) of the foreground gas to range between 1.2 and 3.2 \(\times\) 10\({}^{21}\) cm\({}^{-2}\), which translates to a minimum A\({}_{\rm V}\) of 1.3. However, since one third of the oxygen is tied up in CO in molecular gas, a more realistic estimate of the A\({}_{\rm V}\) is 2 mag.

Figure 8: The column density distribution of the foreground PDR gas responsible for the [O i] 63 absorption features in the NGC 2023 region, plotted in color and overlaid with the dense PDR gas traced by [O i] 145 emission. The contour levels (in K km s\({}^{-1}\)) for the [O i] 145 are 3, and 5 to 50 in steps of 5. This image shows that the densest, most opaque area of the foreground cloud is in the south and southwest, while the foreground extinction is much lower in the east and in the north.

Figure 6: Comparison of the [O i] 145 \(\mu\)m spectra at selected positions in the NGC 2023 region (marked in Fig. 1) with spectra of [C ii], [O i] at 63 \(\mu\)m, CO(11-10) and CO(7-6). As indicated, the spectra are scaled by arbitrary factors for better visibility. The temperature scale is in \(T_{\rm mb}\) (K). The CO(7-6) and CO(11-10) are from Sandell et al. (2015). The CO(11-10) spectra are at the native resolution of 23″; all other spectra are at a common resolution of 15″.

Figure 7: The smooth curve (green) shows the fit to the [O i] 63 spectrum (red) obtained by attenuating the scaled (by a factor of 2, corresponding to the typical intensity ratio between the two [O i] lines in temperature units) [O i] 145 \(\mu\)m spectrum (filled grey histogram) by absorption due to foreground material.

For the estimated column densities, based on non-LTE calculations using RADEX (Van der Tak et al., 2007), the [O i] 145 line is optically thin over a large range of temperatures (20-300 K) and volume densities (10\({}^{4}\)-10\({}^{7}\) cm\({}^{-3}\)). This is also consistent with our observations. Since the \(T_{\rm mb}\) ratio of the two [O i] lines depends on the physical conditions, the derived values of the peak optical depth are subject to the ratio assumed in this work. However, the fitted central velocity and line widths of the absorption profile are fairly robust against the assumed scale factor. Furthermore, in the estimate of \(N\)(O) we have assumed that all O atoms in the absorbing layer are in the ground \({}^{3}\)P\({}_{2}\) level, which is reasonable because there is no hint of absorption in [O i] 145. Since the [O i] 63 \(\mu\)m line is optically thick, it cannot be used to estimate the column density of oxygen atoms in the background layer. We thus use the integrated intensities of the [O i] 145 \(\mu\)m spectral lines to estimate \(N\)(O). For this we need an estimate of the temperature and density of the emitting gas. Since the [O i] 145 emission seems to follow the CO(11-10) emission quite closely, we can use the temperature and density estimated for CO(11-10), for which Sandell et al. (2015) derived a kinetic temperature of 120 K and densities of 2 \(\times\) 10\({}^{5}\)-1 \(\times\) 10\({}^{6}\) cm\({}^{-3}\). Table 2 presents the column density of atomic oxygen at the selected positions that is required to produce the observed [O i] 145 emission (Table 1), estimated from non-LTE calculations using RADEX (Van der Tak et al., 2007) for these kinetic temperatures and densities. Figure A1 shows the intensity of the [O i] 145 line as a function of \(N\)(O) for \(T_{\rm k}\) = 120 K and \(n\)(H\({}_{2}\)) = (0.2-1) \(\times\) 10\({}^{6}\) cm\({}^{-3}\).
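As a compact numerical illustration of this procedure, the sketch below attenuates a scaled Gaussian stand-in for the [O i] 145 profile by a Gaussian foreground opacity, fits for \((\tau_{0},v_{0},\Delta v)\), and converts the result to \(N\)(O) via Eq. (1). It is our own toy implementation under the stated assumptions (background = 2 \(\times\) the 145 \(\mu\)m profile in temperature units); the input spectrum and its parameters are synthetic, not the SOFIA data.

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM_CONST = 4 * np.log(2)

def gauss(v, T0, v0, dv):
    """Gaussian line profile with peak T0 (K), center v0 and FWHM dv (km/s)."""
    return T0 * np.exp(-FWHM_CONST * (v - v0) ** 2 / dv ** 2)

def two_slab(v, tau0, v_a, dv_a, T145, v145, dv145):
    """Background (2 x the [O I] 145 Gaussian, in T_mb units) attenuated by
    a Gaussian foreground opacity profile, as in the two-slab toy model."""
    tau = tau0 * np.exp(-FWHM_CONST * (v - v_a) ** 2 / dv_a ** 2)
    return 2.0 * gauss(v, T145, v145, dv145) * np.exp(-tau)

# Hypothetical background parameters from a fit to the 145 um spectrum.
T145, v145, dv145 = 12.0, 10.4, 2.2

# Synthetic "observed" 63 um spectrum with noise.
v = np.linspace(4.0, 16.0, 600)
rng = np.random.default_rng(0)
obs63 = (two_slab(v, 2.5, 10.5, 2.0, T145, v145, dv145)
         + 0.2 * rng.standard_normal(v.size))

# Fit only the three foreground parameters, holding the background fixed.
fg = lambda v, tau0, v_a, dv_a: two_slab(v, tau0, v_a, dv_a, T145, v145, dv145)
(tau0, v_a, dv_a), _ = curve_fit(fg, v, obs63, p0=[1.0, 10.0, 2.0])

# Eq. (1): N(O) = 2e17 * tau0 * dv (cm^-2), all O assumed in the 3P2 level.
N_O = 2e17 * tau0 * dv_a
N_H2 = N_O / 3.2e-4 / 2.0  # O abundance 3.2e-4; H assumed mostly molecular
print(f"tau0 = {tau0:.2f}, v0 = {v_a:.2f} km/s, dv = {dv_a:.2f} km/s")
print(f"N(O) = {N_O:.2e} cm^-2, N(H2) = {N_H2:.2e} cm^-2")
```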
\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline Offset & Tracer & \(\int T_{\rm mb}dv\) & \(V_{\rm LSR}\) & \(\Delta V\) & \(\int T_{\rm mb}dv\) & \(V_{\rm LSR}\) & \(\Delta V\) \\ (\({}^{\prime\prime}\),\({}^{\prime\prime}\)) & & (K km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (K km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) \\ \hline (0.0,0.0) & [C ii]\({}^{a}\) & 113.7\(\pm\)1.2 & 10.24\(\pm\)0.02 & 3.22\(\pm\)0.04 & & & \\ & \({}^{13}\)CO(3–2) & 32.0\(\pm\)0.4 & 10.24\(\pm\)0.01 & 1.61\(\pm\)0.02 & & & \\ & [O i] 145 & 27.2\(\pm\)1.2 & 10.48\(\pm\)0.04 & 2.16\(\pm\)0.10 & & & \\ & CO(6–5) & 132.0\(\pm\)0.5 & 10.25\(\pm\)0.01 & 2.38\(\pm\)0.01 & & & \\ & CO(7–6) & 85.0\(\pm\)0.8 & 10.24\(\pm\)0.01 & 2.02\(\pm\)0.02 & & & \\ & CO(11–10) & 4.4\(\pm\)0.2 & 10.51\(\pm\)0.05 & 1.91\(\pm\)0.13 & & & \\ \hline (27,18) & [C ii]\({}^{b}\) & 159.7\(\pm\)3.9 & 9.10\(\pm\)0.02 & 2.02\(\pm\)0.04 & 51.5\(\pm\)4.1 & 11.19\(\pm\)0.06 & 1.97\(\pm\)0.12 \\ & [O i] 145 & 23.9\(\pm\)0.5 & 10.11\(\pm\)0.02 & 2.35\(\pm\)0.04 & & & \\ & [O i] 145 & 43.2\(\pm\)1.3 & 9.89\(\pm\)0.04 & 2.50\(\pm\)0.09 & & & \\ & CO(6–5) & 128.6\(\pm\)0.3 & 10.05\(\pm\)0.01 & 2.77\(\pm\)0.01 & & & \\ & CO(7–6) & 80.3\(\pm\)1.0 & 10.07\(\pm\)0.01 & 2.51\(\pm\)0.03 & & & \\ & CO(11–10) & 7.0\(\pm\)0.4 & 10.63\(\pm\)0.06 & 2.10\(\pm\)0.15 & & & \\ \hline (40,-40) & [C ii] & 72.8\(\pm\)0.0 & 10.73\(\pm\)0.01 & 2.80\(\pm\)0.03 & 79.0\(\pm\)0.5 & 8.73\(\pm\)0.01 & 3.58\(\pm\)0.02 \\ & \({}^{13}\)CO(3–2) & 53.5\(\pm\)0.4 & 10.06\(\pm\)0.01 & 1.44\(\pm\)0.01 & & & \\ & [O i] 145 & 48.2\(\pm\)1.6 & 10.60\(\pm\)0.04 & 2.46\(\pm\)0.09 & 5.6\(\pm\)1.4 & 7.71\(\pm\)0.43 & 3.00\(\pm\)0.00\({}^{c}\) \\ & CO(6–5) & 165.2\(\pm\)0.3 & 10.30\(\pm\)0.01 & 2.13\(\pm\)0.01 & & & \\ & CO(7–6) & 116.2\(\pm\)0.7 & 10.24\(\pm\)0.01 & 1.95\(\pm\)0.01 & & & \\ & CO(11–10) & 28.7\(\pm\)0.3 & 10.29\(\pm\)0.81 & 1.61\(\pm\)0.02 & & & \\ \hline (60,-60) & [C ii] & 39.6\(\pm\)1.1 & 10.08\(\pm\)0.04 & 2.20\(\pm\)0.00\({}^{c}\) & 15.2\(\pm\)1.4 & 7.65\(\pm\)0.09 & 2.40\(\pm\)0.24 \\ & \({}^{13}\)CO(3–2) & 47.7\(\pm\)0.4 & 10.22\(\pm\)0.01 & 1.09\(\pm\)0.01 & & & \\ & [O i] 145 & 10.6\(\pm\)0.9 & 10.41\(\pm\)0.05 & 1.13\(\pm\)0.12 & 5.8\(\pm\)2.8 & 8.87\(\pm\)1.04 & 3.00\(\pm\)0.00\({}^{c}\) \\ & CO(6–5) & 133.9\(\pm\)0.3 & 10.37\(\pm\)0.01 & 1.58\(\pm\)0.01 & & & \\ & CO(7–6) & 102.8\(\pm\)0.7 & 10.35\(\pm\)0.01 & 1.47\(\pm\)0.01 & & & \\ & CO(11–10) & 18.1\(\pm\)0.9 & 10.49\(\pm\)0.03 & 1.13\(\pm\)0.07 & & & \\ \hline (-2,-78) & [C ii] & 126.5\(\pm\)1.1 & 10.17\(\pm\)0.01 & 2.87\(\pm\)0.03 & & & \\ & \({}^{13}\)CO(3–2) & 55.5\(\pm\)0.5 & 10.11\(\pm\)0.01 & 1.98\(\pm\)0.02 & & & \\ & [O i] 145 & 50.2\(\pm\)1.1 & 10.77\(\pm\)0.02 & 1.64\(\pm\)0.04 & & & \\ & CO(6–5) & 87.0\(\pm\)0.3 & 10.81\(\pm\)0.01 & 1.56\(\pm\)0.01 & & & \\ & CO(7–6) & 68.6\(\pm\)0.8 & 10.68\(\pm\)0.01 & 1.51\(\pm\)0.02 & & & \\ & CO(11–10) & 14.4\(\pm\)1.2 & 10.70\(\pm\)0.07 & 1.49\(\pm\)0.15 & & & \\ \hline (23,-60) & [C ii] & 112.4\(\pm\)2.7 & 10.27\(\pm\)0.03 & 2.49\(\pm\)0.09 & 50.9\(\pm\)1.8 & 8.19\(\pm\)0.05 & 2.42\(\pm\)0.17 \\ & \({}^{13}\)CO(3–2) & 49.3 & & & & & \\ \hline \hline \end{tabular} \end{table}

### The morphology of the reflection nebula and the PDR illuminated by HD 37903
Sandell et al. (2015) proposed that the PDR illuminated by HD 37903 has formed an expanding, thin, hot shell, which is roughly spheroidal: the south-eastern part is almost spherical because the expansion is slowed down by the dense surrounding molecular cloud, while the shell is much more extended to the north-west. They interpreted the double-peaked [C ii] spectra seen on the north-western side of the star as originating from the front and back sides of the PDR shell. They argued that the reason for not seeing clearly split double-peaked spectra on the south-eastern side was that the expansion was slowed down by the dense molecular cloud. While their model appears reasonable, we now know that it has some serious flaws. Even though we do see double-peaked [C ii] spectra north-west of HD 37903 (see e.g. Fig. 5), [O i] 145 is single-peaked and centered between the two [C ii] peaks. This confirms that what we see is self-absorption in [C ii], not two velocity components. The [C ii] lines are quite broad, so it is probably true that they are broadened by emission from both the front and the back side of the PDR, but they cannot be separated into two distinct velocity components. The foreground extinction is definitely lower in the north and north-west. We do see some blue-shifted [C ii] wings due to photo-evaporation flows. This is consistent with the cloud emission being more red-shifted in both \({}^{13}\)CO(3–2) and [C ii], indicating that it is behind the PDR. It appears more likely that the PDR is fan-shaped rather than 'egg-shaped' and that it is most likely open in the north and north-west, allowing FUV photons to escape and hence creating a large reflection nebula dominating the emission north and north-west of HD 37903.

## 4 Discussion

Analysis of the observed [O i] 63 and [O i] 145 spectra suggests that the strong absorption features seen in the [O i] 63 profiles are caused by colder atomic oxygen lying between the observer and the warm PDR emission excited by HD 37903. The [C ii] spectra are also affected by self-absorption, but it is far less severe and barely noticeable over most of the reflection nebula. At the peak (27'', 18'') of the [C ii] emission (Fig. 6), the self-absorption shifts the [C ii] spectra toward blue-shifted velocities compared to [O i] 145 and the high-\(J\) CO lines, which are unaffected by self-absorption. Even the [C ii] spectrum toward HD 37903 appears to be somewhat affected by self-absorption. The two-slab toy model, which we used in Section 3.3 to model the observed [O i] 63 absorption features, shows that the column density of atomic oxygen in the foreground absorbing gas is quite substantial, \(\sim 10^{18}\) cm\({}^{-2}\). This is because most of the oxygen remains atomic well into the cloud, i.e., from the hot PDR layer up to about an A\({}_{V}\) of 10, as already shown by Tielens & Hollenbach (1985), while most of the carbon becomes molecular, i.e., gets tied up in CO. The contribution to the oxygen column density from the surface of the molecular cloud, which is only illuminated by the general interstellar radiation field, a factor of \(10^{4}\) or \(10^{5}\) weaker than the radiation field in the PDR illuminated by HD 37903, is negligible. The estimated [O i] column densities depend to some extent on the "model" for the background emission, which has been constructed using the [O i] 145 spectra, but as we can see in Table 2 the computed column densities depend only weakly on the assumed gas densities.
The [O i] 63 spectral profiles seen in NGC 2023 are similar to those observed in other strongly self-absorbed regions like S1 in Rho Oph (Mookerjea et al., 2021) and toward W3 (Goldsmith et al., 2021), where the absorption was estimated to be due to colder foreground gas with \(N\)(O) of 3\(\times 10^{18}\) and 2-7\(\times 10^{18}\) cm\({}^{-2}\), respectively. The method for estimating the background for the S1 PDR in Rho Oph was similar to what we used in this paper, while for W3, which was not observed in [O i] 145, the background was instead estimated by fitting the [O i] 63 spectra at positions which showed almost no evidence for absorption. In the [O i] 63 map of NGC 2023 there are only a few positions which show a single emission peak without any trace of absorption; thus the approach of using the measured [O i] 145 emission is more robust. Since we detect the [\({}^{13}\)C ii] \(F\) = 2-1 line in essentially the entire area mapped in [C ii], it is possible to check whether the [C ii] emission is optically thick, although the [C ii] map does not go deep enough to measure the [C ii] to [\({}^{13}\)C ii] ratio at individual positions. We have therefore divided up the map into smaller regions of about 1 sq. arcmin, each containing about 50-70 spectra. When we did this, we found that the ratio of [C ii] to [\({}^{13}\)C ii] intensities is, within the errors, \(\sim\) 30-40 after correcting for the relative intensity of the \(F\) = 2-1 line, 0.625. Considering the isotope ratio \({}^{12}\)C/\({}^{13}\)C of 70 for the Orion region, this suggests a [C ii] optical depth of \(\sim\) 1.5-2, consistent with the value of 2 derived by Sandell et al. (2015). We obtain an estimate of the minimum kinetic temperature \(T_{\rm kin}\)(C\({}^{+}\)) from the Planck-corrected peak of the [C ii] spectrum, assuming a beam filling factor of unity.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline Position & \(\tau_{0}\) & \(v_{\rm LSR}\) & \(\Delta v\) & \(N_{\rm abs}\)(O) & \(N_{\rm em}\)(O)\({}^{a}\) & \(T_{\rm kin}^{b}\)(C\({}^{+}\)) & \(N\)(C\({}^{+}\)) & \(N\)(H\({}_{2}\))\({}^{c}\) \\ & & km s\({}^{-1}\) & km s\({}^{-1}\) & (\(10^{18}\) cm\({}^{-2}\)) & (\(10^{18}\) cm\({}^{-2}\)) & K & (\(10^{18}\) cm\({}^{-2}\)) & (\(10^{21}\) cm\({}^{-2}\)) \\ \hline \hline (0, 0) & 2.2 & 10.4 & 2.0 & 0.88 & 3.4 (2.8) & 74 & 3.4 & 7.8 \\ (27, 18) & 2.2 & 10.3 & 2.2 & 0.97 & 6.6 (5.5) & 107 & 3.6 & 9.1 \\ (40, -40) & 2.4 & 10.4 & 2.0 & 0.96 & 7.3 (6.0) & 80 & 3.8 & 22.9 \\ (60, -60) & 2.9 & 10.4 & 1.8 & 1.0 & 2.0 (1.6) & 60 & 2.4 & 23.9 \\ (-2, -78) & 3.3 & 10.3 & 1.9 & 1.3 & 7.0 (5.7) & 84 & 3.2 & 23.0 \\ (23, -60) & 3.5 & 10.1 & 1.9 & 1.3 & 9.1 (7.2) & 81 & 3.8 & 29.0 \\ \hline \hline \end{tabular} \({}^{a}\) Estimated from the integrated [O i] 145 intensity (Table 1) assuming \(T_{\rm kin}\) = 120 K and \(n\)(H\({}_{2}\)) = 2\(\times 10^{5}\) and \(10^{6}\) cm\({}^{-3}\). The number in brackets corresponds to \(N\)(O) for \(n\)(H\({}_{2}\)) = \(10^{6}\) cm\({}^{-3}\). \({}^{b}\) Estimated from the Planck-corrected peak of the [C ii] spectrum as described in Sec. 4. \({}^{c}\) Estimated from pixel-by-pixel grey-body fitting of the 160, 250, 350 and 500 \(\mu\)m dust continuum emission using _hires_, an improved algorithm for the derivation of high-resolution surface densities from multiwavelength far-infrared Herschel images (Men’shchikov, 2021). \end{table} Table 2: Peak optical depth, corresponding velocity, and FWHM of the foreground absorbing gas derived from modeling the [O i] 63 \(\mu\)m spectra with a two-slab model.
Column density of oxygen in the foreground cloud (\(N_{\rm abs}\)) is estimated using equation (1), and that in the background PDR (\(N_{\rm em}\)) is estimated using RADEX (Fig. A1).

The lower limit of \(T_{\rm kin}\)(C\({}^{+}\)) thus estimated over the entire map ranges between 60-92 K. Assuming an optical depth of 2 and using \(T_{\rm kin}\)(C\({}^{+}\)) together with the integrated [C ii] intensity, we estimate \(N\)(C\({}^{+}\)) using Equation (26) from Goldsmith et al. (2012), modified for optically thick emission as follows: \[N\left(\mathrm{C}^{+}\right)=2.91\times 10^{15}\left[1+0.5\,\mathrm{e}^{91.25/T_{\rm kin}}\left(1+\frac{A_{\rm ul}}{C_{\rm ul}}\right)\right]\frac{\tau}{1-e^{-\tau}}\int T_{\rm mb}\,\mathrm{d}v \tag{2}\] where \(A_{\rm ul}=2.3\times 10^{-6}\,\mathrm{s}^{-1}\), \(T_{\rm kin}\) is the gas kinetic temperature, the collision rate is \(C_{\rm ul}=R_{\rm ul}n\), with \(R_{\rm ul}\) being the collision rate coefficient with H\({}_{2}\) or H\({}^{0}\), which depends on \(T_{\rm kin}\), and \(n\) is the volume density of H. Since the critical density of the [C ii] transition is \(n_{\rm cr}\) = 3000 cm\({}^{-3}\), and since most of the detected [C ii] is likely to arise at such densities, with some emission arising from clumps with densities exceeding 10\({}^{5}\) cm\({}^{-3}\), we assume a density of 10\({}^{4}\) cm\({}^{-3}\) to estimate the \(N\)(C\({}^{+}\)) column density distribution of the region. For this calculation we use the kinetic temperatures of C\({}^{+}\) estimated as above, a density of \(n\) = 10\({}^{4}\) cm\({}^{-3}\), and excitation from C\({}^{+}\)-H\({}_{2}\) collisions, with \(R_{\rm ul}=3.8\times 10^{-10}\) cm\({}^{3}\) s\({}^{-1}\). The values of \(N\)(C\({}^{+}\)) estimated in the region span a small range between (1.8-4.4)\(\times 10^{18}\) cm\({}^{-2}\). We note that, barring a few exceptions (Okada et al., 2015; Mookerjea et al., 2021), estimates of \(N\)(C\({}^{+}\)) often assume a kinetic temperature of 100-120 K, in contrast to using an estimate from the observed [C ii] peak temperatures as we have done here. From equation (2) we find that for a temperature of 60 K the estimated \(N\)(C\({}^{+}\)) will be approximately twice the value that would be estimated for \(T_{\rm kin}\) = 120 K. As seen at the selected positions, throughout the entire region the ratio of the column densities of C\({}^{+}\) and O atoms (seen in emission) is, within the uncertainties, comparable to the solar [O]/[C] abundance ratio of 3.5 (Table 2). This is also consistent with the expected structure of the PDR, in which the [O i] 145 emission arises from a dense hot region that also emits in [C ii], although [C ii] emission additionally arises from the more diffuse and colder PDR gas. Based on the far-infrared dust continuum maps at 160 to 500 \(\mu\)m, we estimate the molecular hydrogen column density to be around 7\(\times 10^{21}\) cm\({}^{-2}\), while to the south, beyond the ridge and into the molecular cloud, the values are typically \(\sim 2\times 10^{22}\) cm\({}^{-2}\). The estimated H\({}_{2}\) and C\({}^{+}\) column densities at the selected positions are consistent with the solar C/H abundance ratio of 3\(\times 10^{-4}\) (Table 2).
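As a numerical illustration of the two estimates above, the sketch below (Python/SciPy; the helper names are illustrative, while the constants follow equation (2) and the text) first solves for the [C ii] optical depth implied by an observed [C ii]/[\({}^{13}\)C ii] intensity ratio, and then evaluates equation (2):

```python
import numpy as np
from scipy.optimize import brentq

def cii_tau(obs_ratio, isotope_ratio=70.0):
    """Solve isotope_ratio * (1 - exp(-tau))/tau = obs_ratio for tau:
    the optically thick [CII] line is suppressed by (1 - e^-tau)/tau
    relative to the optically thin [13CII] line."""
    f = lambda tau: isotope_ratio * (1.0 - np.exp(-tau)) / tau - obs_ratio
    return brentq(f, 1e-6, 50.0)

def n_cplus(int_tmb, t_kin, n_h2=1.0e4, tau=2.0,
            a_ul=2.3e-6, r_ul=3.8e-10):
    """Eq. (2): N(C+) in cm^-2 from the integrated [CII] intensity
    (K km/s), kinetic temperature (K), collider density (cm^-3),
    and optical depth."""
    c_ul = r_ul * n_h2                               # collision rate (s^-1)
    level = 1.0 + 0.5 * np.exp(91.25 / t_kin) * (1.0 + a_ul / c_ul)
    return 2.91e15 * level * tau / (1.0 - np.exp(-tau)) * int_tmb

print(cii_tau(35.0))               # ~1.6 for a [CII]/[13CII] ratio of 35
print(n_cplus(113.7, t_kin=74.0))  # position (0,0): a few 1e18 cm^-2
```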
## 5 Summary & Conclusions

We have studied the geometry of the NGC 2023 reflection nebula using spectrally resolved observations of the 63 and 145 \(\mu\)m transitions of [O i], along with the [C ii] 158 \(\mu\)m transition, which enable us to map the three-dimensional distribution of the photodissociated gas vis-a-vis the location of the illuminating source HD 37903. Combination of the velocity-resolved [O i] 145 spectra with the [C ii] spectra has allowed us to improve our understanding of the morphology of the region. We conclude that the PDR is fan-shaped rather than 'egg-shaped' and that it is most likely open in the north and north-west, allowing FUV photons to escape and thus creating a large reflection nebula dominating the emission north and north-west of HD 37903. The [C ii] emission detects PDR gas uniformly distributed with \(N\)(C\({}^{+}\)) \(\sim\) 0.9-2.2\(\times 10^{18}\) cm\({}^{-2}\), suggesting contributions from both diffuse and dense components, while the [O i] 145 emission detects dense PDR gas with \(N\)(O) between (2-10)\(\times 10^{18}\) cm\({}^{-2}\). Additionally, the self-absorbed profiles of the [O i] 63 line indicate the presence of cold, low-excitation atomic oxygen with \(N\)(O) \(\sim\) 0.8-1.5\(\times 10^{18}\) cm\({}^{-2}\) associated with the nebula and lying in the foreground between the illuminating source and the observer, which is most pronounced towards the southern and south-western parts of the nebula. The self-absorbed [O i] 63 profiles observed in NGC 2023, together with similar results in other Galactic PDRs (Mookerjea et al., 2021; Goldsmith et al., 2021), suggest the need to exercise caution when modelling Galactic and extragalactic [O i] 63 intensities, in particular if spectrally unresolved, without accompanying observations of the optically thin [O i] 145 line.

## Acknowledgements

BM acknowledges the support of the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4002. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the VizieR service was published in 2000, A&AS 143, 23.

## Data Availability

The new data observed with SOFIA that are presented here will be available from the SOFIA data archive ([https://irsa.ipac.caltech.edu/applications/sofia/](https://irsa.ipac.caltech.edu/applications/sofia/)).
2309.16597
Transfer Learning for Bayesian Optimization on Heterogeneous Search Spaces
Bayesian optimization (BO) is a popular black-box function optimization method, which makes sequential decisions based on a Bayesian model, typically a Gaussian process (GP), of the function. To ensure the quality of the model, transfer learning approaches have been developed to automatically design GP priors by learning from observations on "training" functions. These training functions are typically required to have the same domain as the "test" function (black-box function to be optimized). In this paper, we introduce MPHD, a model pre-training method on heterogeneous domains, which uses a neural net mapping from domain-specific contexts to specifications of hierarchical GPs. MPHD can be seamlessly integrated with BO to transfer knowledge across heterogeneous search spaces. Our theoretical and empirical results demonstrate the validity of MPHD and its superior performance on challenging black-box function optimization tasks.
Zhou Fan, Xinran Han, Zi Wang
2023-09-28T17:01:43Z
http://arxiv.org/abs/2309.16597v2
# Transfer Learning for Bayesian Optimization on Heterogeneous Search Spaces

###### Abstract

Bayesian optimization (BO) is a popular black-box function optimization method, which makes sequential decisions based on a Bayesian model, typically a Gaussian process (GP), of the function. To ensure the quality of the model, transfer learning approaches have been developed to automatically design GP priors by learning from observations on "training" functions. These training functions are typically required to have the same domain as the "test" function (black-box function to be optimized). In this paper, we introduce MPHD, a _model pre-training_ method on _heterogeneous domains_, which uses a neural net mapping from domain-specific contexts to specifications of hierarchical GPs. MPHD can be seamlessly integrated with BO to transfer knowledge across heterogeneous search spaces. Our theoretical and empirical results demonstrate the validity of MPHD and its superior performance on challenging black-box function optimization tasks.

## 1 Introduction

Many real-world applications require finding the best hyperparameter values by evaluating a series of configurations of those hyperparameters. Some examples include tuning machine learning (ML) models (Snoek et al., 2012; Turner et al., 2021), learning robot control strategies (Wang et al., 2021), synthesizing functional chemicals (Shields et al., 2021), and discovering new materials (Frazier and Wang, 2015). For these problems, there exists an underlying black-box function that scores the utility of hyperparameters. One popular way of formulating such problems is Bayesian optimization (BO): optimizing an unknown function by reasoning about Bayesian beliefs about this function. In BO, we often use Gaussian processes (GPs) as Bayesian beliefs for unknown functions. Given a GP prior and observations on the function, we can obtain a GP posterior and use its uncertainty predictions to make decisions on which datapoints to acquire. For example, one popular strategy is to greedily evaluate the inputs that achieve the highest upper confidence bounds on function values (Srinivas et al., 2010). A prerequisite of BO is to specify a GP prior, which can be difficult to do in practice. To address this issue, much progress has been made to learn the GP prior using transfer-learning-based approaches (Swersky et al., 2013; Yogatama and Mann, 2014; Wang et al., 2018; Perrone et al., 2018; Volpp et al., 2019; Wistuba and Grabocka, 2021; Wang et al., 2022). These approaches typically assume that we have data on a set of "training" functions, and the goal is to generalize a learned GP model or a learned strategy to a "test" function, which has the _same domain_ as the training functions. While these methods have been shown to perform well on a variety of tasks, they cannot be easily used to generalize to test functions that do not share the same domain as the training functions. In practice, the data available for transfer learning might not have the ideal setup in which the domains of all functions are well-aligned. For example, we might have collected many datapoints by experimenting with several robot skills that have different (but potentially overlapping) sets of control parameters (Wang et al., 2021). Or, we might have ML model tuning data from a commercial BO tool (Golovin et al., 2017; Balandat et al., 2020), where different people might tune different hyperparameters and use different names for the same type of hyperparameters.
With current methods, it can be difficult to transfer knowledge from one task to another in these cases. In this paper, we introduce Model Pre-training on Heterogeneous Domains (MPHD), a new transfer learning method for BO on heterogeneous search spaces. In the pre-training stage, MPHD learns a model with a mapping from domain-specific contexts to specifications of hierarchical GPs. This allows transferring knowledge from a set of training functions with different domains to a new test function to be optimized on an unseen search space. Using the pre-trained model, MPHD can generate a customized hierarchical GP as the prior for the test function, and this hierarchical GP can then be combined with an existing acquisition strategy to perform BO. An illustration can be found in Fig. 6. Through theoretical and empirical case studies (§3), we show that MPHD is asymptotically consistent, meaning that it converges to the ground truth solution as the number of training functions increases. We also show that the hierarchical GPs generated by MPHD can accurately capture test functions with new domains. To verify the usefulness of MPHD for BO (§4), we conducted extensive experiments on real-world BO transfer learning problems with heterogeneous search spaces. We tested benchmarks including HPO-B (Pineda-Arango et al., 2021) and PD1 (Wang et al., 2022), which involve 17 search spaces in total. Our results show significant improvements made by MPHD in sample efficiency for BO on functions with unseen search spaces. In this paper, we make three **contributions**: (1) We identify a practical problem in BO and propose a new problem formulation: the transfer learning problem for functions with different domains. (2) We propose a new method, MPHD, to solve this problem. (3) We show the effectiveness of MPHD both theoretically and experimentally, and prove the consistency of MPHD, building on our new theoretical results for constructing sufficient statistics of training functions. To the best of our knowledge, MPHD is the first GP-based framework that can be used to transfer knowledge for BO on heterogeneous search spaces.

Related work. Researchers have developed methods for transferring knowledge between BO tasks. For example, Swersky et al. (2013) and Yogatama & Mann (2014) proposed to learn a multi-task GP and use the similarity between tasks for generalization. Feurer et al. (2018) aimed to learn warm-starting strategies from previous BO tasks using an ensemble. Recently, Wang et al. (2018); Perrone et al. (2018); Wistuba & Grabocka (2021); Wang et al. (2022) found that learning a GP prior from evaluations on training functions can be an effective approach to transfer knowledge for BO if all the functions share the same domain. Another line of related work is end-to-end black-box function optimization. Chen et al. (2017) trained a recurrent neural network (RNN) on a large number of BO trajectories. The RNN can then be used to generate the next point to evaluate for a new BO problem. Chen et al. (2022) introduced OptFormer, a transformer-based transfer learning method for hyperparameter tuning on universal search spaces. OptFormer trains a giant transformer model on millions of BO trajectories to learn to propose hyperparameters in an end-to-end fashion. Note that Feurer et al. (2018) and Chen et al. (2017; 2022) all require previous optimization runs, meaning that they cannot make use of raw evaluations on training functions without simulating BO trajectories.
Our approach, MPHD, focuses on transferring knowledge about functions by pre-training a surrogate model on raw evaluations, and does not require data in the form of BO trajectories. MPHD can be naturally combined with the other components of BO methods, e.g., acquisition functions, input and output warping, cross-validation, etc., to complete a practical BO software package. For example, MPHD can be directly incorporated in BoTorch (Balandat et al., 2020) and Vizier (Golovin et al., 2017) by replacing their default hierarchical GP model.

## 2 MPHD: Model Pre-training in Heterogeneous Domains

We present MPHD, a model pre-training framework for functions with heterogeneous domains. Given data collected on training functions, MPHD aims to learn a distribution over functions to model an unseen test function. The domains of all training functions and the test function can have different numbers of dimensions, and each input dimension can have a different meaning.

### Problem formulation

We first define terms for datasets. A _super-dataset_ is a collection of data points from all training functions with different domains, along with the contexts associated with each domain. A _dataset_ is a collection of data points from training functions with the same domain. A _sub-dataset_ is a collection of data points from the same training function. Fig. 2 illustrates a super-dataset example. Formally, we use \(D=\{(D_{i},S_{i})\}_{i=1}^{N}\) to denote a super-dataset, where \(D_{i}\) is a dataset and \(S_{i}\) is a domain-specific context. Each dataset \(D_{i}\) consists of observations on a collection of training functions \(F_{i}=\{f_{ij}:\mathcal{X}^{(i)}\rightarrow\mathbb{R}\}_{j=1}^{M_{i}}\), where the functions in \(F_{i}\) share the same compact domain \(\mathcal{X}^{(i)}\subset\mathbb{R}^{d_{i}}\). The information about domain \(\mathcal{X}^{(i)}\) is encoded into the context \(S_{i}\). Let dataset \(D_{i}=\{D_{ij}\}_{j=1}^{M_{i}}\), where each sub-dataset \(D_{ij}=\{(x_{ij}^{(l)},y_{ij}^{(l)})\}_{l=1}^{L_{ij}}\), and \(L_{ij}\) is the number of observations on function \(f_{ij}\) perturbed by _i.i.d._ additive Gaussian noise, i.e., \(y_{ij}^{(l)}\sim\mathcal{N}(f_{ij}(x_{ij}^{(l)}),\sigma_{i}^{2})\). The noise variance \(\sigma_{i}^{2}\) is specific to each domain \(\mathcal{X}^{(i)}\). We assume that all functions in \(F_{i}\) with domain \(\mathcal{X}^{(i)}\) are _i.i.d._ function samples from the same Gaussian process, \(\mathcal{GP}_{i}=\mathcal{GP}(\mu_{i},k_{i};\theta_{i})\), where \(\mu_{i}\) is a constant mean function, \(k_{i}\) is a stationary kernel, and \(\theta_{i}=[\theta_{ih}]_{h=1}^{H_{i}}\in\mathbb{R}^{H_{i}}\) are GP parameters1, including the parameters of the mean and kernel functions as well as the noise variance \(\sigma_{i}^{2}\). Each GP parameter \(\theta_{ih}\) is sampled independently from its prior distribution \(\Theta(\alpha_{ih})\), where \(\alpha_{ih}=\phi(s_{ih})\in\mathbb{R}^{d_{a}}\), \(s_{ih}\) is the context of GP parameter \(\theta_{ih}\), and \(\phi\) maps from contexts to hyperparameters (parameters of the priors). The domain-specific context \(S_{i}\) is composed of the contexts of all GP parameters, i.e., \(S_{i}=[s_{ih}]_{h=1}^{H_{i}}\). Fig. 1 illustrates the graphical model. Footnote 1: In the classic GP literature (Rasmussen & Williams, 2006), \(\theta_{i}\) are called hyperparameters of a GP. But from a functional perspective, a GP is a parameterized distribution over random functions, and so \(\theta_{i}\) can also be seen as parameters of a GP.
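To make these generative assumptions concrete, the following minimal sketch (Python/NumPy) samples a super-dataset along the lines of Fig. 1; the Gamma length-scale prior, the toy linear \(\phi\), and all names are illustrative assumptions rather than the paper's actual data-generating code:

```python
import numpy as np

def matern52(X, ls, sig2):
    """Anisotropic Matern-5/2 kernel with per-dimension length-scales."""
    d2 = (((X[:, None, :] - X[None, :, :]) / ls) ** 2).sum(-1)
    r = np.sqrt(d2)
    return sig2 * (1.0 + np.sqrt(5) * r + 5.0 * d2 / 3.0) * np.exp(-np.sqrt(5) * r)

def sample_super_dataset(rng, phi, dims, n_funcs=10, n_pts=50):
    """Sample {(D_i, S_i)}: one dataset per domain, n_funcs sub-datasets each."""
    super_dataset = []
    for d in dims:                            # one domain per entry
        S = {"dim": d}                        # domain-specific context
        a, b = phi(S)                         # prior hyperparameters from phi
        ls = rng.gamma(a, 1.0 / b, size=d)    # length-scales ~ Gamma(a, b)
        sig2, noise = 1.0, 1e-2               # fixed here for simplicity
        dataset = []
        for _ in range(n_funcs):              # i.i.d. functions from GP_i
            X = rng.uniform(0.0, 1.0, size=(n_pts, d))
            K = matern52(X, ls, sig2) + noise * np.eye(n_pts)
            y = rng.multivariate_normal(np.zeros(n_pts), K)
            dataset.append((X, y))            # one sub-dataset
        super_dataset.append((dataset, S))
    return super_dataset

rng = np.random.default_rng(0)
phi = lambda S: (2.0, 4.0 / S["dim"])         # toy context-dependent prior
data = sample_super_dataset(rng, phi, dims=[2, 3, 5])
```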
The goal of MPHD is to pre-train this probabilistic model so that, for any new domain, the model can generate the prior distributions over the GP parameters to construct a domain-specific hierarchical GP.

### Our method

As shown in Fig. 1, the model can be compactly described by the function \(\phi\). Thus, model pre-training in MPHD is equivalent to training the function \(\phi\) on the super-dataset \(D=\{(D_{i},S_{i})\}_{i=1}^{N}\), such that the model can generalize to test functions with new contexts. We define the pre-training objective to be the log marginal data likelihood as follows,

\[\mathcal{L}(\phi)=\sum_{i=1}^{N}\log p(D_{i}\mid\phi,S_{i})=\sum_{i=1}^{N}\log\int_{\theta_{i}}p(D_{i}\mid\theta_{i})p(\theta_{i}\mid\phi,S_{i})\,\mathrm{d}\theta_{i}\]
\[\approx\sum_{i=1}^{N}\log\sqrt{\frac{(2\pi)^{H_{i}}}{M_{i}^{H_{i}}\det\frac{\mathrm{d}^{2}}{\mathrm{d}\theta_{i}^{2}}\left(-\frac{1}{M_{i}}\log p(D_{i}\mid\theta_{i})\right)\Big{|}_{\theta_{i}=\hat{\theta}_{i}}}}\;p(\hat{\theta}_{i}\mid\phi,S_{i})\,p(D_{i}\mid\hat{\theta}_{i}) \tag{1}\]
\[\propto\sum_{i=1}^{N}\Big{(}\log p(D_{i}\mid\hat{\theta}_{i})+\log p(\hat{\theta}_{i}\mid\phi,S_{i})\Big{)}\propto\sum_{i=1}^{N}\log p(\hat{\theta}_{i}\mid\phi,S_{i}) \tag{2}\]

where \(\hat{\theta}_{i}=\arg\max_{\theta_{i}}p(D_{i}\mid\theta_{i})\) and \(\theta_{i}=[\theta_{ih}]_{h=1}^{H_{i}}\). The approximation in Eq. 1 uses Laplace's method (see derivations in §B). Eq. 2 removes terms irrelevant to optimizing \(\phi\). To summarize, model pre-training in MPHD has two steps:

Step 1: \(\forall i\in[N],\ \hat{\theta}_{i}\leftarrow\arg\max_{\theta_{i}}p(D_{i}\mid\theta_{i})\); Step 2: \(\hat{\phi}\leftarrow\arg\max_{\phi}\sum_{i=1}^{N}\log p(\hat{\theta}_{i}\mid\phi,S_{i})\).

Step 1 can be done using gradient-based optimization methods for each of the \(N\) likelihood functions,

\[\log p(D_{i}\mid\theta_{i})=\sum_{j=1}^{M_{i}}\log p(D_{ij}\mid\theta_{i})=-\sum_{j=1}^{M_{i}}\left(\frac{1}{2}\left(\mathbf{y}_{ij}^{(\theta_{i})}\right)^{\top}\left(K_{ij}^{(\theta_{i})}\right)^{-1}\mathbf{y}_{ij}^{(\theta_{i})}+\frac{1}{2}\log|K_{ij}^{(\theta_{i})}|+\frac{L_{ij}}{2}\log 2\pi\right), \tag{3}\]

where the vector \(\mathbf{y}_{ij}^{(\theta_{i})}=[y_{ij}^{(l)}-\mu_{i}(x_{ij}^{(l)};\theta_{i})]_{l=1}^{L_{ij}}\) and the matrix \(K_{ij}^{(\theta_{i})}=[k_{i}(x_{ij}^{(l)},x_{ij}^{(l^{\prime})};\theta_{i})]_{l=1,l^{\prime}=1}^{L_{ij}}\). Step 2 requires computing the approximated objective

\[\mathcal{L}(\phi)\approx\hat{\mathcal{L}}(\phi)=\sum_{i=1}^{N}\log p(\hat{\theta}_{i}\mid\phi,S_{i})=\sum_{i=1}^{N}\sum_{h=1}^{H_{i}}\log p_{\Theta}(\hat{\theta}_{ih}\mid\alpha_{ih}=\phi(s_{ih})),\]

which depends on the exact forms of the prior distributions \(\Theta(\alpha_{ih})\). Moreover, we need to specify the function space for \(\phi\) such that optimization is possible. In this work, we use a neural network to parameterize the function \(\phi\), and Step 2 can be done by optimizing over the weights of \(\phi\) with gradient-based methods.
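A minimal sketch of the two-step pre-training is given below (Python/SciPy). For brevity it assumes a zero mean, that \(\theta_{i}\) consists only of length-scales, and a Gamma prior whose rate depends linearly on the domain dimension; the helper names and optimizer choice are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def gp_nll(log_ls, dataset, kernel):
    """Negative of Eq. (3) for one dataset (zero mean for brevity);
    kernel(X, ls) is assumed to return the covariance incl. the noise term."""
    ls = np.exp(log_ls)                       # optimize in log space
    nll = 0.0
    for X, y in dataset:
        L = np.linalg.cholesky(kernel(X, ls))
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        nll += 0.5 * y @ alpha + np.log(np.diag(L)).sum() \
               + 0.5 * len(y) * np.log(2.0 * np.pi)
    return nll

def pretrain(super_dataset, kernel):
    # Step 1: per-domain MLE of the GP parameters theta_i.
    theta_hat, contexts = [], []
    for dataset, S in super_dataset:
        d = dataset[0][0].shape[1]
        res = minimize(gp_nll, np.zeros(d), args=(dataset, kernel))
        theta_hat.append(np.exp(res.x))
        contexts.append(S)
    # Step 2: fit phi by maximizing sum_i log p(theta_hat_i | phi, S_i);
    # here phi(S) = (a, b0 + b1 * dim) parameterizes a Gamma prior.
    def neg_log_prior(w):
        a, b0, b1 = np.exp(w)                 # keep hyperparameters positive
        return -sum(gamma.logpdf(t, a, scale=1.0 / (b0 + b1 * S["dim"])).sum()
                    for t, S in zip(theta_hat, contexts))
    return theta_hat, minimize(neg_log_prior, np.zeros(3))
```

In practice, Step 2 replaces the toy linear map above with the neural network \(\phi\) and is optimized over its weights with gradient-based methods, as described in the text.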
## 3 Case studies for validating MPHD

We validate MPHD via case studies. §3.1 presents theoretical analyses on convergence and consistency. §3.2 shows empirical results on synthetic data to compare pre-trained models with the ground truth.

### Theoretical analyses

For the theoretical analysis in this section, we assume zero mean and anisotropic Matérn kernels with a known smoothness term \(\nu\) (e.g. \(\nu=5/2\)), and the GP parameters \(\theta_{i}=[\theta_{ih}]_{h=1}^{H_{i}}\) to be learned only include length-scales. For simplicity, we also assume that the contexts \(s_{ih}\) are the same across domains \(\mathcal{X}^{(i)},i\in[N]\), and that the length-scale priors are normal or Gamma distributions. Under mild conditions, we show that (1) for each \(i\in[N]\), as \(M_{i}\) (the number of sub-datasets) increases, the estimated GP parameters \(\hat{\theta}_{i}\) for the respective domain \(\mathcal{X}^{(i)}\) converge to the ground truth parameters \(\theta_{i}^{*}\), and consequently (2) for each \(h\in[H_{i}]\), as \(N\) (the number of datasets) increases, the learned hyperparameters \(\hat{\alpha}_{ih}\) of the normal or Gamma priors converge to the ground truth values \(\alpha_{ih}^{*}\).

Background. The theoretical soundness of pre-training on heterogeneous domains relies on the quality of the estimated GP parameters from each domain. The asymptotic behavior of MLE for covariance parameters under a single GP has commonly been studied in two asymptotic frameworks, with fixed or increasing domains (Bevilacqua et al., 2019; Zhang and Zimmerman, 2005). Under the _fixed domain_ setting, observations are sampled in a bounded set and thus become increasingly dense with more samples. In the _increasing domain_ setting, observations are collected with a minimal spacing between data points, which makes the sampling domain unbounded as the number of observations increases; i.e., for a sub-dataset \(\{(x^{(l)},y^{(l)})\}_{l=1}^{L}\) with \(L\) observations, there exists a fixed \(\Delta>0\) such that \[\|x^{(l)}-x^{(l^{\prime})}\|\geq\Delta,\quad\forall l\neq l^{\prime},1\leq l,l^{\prime}\leq L. \tag{4}\] Mardia and Marshall (1984) and Stein (1999) showed that, given the observations from a single function sampled from a zero-mean GP, MLE estimates of covariance parameters are consistent and asymptotically normal under mild regularity conditions for increasing domains. Bachoc (2014) showed that while the MLE given finite observations may not be unique, it converges to a unique global minimizer with probability converging to one as \(L\) goes to infinity. Additionally, Bachoc et al. (2020) discuss extensions of these results to GP models with non-zero mean functions.

Our results. The critical part of our proof is to show that the setup of MPHD belongs to the increasing domain setting. The key property of the increasing domain setting is that there is vanishing dependence between observations that are far apart (Bachoc, 2014). Thus, a larger sample size is more informative of the covariance structure.

**Lemma 1**.: _For any \(i\in[N]\), \(\mathcal{GP}_{i}(\mu_{i},k_{i};\theta_{i})\) and its corresponding dataset \(D_{i}=\{D_{ij}\}_{j=1}^{M_{i}}\), there exists a pseudo sub-dataset \(\bar{D}\) observed at inputs \(x^{(1)},x^{(2)},\ldots,x^{(L)}\) on a function \(f^{\prime}\sim\mathcal{GP}_{i}\), such that \(\bar{D}\) satisfies the increasing domain assumption in Eq. 4, and \(\bar{D}\) is a sufficient statistic of \(D_{i}\) with \(p(D_{i}\mid\theta_{i})\equiv p(\bar{D}\mid\theta_{i})\)._

Lemma 1 highlights the important connection between the increasing domain setting and our setup with an increasing number of sub-datasets.
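To make the construction behind Lemma 1 concrete, here is a minimal sketch (Python/NumPy, for a 1-dimensional domain as in Fig. 3; the offset \(Q\) and the function name are illustrative) that rearranges the sub-datasets so that the minimal-spacing condition of Eq. 4 holds:

```python
import numpy as np

def pseudo_sub_dataset(dataset, Q=1e6):
    """Concatenate sub-datasets {(x_j, y_j)} on a 1-d domain into one
    pseudo sub-dataset: the j-th sub-dataset is translated into the
    slot starting at j*Q, so points from different sub-datasets are
    at least Q - diam(domain) apart (Eq. 4 holds for large Q)."""
    xs, ys = [], []
    for j, (x, y) in enumerate(dataset):
        xs.append(x - x.min() + j * Q)
        ys.append(y)
    return np.concatenate(xs), np.concatenate(ys)
```

Because the correlations of a stationary kernel vanish as \(Q\to\infty\), the likelihood of the concatenated observations under a single GP sample factorizes into the product of the per-sub-dataset likelihoods, recovering \(p(D_{i}\mid\theta_{i})\equiv p(\bar{D}\mid\theta_{i})\).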
Intuitively, this lemma shows that observations from multiple independently generated functions from a fixed GP can be viewed as being sampled from one function, with infinitely large intervals between the sub-datasets' observations. We illustrate this process on a 1-dimensional domain in Fig. 3. We prove Lemma 1 in §C. To show the asymptotic properties of MPHD, we make the following **assumptions**:

1. Dataset \(D_{i}=\{D_{ij}\}_{j=1}^{M_{i}}\) (\(i\in[N]\)) contains a finite number of observations in each of its sub-datasets. There exists a minimum spacing \(\delta>0\) such that \(\|x^{(l)}_{ij}-x^{(l^{\prime})}_{ij}\|\geq\delta\ (l\neq l^{\prime})\) for all sub-datasets \(D_{ij}\).
2. The ground truth GP parameters \(\theta^{*}_{i}\) belong to \(C\), the space of \(\theta_{i}\) used for estimating \(\hat{\theta}_{i}\) in Step 1.
3. For each \(i\in[N]\), there is sufficient information in the sampling locations \(x^{(1)},x^{(2)},\ldots,x^{(L)}\) of the pseudo sub-dataset \(\bar{D}\) (Lemma 1) to distinguish the covariance function \(k(\cdot,\cdot;\theta_{i})\) from \(k(\cdot,\cdot;\theta^{*}_{i})\) (Bachoc, 2014). That is, for any \(\theta_{i}\in C\), \[\liminf_{L\to\infty}\inf_{\|\theta_{i}-\theta_{i}^{*}\|\geq\epsilon}\frac{1}{L}\sum_{l\neq l^{\prime}}\left(k(x^{(l)},x^{(l^{\prime})};\theta_{i})-k(x^{(l)},x^{(l^{\prime})};\theta^{*}_{i})\right)^{2}>0.\qquad\text{(Asymptotic Identifiability)}\]

**Theorem 2**.: _Given assumptions (1)-(3), for a dataset \(D_{i}\) with \(M_{i}\) sub-datasets generated from the same mean and covariance function, as \(M_{i}\to\infty\), we have \(\hat{\theta}_{i}\overset{p}{\rightarrow}\theta_{i}^{*}\)._

**Theorem 3**.: _Given assumptions (1)-(3), as the numbers of datasets and sub-datasets \(N,M_{i}\to\infty,\forall i\), the MLE for the prior distribution of each GP parameter \(\Theta(\alpha_{ih}),h\in[H_{i}]\), is consistent; i.e., \(\hat{\alpha}_{ih}\overset{p}{\rightarrow}\alpha_{ih}^{*}\)._

The proofs of Theorems 2 and 3 can be found in §C. The above theorems complete the claims we made at the beginning of §3.1. Moreover, MPHD has the following asymptotic MLE behavior in Step 2 with \(n\) estimated samples of GP parameters \(\hat{\theta}_{i}\).

**Remark**.: _(Ye and Chen (2017) and Rice (2006)) (1) When the prior is assumed to be a Gamma distribution parameterized by shape \(a^{*}\) and rate \(b^{*}\), the MLE \((\hat{a},\hat{b})\) is consistent and asymptotically normal with distribution \(\mathcal{N}((a^{*},b^{*}),\frac{1}{n}\mathcal{I}(a^{*},b^{*})^{-1})\), where \(\mathcal{I}\) is the Fisher information matrix. (2) Similar results hold when the prior is a normal distribution parametrized by mean \(c^{*}\) and standard deviation \(d^{*}\), where the MLE satisfies \((\hat{c},\hat{d})\to(c^{*},d^{*})\)._

A key observation from our theoretical analyses is that, with sufficient observations in each dataset \(D_{i}\), increasing the number of sub-datasets effectively improves the estimation of the covariance parameters and thus of the parameters of the prior distribution. We show the empirical asymptotic behavior in §3.2.

### Empirical analysis with synthetic data

In this section, we present empirical results to further demonstrate the asymptotic properties of MPHD. For these empirical results, we assume constant mean and anisotropic Matérn kernels with a known smoothness term \(\nu\). The asymptotic properties of learning the length-scale parameter are included in this section, while the results for the other GP parameters can be found in §F. We generated two synthetic super-datasets following the generative process illustrated in Fig. 1 with two different settings, in both of which we use Gamma distributions as the prior for length-scales. **Synthetic Super-dataset (S)** is the smaller and simpler of the two synthetic super-datasets; it uses the same Gamma prior for all domains (i.e., the function \(\phi\) returns constant hyperparameters). It includes 20 datasets (each with its own domain) with 10 sub-datasets in each dataset. Each sub-dataset includes noisy observations at 300 random input locations in its respective domain. The dimension of each domain is randomly sampled between 2 and 5. **Synthetic Super-dataset (L)** is a larger and more complex super-dataset and is also the synthetic data used for the BO evaluations in §4. It has domain-specific Gamma priors whose hyperparameters linearly depend on the number of dimensions of each domain (i.e., the function \(\phi\) is a linear function of the domain dimension \(d_{i}\)). It includes 20 datasets (each with its own domain) with 20 sub-datasets in each dataset. Each sub-dataset includes noisy observations at 3000 random input locations in its respective domain. The dimension of each domain is randomly sampled between 2 and 14. We split each of the synthetic super-datasets into training data and test data: for Synthetic Super-dataset (S), we used 80% of the datasets as training datasets and the remaining 20% as test datasets; for Synthetic Super-dataset (L), we used 80% of the sub-datasets within each dataset as training sub-datasets and the remaining 20% as test sub-datasets. All experiments are repeated 5 times with different random seeds. More details on the setups can be found in §E.

#### 3.2.1 Length-scales with the same Gamma prior

For Synthetic Super-dataset (S), Fig. 4(a) shows \(\hat{a}\) of the pre-trained Gamma prior \(\Gamma(\hat{a},\hat{b})\) for length-scales, where \(\hat{a}\) is the shape parameter and \(\hat{b}\) is the rate parameter. As the number of training datasets increases, the variance of the estimated \(\hat{a}\) gradually decreases and the mean becomes closer to the ground-truth prior.

Figure 3: Illustration of how to construct a pseudo sub-dataset in Lemma 1 from observations on functions \(f_{1},f_{2},\cdots\). We rearrange the original sub-datasets in the domain such that datapoints from different sub-datasets have a distance of at least \(Q>0\) in the constructed pseudo sub-dataset.

Fig. 4(b) plots the PDF of both the pre-trained and ground-truth Gamma priors, showing that more training datasets improve the stability of pre-training. These results are consistent with our theoretical analyses in §3.1. In Fig. 4(c), we show the NLL of the test datasets (the negative of the objective \(\mathcal{L}(\phi)\) in §2.2, but applied to test data instead of training data) with an increasing number of training datasets. Both the mean and variance of the NLL drop as the number of training datasets increases, indicating no overfitting.

#### 3.2.2 Length-scales with domain-specific Gamma priors

For Synthetic Super-dataset (L) with domain-specific Gamma length-scale priors, we ran MPHD with two versions of the function \(\phi\). **Variants of MPHD:** (1) MPHD Standard, which uses a 2-hidden-layer neural net (NN) to represent the length-scale prior for any domain dimension, taking as input the context \(s_{ih}\) that corresponds to the length-scale of that domain dimension and generating the length-scale Gamma prior for that domain dimension as output.
On Synthetic Super-dataset (L), MPHD Standard uses a domain-specific context that specifies the number of dimensions, as all domain dimensions are continuous. For each of the other GP parameter types, such as the signal variance, MPHD Standard directly learns a shared prior distribution for all domains without an NN. (2) MPHD Non-NN HGP: this model is a simplified version that learns a shared length-scale Gamma prior for all search space dimensions, without using an NN-based length-scale prior, and is pre-trained with the same two-step approach. Essentially, the function \(\phi\) outputs a constant for every hyperparameter type, as in §3.2.1. Priors for the other GP parameter types, such as the signal variance, are the same as in MPHD Standard. For both versions of MPHD, the underlying GP uses a constant mean and anisotropic Matérn kernels, and the function \(\phi\) outputs the hyperparameters of Gamma priors. See more details of the NN and context setups in §E.

Fig. 5 shows the KL divergence between the ground-truth Gamma prior and the Gamma prior with the pre-trained function \(\phi\) as the number of training datasets increases. The KL divergence with respect to the ground-truth Gamma prior is averaged over all possible numbers of domain dimensions between 2 and 14. The KL divergence values of both versions of MPHD decrease as the number of training datasets increases, which shows that the pre-trained model gradually becomes closer to the ground truth. Moreover, MPHD with an NN \(\phi\) achieves lower KL divergence values than MPHD with a constant \(\phi\), showing the advantage of using an expressive function \(\phi\) in MPHD for complex problems.

Figure 4: For Synthetic Super-dataset (S) with a fixed one-dimensional length-scale prior, we plot (a) the estimated shape parameter \(\hat{a}\) of the Gamma distribution prior for the length-scale GP parameter, (b) the PDF of the shared Gamma prior for length-scales (§3.2.1), and (c) the NLLs of the test datasets on pre-trained priors w.r.t. the number of training datasets. We show the mean and violin plots over 5 random seeds. The pre-trained Gamma distributions with 16 training datasets are more stable than those with 2 training datasets and match well with the ground-truth prior.

Figure 5: For Synthetic Super-dataset (L) with domain-specific length-scale priors, we plot the average KL divergence of the pre-trained length-scale prior of the two variants of MPHD with respect to the ground-truth priors in the test datasets as the number of training datasets increases from 1 to 4. The average is taken over 5 random seeds.

## 4 MPHD for Bayesian optimization

The goal of BO is to optimize a black-box test function \(f\) with as few evaluations of \(f\) as possible. BO works by optimizing a series of acquisition functions to sequentially select inputs and observe their function values. In every iteration of BO, a surrogate model is constructed based on all the observations of function \(f\), and the acquisition function is defined over the predictions from the surrogate model. MPHD can generate domain-specific hierarchical GPs as surrogate models for BO on new search spaces. More precisely, to optimize a function \(f\) over its domain \(\mathcal{X}^{(f)}\) (i.e., the search space for BO\({}^{2}\)), we use the domain-specific context \(S_{f}=\left[s_{fh}\right]_{h=1}^{H_{f}}\) of domain \(\mathcal{X}^{(f)}\) to obtain the prior distributions \(p(\theta_{fh}\mid\alpha_{fh})\), where \(\alpha_{fh}=\phi(s_{fh}),\forall h\in[H_{f}]\). See Fig. 6 for an illustration of how MPHD generates the surrogate model.
Footnote 2: Without loss of generality, we use “search spaces” and “domains” interchangeably in this paper.

At each iteration of BO, we compute the MAP estimate for the GP parameters \(\theta_{f}=\left[\theta_{fh}\right]_{h=1}^{H_{f}}\): \[\hat{\theta}_{f}=\arg\max_{\theta_{f}}p(\theta_{f}\mid D_{f},S_{f},\phi=\hat{\phi})=\arg\max_{\theta_{f}}p(D_{f}\mid\theta_{f},S_{f},\phi=\hat{\phi})\prod_{h=1}^{H_{f}}p(\theta_{fh}\mid\alpha_{fh}=\hat{\phi}(s_{fh})), \tag{5}\] where \(D_{f}\) is the set of observations made on function \(f\). We can then use a GP parameterized by \(\hat{\theta}_{f}\), \(\mathcal{GP}(\mu_{f},k_{f};\hat{\theta}_{f})\), to compute the acquisition function. See the full algorithm in §D. In the rest of this section, we present the experimental results and analyses that verify the usefulness of MPHD for decision making in BO. §4.1 introduces the 3 types of datasets we experimented with. §4.2 lists the compared methods, including 2 variants of MPHD and 9 different baselines. §4.3 presents the results on transfer learning for Bayesian optimization.

Figure 6: Illustration of how the learned MPHD model is used for BO. Only the second step of the pre-training of MPHD is shown. In this example, domain A has 2 dimensions, so there are 2 length-scale parameters for it, while domain T has 3 dimensions and there are 3 corresponding length-scale parameters.
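A minimal sketch of one BO iteration with the pre-trained model is given below (Python/SciPy; `gp_nll`, `gp_posterior`, and `prior_logpdf` are assumed helpers for the GP marginal likelihood, the GP posterior mean and standard deviation, and the log density of the \(\phi\)-generated prior). It performs the MAP estimation of Eq. 5 and then maximizes the Probability of Improvement acquisition used in §4.3:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def map_gp_params(D_f, prior_logpdf, gp_nll, x0):
    """Eq. (5): minimize the GP NLL minus the log prior generated by phi."""
    obj = lambda log_t: gp_nll(log_t, D_f) - prior_logpdf(np.exp(log_t))
    return np.exp(minimize(obj, x0).x)

def prob_of_improvement(mu, sigma, target):
    """PI acquisition (Kushner, 1964) with an explicit target value."""
    return norm.cdf((mu - target) / np.maximum(sigma, 1e-12))

def bo_step(D_f, prior_logpdf, gp_nll, gp_posterior, candidates):
    X, y = D_f
    theta = map_gp_params(D_f, prior_logpdf, gp_nll, x0=np.zeros(X.shape[1]))
    mu, sigma = gp_posterior(candidates, X, y, theta)
    pi = prob_of_improvement(mu, sigma, target=y.max() + 0.1)
    return candidates[np.argmax(pi)]      # next hyperparameter to evaluate
```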
### Datasets

We used both synthetic data and real-world data in our experiments. The synthetic data used was the Synthetic Super-dataset (L) introduced in §3.2.2, where the ground truth GP parameters for all domains were sampled from their prior distributions. The real-world data were collected on hyperparameter tuning tasks for classic machine learning models (Pineda-Arango et al., 2021) and near state-of-the-art deep learning models (Wang et al., 2022). **HPO-B Super-dataset** (Pineda-Arango et al., 2021) is a large-scale multi-task benchmark for hyperparameter optimization. As a super-dataset, it consists of 16 different search spaces and more than 6 million evaluations in total. The dimensions of these search spaces vary from 2 to 18. **PD1 Dataset** (Wang et al., 2022) is a large hyperparameter tuning dataset collected by training expensive deep neural network models on popular image and text datasets, as well as a protein sequence dataset. As a dataset (instead of a super-dataset), it contains 24 sub-datasets in the same 4-dimensional search space. For HPO-B Super-dataset and PD1 Dataset, we normalized the range of every domain dimension as well as the function values to \([0,1]\). Synthetic Super-dataset (L) was generated with every domain dimension in \([0,1]\), and its function values were kept unnormalized in order to test its ground-truth priors. **Train/test splits:** For any super-dataset \(D=\{(D_{i},S_{i})\}_{i=1}^{N}\) of the two, we split every dataset \(D_{i}\) in the super-dataset into a training dataset \(D_{i}^{\text{train}}\) and a test dataset \(D_{i}^{\text{test}}\), each containing a disjoint subset of the sub-datasets in \(D_{i}\). As mentioned in §3.2.2, for Synthetic Super-dataset (L) we used 80% of the sub-datasets within each dataset as training sub-datasets and the remaining 20% as test sub-datasets. HPO-B Super-dataset comes with a pre-specified per-dataset train/test split, and we used the same setup. To show the generalization capability across different real-world datasets, we evaluated the BO performance of MPHD on PD1 (Wang et al., 2022), where the model was pre-trained only on HPO-B (Pineda-Arango et al., 2021). Although MPHD models were never pre-trained on PD1 during the evaluation, a train/test split for PD1 is still needed because the homogeneous meta BO methods have to be pre-trained on PD1. Out of the 24 sub-datasets in PD1, we discarded the ImageNet ResNet50 1024 task as it only has 100 datapoints. We randomly sampled 19 (\(\sim 80\%\)) of the remaining 23 sub-datasets as training sub-datasets and used the remaining 4 (\(\sim 20\%\)) sub-datasets as test sub-datasets. For convenience, we denote the entire PD1 Dataset by \(D_{P}\), its training part by \(D_{P}^{\text{train}}\), and its test part by \(D_{P}^{\text{test}}\).

### Experiment setups and compared methods

To test the capability of MPHD to generalize to new tasks with both seen and unseen search spaces, we designed experiments to compare MPHD with competitive meta BO baselines, including HyperBO (Wang et al., 2022), which is known to be the state-of-the-art GP prior pre-training method for BO. For MPHD, we tested the performance of both variants, MPHD Standard and MPHD Non-NN HGP, to understand different ways of setting the function \(\phi\) in Fig. 1. The context used for MPHD Standard is a 4-dimensional domain-dimension-specific vector that includes information on whether the domain dimension is continuous or discrete, the number of continuous dimensions, and the number of discrete dimensions in the domain. **Baselines:** (1) Random sampling. (2) A Hand-specified Hierarchical GP prior, fixed across all search spaces. (3) A Non-informative Hierarchical GP prior, fixed across all search spaces. See the details in §E. (4) HyperBO (Wang et al., 2022). (5) ABLR (Perrone et al., 2018). (6) FSBO (Wistuba and Grabocka, 2021). (7) Base GP, a single GP that uses the MLE of the GP parameters \(\hat{\theta}_{i}\) of the search space that it is being tested in, which is the same MLE obtained in Step 1 of the pre-training of MPHD. This baseline is essentially a simplified version of HyperBO that does not have the MLP base for its GP kernel and uses a constant mean function. (8) The Ground-truth Hierarchical GP prior, which is only available for Synthetic Super-dataset (L). (9) The Ground-truth GP for the test dataset, which is also only available for Synthetic Super-dataset (L). HyperBO and FSBO use an anisotropic Matérn kernel (\(\nu=5/2\)) with an MLP base, while ABLR uses a dot-product kernel with an MLP base. All the remaining GP-based methods use an anisotropic Matérn kernel (\(\nu=5/2\)) without an MLP base. HyperBO uses a linear mean function with an MLP base. ABLR and FSBO use a zero mean function. The rest of the GP-based methods all use a constant mean function. Please see §E for the exact configurations, including the GP prior parameters, of all compared methods. Based on whether a method requires pre-training and how it is pre-trained, we can group all compared methods into the following categories: (1) Methods that are not pre-trained. This category includes Random, the Hand-specified HGP prior, the Non-informative HGP prior, the Ground-truth HGP prior, and the Ground-truth GP. (2) Heterogeneous meta BO methods, which are meta BO methods that can generalize over search spaces. This category only includes the variants of MPHD.
(3) Homogeneous meta BO methods, which are meta BO methods that can only train and test on a single search space. This category includes HyperBO, ABLR, FSBO, and Base GP. When testing the BO performance of the compared methods in the search space \(\mathcal{X}^{(i)}\) corresponding to a dataset \(D_{i}\) within a super-dataset \(D\) (either Synthetic Super-dataset (L) or HPO-B Super-dataset), we used the test part of that dataset, \(D_{i}^{\text{test}}\), to run BO with every tested method. For every method, we report its average normalized simple regret on all test sub-datasets in all search spaces, i.e., all sub-datasets in \(\{D_{i}^{\text{test}}\}_{i=1}^{N}\). Methods that are not pre-trained can be directly tested. In order to test the ability of MPHD to generalize to new functions in seen search spaces as well as unseen search spaces, we designed two settings for its model pre-training. In the default setting, the MPHD models were pre-trained on training sub-datasets from all search spaces in the super-dataset, i.e., \(\{(D_{j}^{\text{train}},S_{j})\}_{j=1}^{N}\). In the second setting, denoted by NToT (Not Trained on Test Search Space), the MPHD models were pre-trained on the training super-dataset without the training dataset for the test search space, i.e., \(\{(D_{j}^{\text{train}},S_{j})\}_{j=1}^{N}\setminus\{(D_{i}^{\text{train}},S_{i})\}\). Therefore, in the NToT setting, the pre-trained model of MPHD is tested on functions from an unseen search space. Because homogeneous meta BO methods need to be pre-trained on the same search space used for the BO test, their models were pre-trained on \(\{(D_{i}^{\text{train}},S_{i})\}\). For PD1, we report the average normalized simple regret on all test sub-datasets in PD1, i.e., the sub-datasets in \(D_{P}^{\text{test}}\). As above, methods that are not pre-trained can be directly tested. As a special case of the NToT setting, when testing the BO performance of MPHD on PD1 Dataset, the MPHD model was pre-trained on the entire HPO-B Super-dataset \(\{(D_{j}^{\text{train}},S_{j})\}_{j=1}^{N}\) but not on PD1 Dataset. In this case, MPHD needs to generalize to an unseen search space that is not even in the same super-dataset that it was trained on. Models of homogeneous meta BO methods were pre-trained on \(D_{P}^{\text{train}}\), the training sub-datasets in PD1. Models of all homogeneous meta BO methods in every search space were pre-trained by maximizing Eq. 3 using Adam (Kingma & Ba, 2015; Wistuba & Grabocka, 2021). MPHD models were pre-trained by maximizing Eq. 2 across all training search spaces using Adam. More details can be found in §E.

### Results on Bayesian optimization

For all of the following experiments, the budget for BO is 100, and there is a set of 5 initial observations randomly sampled for each of the 5 random seeds. The acquisition function used for all GP-based methods is _Probability of Improvement_ (PI) (Kushner, 1964) with target value \(\max(y_{t})+0.1\). As shown by Wang & Jegelka (2017), PI can obtain high BO performance with well-set target values. Results for other acquisition functions are included in §E.

Synthetic Super-dataset (L). Fig. 7 (left) shows the BO performance of the compared methods on Synthetic Super-dataset (L). MPHD Standard, which has an NN-based length-scale prior model, outperformed all the baselines except for the Ground-truth HGP and Ground-truth GP.
This demonstrates the ability of MPHD to effectively learn a good prior model across heterogeneous search spaces during pre-training. Moreover, while Base GP should be more customized to the search space than MPHD Non-NN HGP, MPHD Non-NN HGP was able to achieve much better performance than Base GP; this shows the importance of Step 2 in MPHD regardless of using domain-specific contexts. Another observation is that MPHD Standard outperformed MPHD Non-NN HGP by a large margin, demonstrating the effectiveness of using an NN to capture the dependence of domain-specific priors on the context features. Interestingly, HyperBO achieved better performance in the initial few BO iterations but fell behind other methods after about 10 BO iterations for the test functions in Synthetic Super-dataset (L). One possible reason is that the posterior GPs in HyperBO significantly deviated from the ground truth GP. As noted in Wang et al. (2022), the difference between the predicted posterior variance and the ground truth posterior variance increases with the number of observations. This can be a critical issue for complex datasets like Synthetic Super-dataset (L), though it was not a severe problem for HPO-B and PD1 (results below). FSBO and ABLR can be viewed as special cases of HyperBO with a zero mean function, which makes it difficult to obtain a good prior or posterior compared to the ground truth. On the contrary, the posterior inference in MPHD was done for a hierarchical GP instead of a single GP as in HyperBO. Since a hierarchical GP can typically assign probability density to a larger space of functions than a single GP, MPHD was able to achieve much more robust performance than HyperBO. Fig. 7 (right) shows the BO results on Synthetic Super-dataset (L) of the compared methods in the NToT setting, where the search space used for the BO test is not used for pre-training. HyperBO, ABLR, and FSBO cannot be tested in this setting as they require pre-training on a homogeneous search space for each test function. MPHD Standard in the NToT setting achieved superior performance over baseline methods such as the Hand-specified HGP prior and the Non-informative HGP prior, which demonstrates the ability of MPHD to generalize its pre-trained model to functions in new search spaces that are unseen during pre-training. Similar to the results in Fig. 7 (left), MPHD Standard again outperformed MPHD Non-NN HGP, showing the importance of using domain-specific priors.

The results on HPO-B Super-dataset (Fig. 8) show that the pre-trained priors also transfer across search spaces in real-world problems. MPHD Standard (NToT) outperformed all other methods in this setting, again demonstrating its ability to generalize to new test functions in unseen search spaces.

Figure 8: Results on HPO-B Super-dataset. Left: in the default setting, HyperBO had superior performance at first during the BO iterations, but MPHD Standard eventually achieved the lowest regret after 100 BO iterations. Right: in the NToT setting, MPHD Standard (NToT) achieved the best performance among all methods that do not require pre-training on the test search space.

Figure 9: Results on PD1 Dataset. Left: in the NToT setting, MPHD Standard (NToT) achieved the best performance among all methods that do not require pre-training on the test search space, demonstrating its ability to generalize to unseen search spaces. Right: we include homogeneous meta BO methods in addition to NToT methods.
Homogeneous meta BO methods HyperBO, ABLR, and FSBO outperformed MPHD Standard (NToT), which is expected since these methods were directly pre-trained on PD1 Dataset while MPHD Standard (NToT) was not.

PD1. Fig. 9 (left) shows the BO results of methods valid in the NToT setting on PD1 Dataset. Here the MPHD models were pre-trained on training datasets in HPO-B Super-dataset, but were not pre-trained on PD1. MPHD Standard (NToT) achieved the best performance in the NToT setting. Even though HPO-B and PD1 are two separately collected real-world hyperparameter tuning datasets, MPHD was capable of generalizing the model pre-trained on HPO-B to test functions in PD1. Fig. 9 (right) also includes the BO performances of the homogeneous meta BO methods on PD1 Dataset. MPHD Standard (NToT) was outperformed by HyperBO, ABLR, and FSBO, which is not surprising as these single-search-space baselines were pre-trained on training sub-datasets in PD1 while MPHD Standard (NToT) was not.

## 5 Conclusions

In this work, we propose MPHD, the first GP-based transfer learning BO method that works on heterogeneous search spaces. The key idea is to pre-train a hierarchical GP with domain-specific priors on training data from functions in different domains. Our theoretical and empirical analyses showed that MPHD enjoys appealing asymptotic properties and performs competitively on challenging hyperparameter tuning tasks.

### Broader Impact Statement

Hyperparameter tuning for machine learning (ML) models, especially deep learning models, can be very costly if we repeatedly evaluate a large number of hyperparameters. Each single evaluation of a hyperparameter value requires training and testing a new instantiation of the model. Our framework MPHD, with its superior BO performance discussed in §4.3, can help to reduce the number of evaluations needed for hyperparameter tuning tasks and thus reduce their computational cost and carbon footprint, potentially by a large margin.

### Limitations and Future Work

MPHD learns a prior distribution on Gaussian processes (GPs) from existing data to improve the performance of Bayesian optimization (BO) on new tasks. There are other components that can also be meta-learned, such as acquisition functions (Volpp et al., 2020), to maximize the effectiveness of the BO system. One direction of future work is to jointly pre-train all components of BO to allow more flexibility and further improve the performance. In addition, MPHD currently assumes a stationary kernel function and constant mean function when fitting the GP parameters. Possible future work includes relaxing this assumption and incorporating architecture search for the kernel function and mean function to enrich the space of hierarchical GP priors.

#### Acknowledgments

We thank Richard Zhang for feedback, and Jasper Snoek and Eytan Bakshy for helpful comments on the previous version of this work (Fan et al., 2022), which was presented at the NeurIPS 2022 Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems. The key difference to this previous work is the inclusion of domain-specific hierarchical Gaussian processes as opposed to using a universal model. Our work also benefited from Microsoft Azure credits provided by the Harvard Data Science Initiative, as well as Google Cloud Platform Credit Awards provided by Google.
2309.06784
Understanding Molecular Abundances in Star-Forming Regions Using Interpretable Machine Learning
Astrochemical modelling of the interstellar medium typically makes use of complex computational codes with parameters whose values can be varied. It is not always clear what the exact nature of the relationship is between these input parameters and the output molecular abundances. In this work, a feature importance analysis is conducted using SHapley Additive exPlanations (SHAP), an interpretable machine learning technique, to identify the most important physical parameters as well as their relationship with each output. The outputs are the abundances of species and ratios of abundances. In order to reduce the time taken for this process, a statistical emulator is trained to model each species' output abundance, and this emulator is used to perform the interpretable machine learning. SHAP is then used to further explore the relationship between the physical features and the abundances for the various species and ratios we considered. \ce{H2O} and CO's gas phase abundances are found to strongly depend on the metallicity. \ce{NH3} has a strong temperature dependence, with there being two temperature regimes (< 100 K and > 100 K). By analysing the chemical network, we relate this to the chemical reactions in our network and find that the increased temperature results in an increased efficiency of destruction pathways. We investigate the HCN/HNC ratio and show that it can be used as a cosmic thermometer, agreeing with the literature. This ratio is also found to be correlated with the metallicity. The HCN/CS ratio serves as a density tracer, but also has three separate temperature-dependence regimes, which are linked to the chemistry of the two molecules.
Johannes Heyl, Joshua Butterworth, Serena Viti
2023-09-13T08:16:55Z
http://arxiv.org/abs/2309.06784v1
# Understanding Molecular Abundances in Star-Forming Regions Using Interpretable Machine Learning

###### Abstract

Astrochemical modelling of the interstellar medium typically makes use of complex computational codes with parameters whose values can be varied. It is not always clear what the exact nature of the relationship is between these input parameters and the output molecular abundances. In this work, a feature importance analysis is conducted using SHapley Additive exPlanations (SHAP), an interpretable machine learning technique, to identify the most important physical parameters as well as their relationship with each output. The outputs are the abundances of species and ratios of abundances. In order to reduce the time taken for this process, a statistical emulator is trained to model each species' output abundance, and this emulator is used to perform the interpretable machine learning. SHAP is then used to further explore the relationship between the physical features and the abundances for the various species and ratios we considered. H\({}_{2}\)O and CO's gas phase abundances are found to strongly depend on the metallicity. NH\({}_{3}\) has a strong temperature dependence, with there being two temperature regimes (< 100 K and > 100 K). By analysing the chemical network, we relate this to the chemical reactions in our network and find that the increased temperature results in an increased efficiency of destruction pathways. We investigate the HCN/HNC ratio and show that it can be used as a cosmic thermometer, agreeing with the literature. This ratio is also found to be correlated with the metallicity. The HCN/CS ratio serves as a density tracer, but also has three separate temperature-dependence regimes, which are linked to the chemistry of the two molecules.

keywords: stars: abundances - astrochemistry - methods: statistical

## 1 Introduction

Modelling the interstellar medium and star formation is often a complex matter. This is normally done using computational codes that take in a number of physical parameters and use these to integrate the system of coupled ordinary differential equations (ODEs) that represent a chemical network (Taquet et al., 2012; Ruaud et al., 2016; Holdship et al., 2017). However, due to the non-linear nature of the chemistry, it is often unclear what the exact relationship is between the initial parameters and the output chemical abundances of the molecules of interest. This is often complicated by the fact that the various parameters have differing effects on the output abundances for different ranges. It has been customary in astrochemistry to consider grids of models in which the various parameters are varied (Taquet et al., 2012; Tunnard and Greve, 2016; Viti, 2017; Bianchi et al., 2019; James et al., 2020; Holdship and Viti, 2022; Heyl et al., 2023). The time-consuming and computationally expensive nature of many computational codes often limits the total number of model evaluations possible. This makes drawing conclusions about the importance of various parameters difficult. In this work, we look to address both of these issues. We make use of SHapley Additive exPlanations (SHAP) (Lundberg and Lee, 2017) to help improve our understanding of a chemical code. SHAP provides us with a means of understanding why a machine learning model outputs a particular value. By considering various combinations of inputs and outputs, these techniques will tell us what the relationship is.
This has found use in astrophysics recently (Machado Poletti Valle et al., 2021; Ansari et al., 2022) in the context of interpreting the outputs of machine learning models. To improve the efficiency of this process, we employ statistical emulation. The process of statistical emulation involves fitting a statistical function to model the relationship between the inputs and outputs of a forward model (Grow and Hilton, 2018). A significant amount of work has been done in recent years in applying statistical emulation to astrochemistry. de Mijolla et al. (2019) used a feed-forward neural network to accelerate the Bayesian inference process, while Grassi et al. (2011) used these to accelerate the forward modelling. Branca and Pallottini (2023) considered how a physics-informed neural network could be used to reduce the computational cost of predicting the time evolution of chemistry. Holdship et al. (2021) utilised autoencoders to model temperature and abundance time evolution. We adopt the approach taken by de Mijolla et al. (2019) and Grassi et al. (2011) in this work by using an emulator to simulate the final outputs of a chemical code and then evaluate these a number of times for the purposes of the machine learning interpretability algorithm. Work has been done in this area to simplify chemical networks to improve interpretability (Hoffmann et al., 2019; Grassi et al., 2022), but this is not an approach we wish to consider. Instead we build on the work done in de Mijolla et al. (2019) and look to use the interpretability techniques on these emulators. The purpose of using an emulator is that it accurately predicts the output of the forward model it is emulating in a fraction of the time. Furthermore, if the emulator is an accurate approximation for the forward model output, then it stands to reason that it accurately captures the mapping between the input parameters and the output. By using machine learning interpretability algorithms, we can characterise this mapping. In Section 2, we introduce the chemical code that we will be looking to emulate. In Section 3 we introduce statistical emulation and machine learning interpretability. Section 4 is dedicated to discussing the results of the analysis.

## 2 The Chemical Code and Network

In this work, we use the open-source, publicly available time-dependent astrochemical code UCLCHEM (Holdship et al., 2017). This astrochemical code has been developed with several updates (Viti et al., 2004; Roberts et al., 2007; Holdship et al., 2017). UCLCHEM is a time-dependent gas-grain astrochemical code. It utilises a rate equation approach to modelling the abundances of the gas phase and surface species. The initial elemental abundances are listed in Table 1. The default values in the code for the radiation field and the cosmic ray ionisation rate are \(\psi=1\) Habing and \(\zeta=1.3\times 10^{-17}\) s\({}^{-1}\). Radiation is attenuated by the visual extinction. Gas and dust abundances are rescaled from solar values. Extensive documentation on the inner workings of UCLCHEM can be found on the GitHub page1. Footnote 1: [https://uclchem.github.io/](https://uclchem.github.io/) In this work, we use UCLCHEM in two phases of modelling. Phase 1 corresponds to the isothermal gravitational collapse of a diffuse gas cloud modelled as a Bonnor-Ebert sphere. However, this stops once the internal pressure begins to balance out the gravitational pressure.
This increase in internal pressure is accompanied by an increase in temperature, at which point Phase 2, which models a protostar, begins. At this point, the temperature continues to increase and grain-surface species begin to evaporate as the temperatures near their respective evaporation temperatures. In Phase 1, the gas cloud collapses isothermally at 10 K from 100 cm\({}^{-3}\) to some final density, which is left as a free parameter. Phase 2 starts off at this density and begins to heat up. It is Phase 2 that has a number of physical parameters that can be varied in order to model various star-forming scenarios. There are a number of free parameters that we vary in this work, which are the same as in de Mijolla et al. (2019). These are:

* Final density of Phase 1 or initial density of Phase 2 (cm\({}^{-3}\))
* Metallicity (a scaling factor of all abundances)
* Radiation Field (Habing)
* Cosmic ray ionisation rate (in units of \(1.3\times 10^{-17}\) s\({}^{-1}\))
* Final Temperature of Phase 2 (in Kelvin)

The ranges over which we vary the parameters are summarised in Table 1. The grain-surface network we utilise is the default one in the GitHub repository, which has been able to reproduce the abundances of the main observed grain-surface species, for example in Holdship et al. (2017) and Heyl et al. (2023). The grain-surface reaction mechanisms that are used in UCLCHEM include the Eley-Rideal mechanism as well as the Langmuir-Hinshelwood grain-surface diffusion mechanism. These were implemented into the code in Quenard et al. (2018), along with the competition formula from Chang et al. (2007) and Garrod & Pauly (2011). The binding energies that are required in order to calculate diffusion reaction rates are taken from Wakelam et al. (2017). The gas-phase network is taken from UMIST (McElroy et al., 2013). While the grain network has undergone minor modifications since de Mijolla et al. (2019), the gas network has remained the same. Since we are only considering gas-phase species, minor modifications to the grain network are unlikely to be influential.

## 3 Machine Learning Interpretability and Statistical Emulation

### Machine Learning Interpretability

It is often unclear why a model provides a certain output for a given input. This is not exclusive to machine learning algorithms, but can also be an issue with computational codes that integrate systems of differential equations, such as UCLCHEM. As a result, identifying the effect that a specific physical parameter, which we refer to as a feature in this work, has on the output becomes difficult. The concept of feature importance refers to the size of the contribution of a specific feature in determining the model output. There exist many methods by which one can interpret the effect of a parameter in making a certain prediction value, such as permutation feature importance or Local Surrogate Models. For an overview of the various methods, see Molnar (2022). We use Shapley values, a method from game theory, to quantify the importance of the features (Shapley, 2016). This is the first such application in the area of astrochemistry. While we provide an overview of the method we use in this paper below, we refer the reader to Shapley (2016), Lundberg & Lee (2017) and Lundberg et al. (2018) for further details.
The Shapley value of the \(i\)th feature, \(\phi_{i}^{j}\), is defined as the marginal contribution of that feature in mapping the \(j\)th data point in our dataset, \(x^{j}\), to its corresponding output \(f(x^{j})\), averaged over all possible coalitions. A coalition is defined as a subset of the set of features. Notice that in this case, the function \(f\) corresponds to UCLCHEM and \(x^{j}\) corresponds to a particular input vector consisting of one entry for each feature in Table 1 that we modify. Each Shapley value, \(\phi_{i}^{j}\), is specific to each parameter of each data point.

\begin{table} \begin{tabular}{l l l l l} \hline \hline \multicolumn{5}{c}{Parameter ranges} \\ \hline Parameter & Minimum & Maximum & Unit & Scale \\ \hline n & \(10^{4}\) & \(10^{7}\) & cm\({}^{-3}\) & Logarithmic \\ \(\zeta\) & 1 & \(10^{3}\) & \(1.3\times 10^{-17}\) s\({}^{-1}\) & Logarithmic \\ T & 10 & 200 & K & Linear \\ m\({}_{z}\) & 0 & 2 & solar value & Linear \\ \(\psi\) & 1 & \(10^{3}\) & Habing & Logarithmic \\ \hline \hline \end{tabular} \end{table} Table 1: The range of values used for each parameter as well as their units and scales. In the context of the machine learning application in this work, we refer to these parameters as the features of the model.

Shapley value explanations are given as a linear model (Molnar, 2022). We define a feature explanation model, \(\hat{g}\), in the following way: \[\hat{g}(x^{\prime j})=\phi_{0}+\sum_{i=1}^{n}\phi_{i}x_{i}^{\prime j}, \tag{1}\] where \(\phi_{0}=\mathbb{E}[f(x)]\) is the value of the average prediction in our dataset, \(\phi_{i}\) is the explained feature effect of the \(i\)th feature, \(n\) is the number of features and \(x_{i}^{\prime j}\) is an element of the "coalition vector", \(x^{\prime j}\), where \(x^{\prime j}\in\{0,1\}^{n}\). The coalition vector is a vector consisting of zeros and ones, with a zero indicating that a feature is "absent" and a one indicating it is "present". One can imagine that this feature explanation model gives us an understanding of what happens when we choose to remove certain features, that is, set a particular \(x_{i}^{\prime j}\) to equal zero. If we want to be able to calculate the feature importance of a specific feature, then we need to be able to selectively "remove" features and see how this impacts our model output. When we say that we "remove" a feature, what we effectively mean is that we replace that value in the input vector by a random value from the dataset for that feature. The logic behind Shapley values is that we wish to see the contribution of a specific feature when we include or exclude it from our data point for varying coalitions of features. More formally, we can calculate the feature value importance as follows for a data point: \[\phi_{i}^{j}=\sum_{S\subseteq N\setminus\{i\}}\frac{|S|!\,(n-|S|-1)!}{n!}\left(\hat{g}(x_{i}^{\prime j})-\hat{g}(x_{-i}^{\prime j})\right), \tag{2}\] where \(N\) is the set of features, \(n\) is the number of features, \(S\) is a subset of the features excluding feature \(i\), \(\hat{g}(x_{i}^{\prime j})\) is the explanatory model evaluated on the coalition with the feature included, and \(\hat{g}(x_{-i}^{\prime j})\) is the explanatory model evaluated on the coalition with the feature not included. We refer to \(\phi_{i}^{j}\) as a Shapley value. We can specifically make a connection between the function we are trying to explain, \(f(x)\), and the explanation function by noting that \(\phi_{0}=\mathbb{E}[f(x)]=\frac{1}{d}\sum_{j=1}^{d}f(x^{j})\), where \(d\) is the number of data points.
By setting all \(x_{i}^{\prime j}\) equal to 1 we obtain: \[f(x^{j})=\hat{g}(x^{\prime j})=\mathbb{E}[f(x)]+\sum_{i=1}^{n}\phi_{i}^{j}, \tag{3}\] which implies that the value of the function at a given data point is equal to the global average of the function (i.e. \(\mathbb{E}[f(x)]\)) plus the feature value importances we calculate for that data point. We now explain what this entails practically. Say that we have a data point of the form (\(n\), \(\zeta\), \(T\), \(m_{z}\), \(\psi\)) = (\(10^{3}\), 500, 50, 1, 500) and we are interested in determining the contribution of the temperature being 50 K in producing an abundance of, say, \(10^{-6}\). What this entails is taking all subsets of the set of features. Two of these subsets might be:

* All of the original features
* All of the original features except the density

For the first of these subsets, we consider the change in the value of the explanatory model, \(\hat{g}\), when we include and exclude the temperature value of \(x_{3}=50\). "Excluding" simply means that we replace the 50 K with a randomly drawn temperature value from our dataset of temperatures. We then compute the feature explanation model when this temperature value is included and take the difference, as seen in Equation 2. For the second sample subset, we repeat this process except we always take a random value for the density, as this is excluded from this subset. This is done for all subsets to calculate the feature importance for temperature. However, observe that the calculation across all the subsets becomes computationally infeasible as the number of features grows, with the number of coalitions growing exponentially. We employ SHAP (Lundberg and Lee, 2017) to allow us to address this issue. SHAP is particularly useful, as it approximates the Shapley values, greatly reducing the time taken to compute them. SHAP has been found to be the theoretically optimal means of calculating feature attribution (Lundberg and Lee, 2017; Lundberg et al., 2018). This is done through the use of the TreeSHAP algorithm (Lundberg et al., 2018). TreeSHAP is an algorithm that exactly computes the SHAP values for tree-based algorithms, such as XGBoost or random forests. One drawback of TreeSHAP is that it can give unintuitive explanations when the features are related (Molnar, 2022). This is unlikely to be the case in this work, as we work with five physically unrelated features that we sample independently when we generate our data set. We can also provide a ranking of the various features in terms of global feature importance. As Shapley values can be negative, this can be achieved by averaging the absolute value of all Shapley values for each feature across all data points. Formally, this is defined as: \[I_{i}=\frac{1}{d}\sum_{j=1}^{d}|\phi_{i}^{j}|, \tag{4}\] where \(d\) is the number of data points and \(I_{i}\) is the average absolute Shapley value of the \(i\)th feature. In principle, if we wished to compute the relative importances of the features, we can do this by taking the above-mentioned average of the absolute values for a single feature and normalising this by the sum of the averages of the absolute values for all the features. We can then define the "relative importance" of a feature \(i\), \(\hat{I}_{i}\), as: \[\hat{I}_{i}=\frac{\sum_{j=1}^{d}|\phi_{i}^{j}|}{\sum_{m=1}^{n}\sum_{j=1}^{d}|\phi_{m}^{j}|}, \tag{5}\] where \(n\) is the number of features. This quantity effectively gives us a fractional contribution of each feature to the average behaviour of the model.
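To make Equations 2–5 concrete, the following is a minimal, self-contained Python sketch of an exact Shapley computation for a toy five-feature model. The function `f` below is a hypothetical stand-in for the emulator (not the actual UCLCHEM mapping), and "removing" a feature is implemented, as described above, by averaging over random draws from a background dataset.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the emulator f: maps scaled (n, zeta, T, m_z, psi) inputs
# to a log-abundance. The real f would be the trained emulator.
def f(x):
    return 0.5 * x[..., 0] - 2.0 * x[..., 3] + 0.1 * x[..., 2] * x[..., 3]

X_background = rng.uniform(0.0, 1.0, size=(500, 5))  # inputs scaled to [0, 1]

def value(S, x):
    """E[f] with the features in coalition S fixed to x and the remaining
    features 'removed', i.e. replaced by random background draws."""
    Z = X_background.copy()
    Z[:, list(S)] = x[list(S)]
    return f(Z).mean()

def shapley(x):
    n = x.size
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Weight |S|!(n-|S|-1)!/n! from Eq. 2.
                w = math.factorial(len(S)) * math.factorial(n - len(S) - 1) / math.factorial(n)
                phi[i] += w * (value(S + (i,), x) - value(S, x))
    return phi

x = rng.uniform(0.0, 1.0, size=5)
phi = shapley(x)
# Local accuracy (Eq. 3): the contributions sum to f(x) minus the global mean.
print(phi, phi.sum(), f(x) - f(X_background).mean())
```

Even for five features this enumerates \(2^{4}=16\) coalitions per feature, which is why the TreeSHAP approximation described above matters for larger feature sets.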
We summarise the relative importance of each parameter in predicting the outputs we consider in this work in Table 2.

### Implementation

While the use of SHAP greatly reduces the time taken to obtain the Shapley values relative to calculating the Shapley values in full, this process is still likely to take a long time due to the time taken per evaluation of the forward model, i.e. UCLCHEM. Each evaluation of the forward model takes on the order of 1-2 minutes. This makes considering an ensemble of models with 100,000 runs or more infeasible. To circumvent this, we elect to train a statistical emulator to reproduce the results of UCLCHEM. If the emulator has a sufficiently high accuracy, then it is safe to assume it is able to capture the internal workings of the original code, which we wish to probe. We now discuss the emulator and how we build it. To train the emulator, we generated 120,000 points in parameter space using a Latin Hypercube sampling scheme (McKay et al., 1979), which was implemented using the Python surrogate modelling toolbox (Bouhlel et al., 2019). Data points in parameter space were generated such that all values were in the ranges given in Table 1. For those features that spanned several orders of magnitude, we elected to sample in log-space. Each species' final log-abundance was used as the output of the algorithm. This was to ensure that all orders of magnitude were treated equally. The input parameters were also scaled to be in the range 0 to 1. All abundances less than \(10^{-12}\) were set equal to \(10^{-12}\) to ensure that the emulator was not being trained to learn what was effectively numerical noise. This limit was chosen because this is typically the lowest observed gas-phase abundance in the literature. We summarise the range of outputs for each species and ratio we consider in this work in Table A3. An XGBoost regressor was trained for the emulation process (Chen & Guestrin, 2016). XGBoost is a gradient-boosted decision tree regressor. We used the Python implementation for XGBoost to train our model3. It was found that better performance was obtained if a separate emulator was trained for each species, as opposed to having a single model trained to predict the final abundances of all 239 species in the network. While we trained an emulator for every species in the network, we only present the results of a handful of molecules in this work. We elected to train an XGBoost model instead of using a neural network as in de Mijolla et al. (2019), as XGBoost has been found to perform better on tabular datasets such as the one we consider, while also requiring less tuning (Shwartz-Ziv & Armon, 2022). Footnote 3: [https://xgboost.readthedocs.io/en/stable/index.html](https://xgboost.readthedocs.io/en/stable/index.html) In order to find the best set of hyperparameters for each emulator, we utilised Bayesian hyperparameter optimisation. Under this procedure, we tune the hyperparameters on a validation set and find the best combination of parameters that minimise the L2 loss. Unlike a grid-search approach to hyperparameter tuning, Bayesian optimisation uses the model performance on previous hyperparameter combinations to choose the next option to evaluate, thereby saving a considerable amount of time compared to a grid-search approach. We varied five XGBoost hyperparameters using the Bayesian Optimisation Python library (Nogueira, 2014). We list the ranges over which we varied these in Table 2. For integer hyperparameters, we would round to the nearest integer.
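A condensed sketch of this training pipeline is given below. It is illustrative rather than a reproduction of our setup: the targets `y` are synthetic stand-ins for the UCLCHEM log-abundances, and mapping the "Maximum features" hyperparameter of Table 2 to XGBoost's `colsample_bytree` is our assumption.

```python
import numpy as np
import xgboost as xgb
from smt.sampling_methods import LHS
from bayes_opt import BayesianOptimization
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Latin-hypercube design over the five inputs, all pre-scaled to [0, 1]
# (log-spaced parameters are mapped back with 10**(...) before running the code).
design = LHS(xlimits=np.array([[0.0, 1.0]] * 5))
X = design(120_000)

# Synthetic stand-in for the clipped UCLCHEM log-abundances used as targets.
y = -6.0 + 2.0 * X[:, 3] + np.sin(3.0 * X[:, 2]) + 0.05 * rng.normal(size=len(X))

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

def neg_l2_loss(max_depth, colsample_bytree, learning_rate, n_estimators, subsample):
    """Validation score to maximise; integer hyperparameters are rounded."""
    model = xgb.XGBRegressor(
        max_depth=int(round(max_depth)),
        colsample_bytree=colsample_bytree,
        learning_rate=learning_rate,
        n_estimators=int(round(n_estimators)),
        subsample=subsample,
    )
    model.fit(X_train, y_train)
    return -float(np.mean((model.predict(X_val) - y_val) ** 2))

optimizer = BayesianOptimization(
    f=neg_l2_loss,
    pbounds={"max_depth": (3, 100), "colsample_bytree": (0.8, 1.0),
             "learning_rate": (0.01, 1.0), "n_estimators": (80, 150),
             "subsample": (0.8, 1.0)},
    random_state=0,
)
optimizer.maximize(init_points=5, n_iter=25)
print(optimizer.max)
```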
When evaluating the accuracy of each trained emulator, we considered both the L2 loss obtained from the performance of the emulator on the test dataset as well as the \(R^{2}\) coefficient. All emulators in this work had \(R^{2}\) scores greater than 0.98.

## 4 Results

We now look to consider a number of molecules of interest and explore how machine learning interpretability adds to our understanding of their equilibrium abundances in Phase 2. Note that all the molecules we will be considering will be gas-phase molecules, as they evaporate during the warm-up phase. We only considered a small number of molecules as a proof of concept for this method and provide the figures for these here. Figures for other molecules can be found in a dedicated repository4. Footnote 4: [https://github.com/Bamash/MLinAstrochemistry](https://github.com/Bamash/MLinAstrochemistry)

### Molecules

We begin by first considering individual molecules of interest to demonstrate what can be done with machine learning interpretability. We elect to consider three molecules: H\({}_{2}\)O, CO and NH\({}_{3}\). CO is considered as it is the most abundant molecule besides H\({}_{2}\) and also plays a role in molecular gas cooling (Goldsmith, 2001; Shi et al., 2015). H\({}_{2}\)O is of interest due to its high abundance in planetary systems and of course its importance in the area of astrobiology (Gensheimer et al., 1996). NH\({}_{3}\) is speculated to be one of the main carriers of nitrogen and it is often used as a tracer molecule of cold, dense clouds (Benson & Myers, 1989; Caselli et al., 2019).

#### 4.1.1 H\({}_{2}\)O

We now investigate the importance of the various physical parameters on the value of the abundance of H\({}_{2}\)O. Figure 1 is a beeswarm plot which is meant to serve as an information-dense qualitative summary of feature importances. Each point in the beeswarm plot represents a data point from our test set. The features are arranged from top to bottom in decreasing order of importance to the model output, which is measured by Equation 4. Recall that the SHAP value measures the impact of each feature on the value of the prediction, relative to some baseline value, which is simply the global average, i.e. the average logarithm of the abundance. Along the horizontal axis, individual predictions are plotted in terms of their SHAP value. The points are colour-coded according to their value, with the colour bar indicating the value relative to the range of values of that feature. A single colour bar is used for all the features in order to qualitatively show the relationship between the feature value and the SHAP value. It is for this reason that the colour bar range is from "Low" (indicating the lowest value of the respective feature) to "High" (indicating the highest value that the feature can take). Furthermore, the vertical clustering of the points indicates the density of the points in a manner akin to a violin plot. We emphasise again that this plot is meant to help provide easy-to-use qualitative explanations for observers. We also consider a more quantitative plot to delve deeper into some of the finer points of the beeswarm plot. This is useful if one wishes to consider the nature of the relationship between each feature and the log-abundance. While one can deduce that for temperature and metallicity the relationship is monotonic and increasing, this might still not be enough.
It is for this reason that we can plot dependence plots such as Figure 2, which plots the SHAP value for each variable as a function of that variable. Notice that in the plot for each feature \(i\), the SHAP value corresponds to the importance of only that feature, \(\phi_{i}^{j}\), for a point \(j\). Effectively, these dependence plots give us the marginal contribution of each feature \(i\) to the output. We can also consider the relationship between the abundance of water (instead of the SHAP value) as a function of each of the features. This is plotted in Figure 3. Notice that in order to compute the abundance, we must utilise Equation 3. This means that to compute the abundance we must add the mean log-abundance of water, \(\phi_{0}=\mathbb{E}[f(x)]\), to the SHAP values of each of the features for that data point.

\begin{table} \begin{tabular}{l l l} \hline \hline **Hyperparameter** & **Range of values** & **Data type** \\ \hline Maximum Depth & (3, 100) & Integer \\ Maximum features & (0.8, 1.0) & Float \\ Learning Rate & (0.01, 1.0) & Float \\ Number of Estimators & (80, 150) & Integer \\ Sub-sample & (0.8, 1) & Float \\ \hline \hline \end{tabular} \end{table} Table 2: Table of the hyperparameter ranges used when tuning the XGBoost regressor.

As a result of the explanatory model being linear in nature, we do not see the same relationships in Figure 3 and in fact observe that there is no relationship between any of the parameters besides metallicity and the log-abundance. This is because many of the SHAP importances cancel each other out. Only metallicity still has a noticeable relationship with the log-abundance when we add up the importances of all the parameters. However, in the interest of better understanding the impact of each parameter's individual relationship with the log-abundance, we consider the marginal effects in Figure 2. We would like to emphasise that it is still useful to consider the marginal effects. While we consider a wide range of physical conditions, many observational and modelling exercises relating to tracers will be far more restrictive in their parameter ranges as well as the number of varying parameters. A typical observational environment will not span the full parameter ranges we consider here. It is precisely for these tasks that this methodology will be useful. We observe that the relationships between these variables and the log-abundance of H\({}_{2}\)O are mostly monotonic. However, we will only consider the impact of metallicity, as this has the strongest impact on the model output. The SHAP values for the other features range between -0.4 and 0.4 in log-abundance space, which corresponds to factors of 2.5 relative to the average water abundance. Throughout this work, we will only consider features whose SHAP values exceed 1 in log-abundance space. It is clear that metallicity will play a significant role in the abundance of water. While there exists some debate as to what fraction of the ISM oxygen abundance is present in water (van Dishoeck et al., 2021), a decrease in the metallicity will result in a decrease in the amount of oxygen, which in turn will mean that less water will be formed, due to greater competition for the little oxygen present. On the other hand, a large amount of oxygen will result in the opposite effect, to an extent. Water has several destruction pathways that impose an upper limit on how much of it is formed in the gas-phase, regardless of how much oxygen is present.
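Continuing the training sketch above, plots like Figures 1 and 2, the relative importances of Eq. 5, and the abundance reconstruction of Eq. 3 can be produced with the `shap` package (the feature names are ours):

```python
import numpy as np
import shap
import matplotlib.pyplot as plt

# 'model' is a fitted XGBRegressor and X_val the held-out inputs from the
# training sketch above.
feature_names = ["log n", "log zeta", "T", "m_z", "log psi"]

explainer = shap.TreeExplainer(model)        # exact TreeSHAP for tree ensembles
shap_values = explainer.shap_values(X_val)   # one phi_i^j per feature and point

# Beeswarm summary plot (cf. Figure 1) and a dependence plot (cf. Figure 2).
shap.summary_plot(shap_values, X_val, feature_names=feature_names)
shap.dependence_plot(2, shap_values, X_val, feature_names=feature_names,
                     interaction_index=None)

# Relative importances (Eq. 5).
mean_abs = np.abs(shap_values).mean(axis=0)
print(dict(zip(feature_names, mean_abs / mean_abs.sum())))

# Local accuracy (Eq. 3): the base value plus a point's SHAP values recovers
# the emulator prediction, which is how plots like Figure 3 are assembled.
reconstruction = explainer.expected_value + shap_values.sum(axis=1)
plt.scatter(X_val[:, 3], reconstruction, s=2)
plt.xlabel("m_z (scaled)")
plt.ylabel("predicted log-abundance")
plt.show()
```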
#### 4.1.2 CO

Carbon monoxide is an important molecule to consider in astrochemistry. Not only is it an important molecule in the context of grain-surface chemistry and the formation of various complex organic molecules, but it also plays a significant role in gas-phase chemistry. In particular, it is often considered a molecular gas coolant at low temperatures and densities (Goldsmith, 2001; Shi et al., 2015). We are interested in considering how the various parameters we are changing influence its abundance. Figure 4 is a beeswarm plot of the various features and shows that only the metallicity plays a strong role in determining the final CO abundance, with an \(\hat{I}_{m_{z}}\) of 0.91. In order to investigate the exact nature of the relationship, we plot the SHAP dependence plots in Figure 5. We observe an interesting relationship between the metallicity and the CO abundance that is monotonic in nature. We do not observe any notable relationships between its log-abundance and the other parameters, so we only focus on metallicity for now. We observe that for very low metallicities the CO abundance ends up being almost 2 orders of magnitude lower than the 'average' value due to the marginal effect of the metallicity. In Figure 6 we plot the log-abundance of CO as a function of each of the parameters. As we discussed for H\({}_{2}\)O before, to compute the CO abundance we must add the contributions of all the features. As a result of this, only the metallicity appears to have a strong effect on the log-abundance. Work has been done to consider the impact of metallicity on CO. In Shi et al. (2015), this was considered in the context of metal-poor galaxies. Here they wished to consider to what extent CO, known to be a coolant in metal-rich galaxies, could also serve the same role in metal-poor ones. It was found that there was significant CO depletion in metal-poor galaxies due to photodissociation. This is unlikely to be the case here, as the radiation field is found to be the least influential parameter. The radiation field is only likely to be effective in photodissociation when the density is very low and the radiation field itself is high, which will only be the case for a small number of parameter combinations. We must consider other reasons for the importance of metallicity. Within the UCLCHEM code, the metallicity parameter is a scale factor that scales all elemental abundances of elements heavier than helium by the same factor. This means that as the metallicity parameter is reduced, the abundances of some elements will be reduced, so there will be greater competition for them. This results in the abundances of species dropping, as there is simply less of their constituent elements. In the case of CO, we know from Table 1 that there is less C than O, which means reducing the metallicity results in C becoming more scarce. It is for this reason that at metallicities close to zero the final abundance of CO drops by 2 orders of magnitude.

#### 4.1.3 NH\({}_{3}\)

We now consider ammonia, which is considered one of the significant sources of nitrogen in the interstellar medium. The beeswarm plot in Figure 7 shows the ranking of the 5 features in terms of their relative importance. We consider the nature of the relationship through the use of the SHAP dependence plots in Figure 8, with only temperature being found to have a consistently significant relationship with the log-abundance.
Parameters such as density and the cosmic ray ionisation rate may have individual points with large SHAP values, but these are low in frequency compared to the tens of thousands of points plotted, which is why we do not discuss them further. The dependence on metallicity is not as strong as for H\({}_{2}\)O and CO in terms of the tailing-off trend as the metallicity approaches zero. This is likely to be due to the highly non-linear nature of the chemistry. Despite the metallicity decreasing, it is likely that there is some reaction that is compensating for the decrease in NH\({}_{3}\) such that more of the now-limited nitrogen is hydrogenated. The dependence on temperature is quite interesting, as we notice that there are two separate temperature ranges over which the abundance takes a different constant value, with the cutoff temperature being 100 K. This is also seen in Figure 9, which is a plot of the abundances as a function of the individual parameters and seems to indicate two regimes. We deduce that these different regimes are related to the chemistry surrounding NH\({}_{3}\) being very different at these stages. The other parameters are of less importance, as the deviation from the average NH\({}_{3}\) log-abundance is within about 0.5, or a factor of 3 in actual abundance. The non-temperature parameters cancel each other out in terms of their contributions when these are added together. We can investigate the temperature dependence by considering the relative rates of formation and destruction of ammonia at specific points in time. Figure 10 plots the fractional contributions of the various formation and destruction routes of NH\({}_{3}\) for an instance where the peak temperature is 160 K. We only considered the top reactions that contributed to 99% of the creation or destruction of NH\({}_{3}\). The main NH\({}_{3}\) formation routes are: \[\mathrm{\#H+\#NH_{2}\ \longrightarrow\ NH_{3}} \tag{6}\]

Figure 1: A beeswarm plot of the various physical parameters demonstrating their relative importance in predicting the log-abundance of H\({}_{2}\)O. The features are arranged from top to bottom in decreasing order of importance to the model output, which is measured by the mean of the absolute value of the SHAP value averaged across all predictions. Individual predictions are plotted along the horizontal axis according to their SHAP value, which indicates the difference in the value of the model output for that prediction relative to the global average. Furthermore, the points are colour-coded in terms of the size of the feature value relative to the range of values that the respective feature takes. We observe that metallicity (\(\hat{I}_{m_{z}}=0.54\)) has the greatest impact followed by density (\(\hat{I}_{n}=0.17\)), cosmic ray ionisation rate (\(\hat{I}_{\zeta}=0.15\)), temperature (\(\hat{I}_{T}=0.17\)) and radiation field (\(\hat{I}_{\psi}=0.02\)).

Figure 2: A plot of the SHAP values as a function of the feature values used to predict the log-abundance of H\({}_{2}\)O. Unlike the beeswarm plot, these SHAP dependence plots allow us to see the exact nature of the relationship between the feature value and SHAP value. Recall that the SHAP value tells us the difference between the prediction and the average output value (the mean log-abundance of water). We see that the logarithms of density and the cosmic ray ionisation rate are roughly linear with respect to the SHAP value, with the same being true for the temperature.
For metallicity, we observe a significant decrease in the SHAP value for low metallicities, but this seems to level off for values greater than 1.

Figure 4: A beeswarm plot of the various physical parameters demonstrating their relative importance in predicting the log-abundance of CO. We observe that metallicity is the only parameter with a significant influence on the value (\(\hat{I}_{m_{z}}=0.91\)), with the other parameters not being very useful predictors (\(\hat{I}_{n}=0.01\), \(\hat{I}_{\zeta}=0.04\), \(\hat{I}_{T}=0.04\) and \(\hat{I}_{\psi}=0.00\)).

Figure 3: A plot of the log-abundance of H\({}_{2}\)O as a function of the various features. To calculate the log-abundance for a given data point, we needed to sum up the importance values of each feature for that data point. We observe that only metallicity maintains a clear trend. For the other features, we have no discernible trend, which can be attributed to the feature importances nullifying each other.

Figure 5: A plot of the SHAP values for the various features (besides the radiation field) as a function of the feature values used to predict the log-abundance of CO. As was observed in the beeswarm plot, only metallicity has a significant effect on the abundance. For low metallicities, we observe a large decrease in the SHAP value. The SHAP value monotonically increases with metallicity, eventually levelling off for values greater than 1.

Figure 6: A plot of the log-abundance of CO as a function of the various features. To calculate the log-abundance for a given data point, we needed to sum up the importance values of each feature for that data point. Only metallicity maintains a clear trend compared to Figure 5. For the other features, we have no discernible trend. This is due to the marginal feature importances nullifying each other.

\[{\rm NH_{4}}^{+}+{\rm e}^{-}\longrightarrow{\rm NH_{3}}+{\rm H} \tag{7}\] \[\#{\rm NH_{3}}\longrightarrow{\rm NH_{3}}\ ({\rm UV\,and\,CR\,desorption}) \tag{8}\] \[\#{\rm NH_{2}}+\#{\rm HCO}\longrightarrow{\rm NH_{3}}+{\rm CO} \tag{9}\] Some of the main destruction mechanisms throughout the hot core phase are: \[{\rm H_{3}}^{+}+{\rm NH_{3}}\longrightarrow{\rm NH_{4}}^{+}+{\rm H_{2}} \tag{10}\] \[{\rm NH_{3}}+{\rm HCO^{+}}\longrightarrow{\rm CO}+{\rm NH_{4}}^{+} \tag{11}\] \[{\rm NH_{3}}+{\rm H_{3}}{\rm O^{+}}\longrightarrow{\rm NH_{4}}^{+}+{\rm H_{2}}{\rm O} \tag{12}\] \[{\rm NH_{3}}+{\rm CN}\longrightarrow{\rm HCN}+{\rm NH_{2}} \tag{13}\] \[{\rm NH_{3}}+{\rm HNO^{+}}\longrightarrow{\rm NO}+{\rm NH_{4}}^{+} \tag{14}\] \[{\rm H^{+}}+{\rm NH_{3}}\longrightarrow{\rm NH_{3}}^{+}+{\rm H} \tag{15}\] \[{\rm NH_{3}}+{\rm HCNH^{+}}\longrightarrow{\rm HCN}+{\rm NH_{4}}^{+} \tag{16}\] \[{\rm NH_{3}}+{\rm HCNH^{+}}\longrightarrow{\rm HNC}+{\rm NH_{4}}^{+} \tag{17}\] \[{\rm NH_{3}}+{\rm S^{+}}\longrightarrow{\rm NH_{3}}^{+}+{\rm S} \tag{18}\] When the peak temperature is reached, the only formation reaction left is the gas-phase electron recombination reaction (Reaction 7). This is due to high temperature making grain-surface chemistry untenable, as most of the available grain-surface material has evaporated. In the gas-phase, the destruction routes are still active and recycle some of the gas-phase NH\({}_{3}\) and turn it back into NH\({}_{4}\)\({}^{+}\), but some of it goes on to form HCN and other species, resulting in the eventual decrease in the NH\({}_{3}\) abundance.
This is more severe for higher final hot-core temperatures, as these destruction reactions see their rates increase, resulting in even lower final NH\({}_{3}\) gas-phase abundances.

### Molecular ratios

While species may serve as useful tracers for specific energetic processes under certain density and temperature conditions, it is often more useful to consider intensity ratios between different molecules, especially in extragalactic environments (Viti, 2017; Imanishi et al., 2019; Butterworth et al., 2022). Tracer ratios are often considered in observations to cancel out the beam filling factor. The two tracer ratios we consider are HCN/HNC and HCN/CS, both of which have been extensively studied in the literature. The former is considered a good tracer of temperature and the latter a dense gas tracer.

#### 4.2.1 HCN/HNC

We begin by considering the ratio of the abundances of HCN to HNC. The ratio of these two molecules has been extensively studied and has also been subject to a considerable amount of debate. These two molecules are of great interest, due to their high abundances, their excitation conditions, the areas in which they form, as well as the proximity of their transitions in frequency space (Pety et al., 2017; Hacar et al., 2020). Recently, this intensity ratio was suggested as a potential chemical thermometer for the ISM (Hacar et al., 2020). In Figure 11, we observe that temperature is indeed the most important feature. We observe that only the temperature and metallicity have significant impacts on the value of the ratio, with relative importance values of \(\hat{I}_{T}=0.70\) and \(\hat{I}_{m_{z}}=0.24\). The other parameters do not have much influence on the log-abundance, so they will not be discussed. Looking at the dependence plot for temperature further in Figure 12, we observe that the log-ratio increases monotonically with temperature, with there appearing to be two different temperature regimes judging by the change in gradient along the curve; this is also evident in Figure 13, which plots the log-ratio against the features. Figure 14 is a plot of the ratio (as opposed to the log-ratio) against the temperature. We fit a two-part linear function to the data. The presence of two regimes is in agreement with the literature (Graninger et al., 2014; Hacar et al., 2020). In Hacar et al. (2020), the relationship between the temperature and the ratio was described with a two-part linear function, which is what we roughly observe. The two isomers are formed in roughly equal proportions through the dissociative recombination of HCNH\({}^{+}\) (Herbst et al., 2000). As such, any deviation in the ratio from a value of 1 can be attributed to the destruction routes. The main ones considered in the literature are: \[{\rm HNC}+{\rm H}\longrightarrow{\rm HCN}+{\rm H} \tag{19}\] \[{\rm HNC}+{\rm O}\longrightarrow{\rm NH}+{\rm CO} \tag{20}\] The pre-established energy barriers for both of these reactions have been questioned (see Graninger et al. (2014) for a full discussion of this). We update these values in line with Hacar et al. (2020) and Graninger et al. (2014) to be 200 K and 20 K, respectively. The first reaction is particularly dominant at high temperature, where we have a large abundance of atomic H, whereas the second reaction is more dominant at low temperatures. However, the second reaction does not appear to be the dominant HNC reaction at low temperatures.
This can be seen in Figure 15, where we plot the fractional contribution of the reactions that are responsible for creating and destroying 99% of the HNC at each time step, alongside the temperature as a function of time. We see that it is in fact the reaction H\({}_{3}\)\({}^{+}\) + HNC \(\longrightarrow{\rm HCNH^{+}}\) + H\({}_{2}\), as well as freeze-out, that is responsible for this at low temperatures. As such, we still have an explanation for the two regimes observed, but the oxidation reaction seems to not play as important a role in our model, suggesting further study might be required. However, the inflection point in Hacar et al. (2020) is observed to be at 40 K, whereas in this work it is at 65 K. This can be explained by noting that we consider a wider variety of physical parameter combinations, whereas the other work considered the ones specific to the Orion A Cloud. As such, a quantitative comparison is difficult to make. However, it is reassuring to observe qualitative agreement. Similarly, we observe no real relationship between the cosmic ray ionisation rate and the log-ratio, which is broadly in agreement with the modelling done in Meijerink et al. (2011).

Figure 8: A plot of the SHAP values for the various features (besides the radiation field) as a function of the feature values used to predict the log-abundance of NH\({}_{3}\). We observe that temperature has an interesting relationship with the SHAP value. What we observe is that there exist three separate temperature regimes under which the final abundance is relatively constant. The abundance does show some non-monotonic variance with respect to the other features, but most of these are within 0.5 of the average value (or a multiplicative factor of 3).

Figure 7: A beeswarm plot of the various physical parameters demonstrating their relative importance in predicting the log-abundance of NH\({}_{3}\). We observe that temperature has the largest impact (\(\hat{I}_{T}=0.54\)). The temperature relationship does not seem to be monotonic. The next most important features are metallicity (\(\hat{I}_{m_{z}}=0.17\)), followed by the cosmic ray ionisation rate (\(\hat{I}_{\zeta}=0.14\)), density (\(\hat{I}_{n}=0.11\)), and the radiation field (\(\hat{I}_{\psi}=0.03\)), with the first three also not having monotonic relationships with the SHAP value.

However, there is some disagreement with observations as seen in Behrens et al. (2022), though this can be attributed to them considering a larger range of cosmic ray ionisation rates. In that paper, the ratio was found to decrease as the cosmic ray ionisation rate increased, though this was only in the presence of mechanical heating, which we do not consider here. We observe that for metallicity, we have the same "tailing-off" effect that we have observed previously, though this is only in the marginal case in Figure 12. This is the case for metallicity values between 0 and 1. Again, we can attribute this to increased competition for the individual atomic species, which results in the ratio decreasing. Bayet et al. (2012) considered a gas density of \(10^{4}\) cm\({}^{-3}\), radiation field values of 1 Habing, a cosmic ray ionisation rate of 5.0 \(\times 10^{-17}\) s\({}^{-1}\) and metallicities between 1 and 2. For metallicities between 1 and 2, we see a roughly linear marginal increase in the log-ratio. This is in line with what was observed in Bayet et al.
(2012), in which an increase in the metallicity results in a linear increase in the log-abundances of HCN and HNC, with HCN having the steeper increase with metallicity. This suggests that their ratio would also increase linearly.

#### 4.2.2 HCN/CS

We now consider another tracer, the HCN to CS ratio. This ratio has received significant interest in recent years (Izumi et al., 2013, 2016; Butterworth et al., 2022), with one of the reasons being the fact that both HCN and CS are dense gas tracers (Viti, 2017), with the HCN(4-3)/CS(2-1) ratio being a good tracer of active galactic nuclei (AGN) activity. Just as for the ratio of HCN to HNC, we now wish to obtain a sense of the relationship of the five features of interest with this ratio. We begin by considering the relative importance of the five features. Figure 16 is a beeswarm plot demonstrating this. We observe that temperature is once again the most relevant feature, followed by density, cosmic ray ionisation rate, metallicity and radiation field. We consider this more in Figures 17 and 18. There is a clear quasilinear relationship between the log-ratio and the log-density, which supports the idea that the ratio could serve as a density tracer. The cosmic ray ionisation rate and the radiation field do not appear to have discernible relationships with the ratio. We find there is not a monotonic relationship with temperature. In fact, we once again seem to observe three separate temperature regimes. Figure 17 shows the SHAP value as a function of the feature value, i.e., the marginal effect of each feature, while Figure 18 considers the abundance as a function of each feature. As we discussed earlier, the abundances plotted are derived from summing the marginal effects of all the features. We observe that for the temperature variable there are three separate regimes of interest when it comes to the log-ratio: one for below 100 K, one for between 100 and 150 K and another for above 150 K. To start off with, we plot the temporal evolution of the abundances of the two molecules and the temperature in Figure 19 for three different values of the final temperature: 47 K, 105 K and 176 K. These were plotted using UCLCHEM. Note that these temperatures are not special in any way, but they are simply chosen as examples to illustrate the points we wish to discuss. Each of these temperatures falls within one of the three different regimes we observe in Figure 17 and was taken from the dataset. We also plot a time series of the ratio in Figure 20.

Figure 9: A plot of the log-abundance of NH\({}_{3}\) as a function of the various features. To calculate the log-abundance for a given data point, we needed to sum up the importance values of each feature for that data point. We observe that only temperature maintains a clear trend relative to what we observed in Figure 8. However, we now appear to have something closer to a two-temperature regime rather than a three-temperature one. For the other features, we have no discernible trend, which can be attributed to the feature importances nullifying each other.

Figure 11: A beeswarm plot of the various physical parameters demonstrating their relative importance in predicting the log-ratio of HCN to HNC. We observe that temperature has the largest impact on the model output (\(\hat{I}_{T}=0.70\)). The fact that temperature is the most important feature is hardly surprising given that this ratio is seen as a thermometer.
The next most important features are metallicity (\(\hat{I}_{m_{z}}=0.24\)), followed by the cosmic ray ionisation rate (\(\hat{I}_{\zeta}=0.03\)), density (\(\hat{I}_{n}=0.03\)) and the radiation field (\(\hat{I}_{\psi}=0.00\)), with the first three also not having monotonic relationships with the SHAP value.

Figure 10: Top: Plot of the fractional contribution of various ammonia formation routes that contribute to 99% of the NH\({}_{3}\) formation at each time. The temperature as a function of time is also plotted. Bottom: Plot of the fractional contribution of various ammonia destruction routes that contribute to 99% of the NH\({}_{3}\) destruction at each moment in time. We only considered the top reactions that contributed to 99% of the creation or destruction to limit the number of lines we would have to plot.

Figure 12: A plot of the SHAP values for the various features (besides the radiation field) as a function of the feature values used to predict the log-ratio of HCN to HNC. We observe that temperature has an interesting relationship with the SHAP value, with there being two regimes under which the ratio increases at different rates. This is in line with what was observed in Hacar et al. (2020) and was approximated there as a two-part linear function. The relationship between the SHAP value and metallicity is similar to what we observed for other molecules.

Figure 13: A plot of the log-ratio of HCN to HNC as a function of the various features. To calculate the log-ratio for a given data point, we needed to sum up the importance values of each feature for that data point. We observe that only temperature maintains a clear trend relative to what we observed in Figure 12. For the other features, we have no discernible trend, which can be attributed to the feature importances nullifying each other.

We observe that at 47 K, we initially have a large build-up of HCN until about \(10^{5}\) years. CS is also built up, but not to the same extent. After this point, both abundances drop sharply, though the CS drops far more, leading to an increase in the value of the ratio. However, for 105 K the abundance of CS exceeds that of HCN, leading to a smaller HCN/CS ratio. This is still true for 176 K, but CS approaches HCN's abundance much more closely. In the low-temperature (\(<\)100 K) regime, the dominant destruction reaction of HCN is H\({}_{3}\)\({}^{+}\) + HCN \(\longrightarrow\) HCNH\({}^{+}\) + H\({}_{2}\). Once the maximum temperature is reached, the main formation reactions are \[{\rm HCNH}^{+}+{\rm e}^{-}\longrightarrow{\rm HCN}+{\rm H} \tag{21}\] \[{\rm CH}+{\rm NO}\longrightarrow{\rm HCN}+{\rm O} \tag{22}\] \[{\rm NH}_{3}+{\rm CN}\longrightarrow{\rm HCN}+{\rm NH}_{2}. \tag{23}\] However, the NH\({}_{3}\)-based reaction becomes less efficient over time at this temperature and is replaced by N + HCO \(\longrightarrow\) HCN + O.
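Before turning to the remaining temperature regimes, we note that fractional-contribution curves like those in Figures 10 and 15 reduce to a normalisation of per-reaction rates. A minimal sketch, assuming a hypothetical array of post-processed rates for one species, is:

```python
import numpy as np

def fractional_contributions(rates, threshold=0.99):
    """Normalise per-reaction rates at each output time and keep only the
    reactions that together account for `threshold` of the total.

    rates: array of shape (n_times, n_reactions) holding non-negative
    formation (or destruction) rates for one species -- a hypothetical
    post-processing product of a chemical code, not a UCLCHEM output format.
    """
    fractions = rates / rates.sum(axis=1, keepdims=True)
    mean_fraction = fractions.mean(axis=0)
    order = np.argsort(mean_fraction)[::-1]          # most important first
    cumulative = np.cumsum(mean_fraction[order])
    keep = order[: np.searchsorted(cumulative, threshold) + 1]
    return fractions[:, keep], keep

# Toy example: three competing destruction channels over ten output times.
rng = np.random.default_rng(1)
rates = rng.uniform(0.0, 1.0, size=(10, 3)) * np.array([5.0, 1.0, 0.1])
fractions, kept = fractional_contributions(rates)
print(kept, fractions[0])
```

The 99% cut in the sketch mirrors the threshold used for the figures in this section.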
In the mid-temperature regime (100–150 K), the major formation routes are: \[\mathrm{NH_{3}+CN}\longrightarrow\mathrm{HCN+NH_{2}} \tag{24}\] \[\mathrm{NH_{3}+HCNH^{+}}\longrightarrow\mathrm{HCN+NH_{4}{}^{+}} \tag{25}\] \[\mathrm{CH+NO}\longrightarrow\mathrm{HCN+O} \tag{26}\] \[\mathrm{HCNH^{+}+H_{2}CO}\longrightarrow\mathrm{H_{3}CO^{+}+HCN} \tag{27}\] \[\mathrm{N+HCO}\longrightarrow\mathrm{HCN+O} \tag{28}\] \[\mathrm{CN+HCO}\longrightarrow\mathrm{CO+HCN} \tag{29}\] with the major destruction routes being: \[\mathrm{H_{3}O^{+}+HCN}\longrightarrow\mathrm{HCNH^{+}+H_{2}O} \tag{30}\] \[\mathrm{HCN+CRPHOT}\longrightarrow\mathrm{CN+H} \tag{31}\] \[\mathrm{H_{3}}^{+}+\mathrm{HCN}\longrightarrow\mathrm{HCNH^{+}+H_{2}} \tag{32}\] \[\mathrm{HCN+H_{3}CO^{+}}\longrightarrow\mathrm{H_{2}CO+HCNH^{+}} \tag{33}\] \[\mathrm{CH_{3}}^{+}+\mathrm{HCN}\longrightarrow\mathrm{CH_{3}CNH^{+}+PHOTON} \tag{34}\] with the final reaction becoming less efficient after about \(2.3\times 10^{5}\) years.

Figure 14: Scatter plot of the ratio (note: not the log-ratio) as a function of the temperature. We continue to observe an inflection point at 65 K and fit a two-part linear function. Below 65 K, the trend line is \(y=0.31x+16\) (black) and above it is \(y=0.23x+21\) (red). For the sake of clarity, we have included the entirety of the second part of the red linear function to make the change in gradient easier to see.

In the high-temperature regime (>150 K), the major HCN reserves are built up until \(7.7\times 10^{4}\) years via these reactions: \[\mathrm{NH_{3}+CN}\longrightarrow\mathrm{HCN+NH_{2}} \tag{35}\] \[\mathrm{CH+NO}\longrightarrow\mathrm{HCN+O} \tag{36}\] \[\mathrm{HCNH^{+}+H_{2}CO}\longrightarrow\mathrm{H_{3}CO^{+}+HCN} \tag{37}\] \[\mathrm{HCNH^{+}+e^{-}}\longrightarrow\mathrm{HCN+H} \tag{38}\] \[\mathrm{N+HCO}\longrightarrow\mathrm{HCN+O} \tag{39}\] \[\mathrm{H+H_{2}CN}\longrightarrow\mathrm{HCN+H_{2}}. \tag{40}\] The reactions primarily responsible for the destruction are: \[\mathrm{H_{3}O^{+}+HCN}\longrightarrow\mathrm{HCNH^{+}+H_{2}O} \tag{41}\] \[\mathrm{HCN+CRPHOT}\longrightarrow\mathrm{CN+H} \tag{42}\] \[\mathrm{H_{3}}^{+}+\mathrm{HCN}\longrightarrow\mathrm{HCNH^{+}+H_{2}} \tag{43}\] The aforementioned destruction mechanisms are more efficient in the mid-temperature range than in the high-temperature range. This explains why the value of the ratio drops between 100 and 150 K. We observe a weak linear relationship between the SHAP value and the metallicity. This is in line with what has been observed previously (Davis et al., 2013). In that work, they considered molecular regions of galaxies with metallicities ranging from 0.1–0.6 and temperatures between 90 and 220 K, with the remainder of the conditions not listed in the paper. With the exception of the 200–220 K range, the listed conditions overlap with the ones in this work. What they found is that they were able to obtain a separate linear function fitting the log-ratio to the metallicity for each visual extinction value. We know that the greater the visual extinction, the greater the final density of the cloud. Furthermore, fixing the visual extinction and therefore the density fixes the final temperature that our cloud reaches during the warm-up phase.

Figure 15: Plot of the fractional contribution of various routes that contribute to 99% of the HNC destruction as a function of time. The temperature as a function of time is also plotted.
Figure 15: Plot of the fractional contribution of various routes that contribute to 99% of the HNC destruction as a function of time. The temperature as a function of time is also plotted. We only considered the top reactions that contributed to 99% of the destruction to limit the number of lines we would have to plot.

We observe that for low temperatures, the main sources of gas-phase HNC destruction are \(\mathrm{H_{3}}^{+}+\mathrm{HNC}\longrightarrow\mathrm{HCNH^{+}+H_{2}}\) as well as freeze-out onto the grains, which runs contrary to our expectation that the reaction HNC + O \(\longrightarrow\) NH + CO plays a dominant role. As the temperature increases, we observe that the main destruction mechanism is the isomerisation reaction H + HNC \(\longrightarrow\) HCN + H. Note that the increase in the fractional contribution of the freeze-out reaction after \(10^{3}\) years is not due to the increase in temperature; it is simply a numerical artefact, as the other destruction mechanisms become far smaller, which causes the fractional contribution of freeze-out to increase despite its absolute contribution being negligible.

We observe a weak linear relationship between the SHAP value and the metallicity. This is in line with what has been observed previously (Davis et al., 2013). In that work, they considered molecular regions of galaxies with metallicities ranging from 0.1-0.6, temperatures between 90 and 220 K, with the remainder of the conditions not listed in the paper. With the exception of the 200-220 K range, the listed conditions overlap with the ones in this work. What they found is that they were able to obtain a separate linear function fitting the log-ratio to the metallicity for each visual extinction value. We know that the greater the visual extinction, the greater the final density of the cloud. Furthermore, fixing the visual extinction and therefore the density fixes the final temperature that our cloud reaches during the warm-up phase. Cosmic ray ionisation rates and the radiation field are also taken to be constant in the observed galaxies. This means that each linear relationship provided in Davis et al. (2013) gives the relationship between the log-ratio and the metallicity when our other four parameters are fixed. As such, it is sensible to state that there is qualitative agreement between the linear marginal SHAP relationship for metallicity in Figure 17 and the relationships found in Davis et al. (2013), as both of these assume the other parameters are fixed. Once again, it makes little sense to compare the exact numbers, as we consider a far wider range of conditions. However, the qualitative similarity lends support to the validity of this methodology. The fact that the metallicity going to zero does not cause a tail-off in the value of the ratio suggests that there is another reaction compensating for the depletion of this ratio.

## 5 Conclusion

In this work, we present the first application of machine learning interpretability techniques to better understand the effect of various physical parameters on molecular abundances. We trained an XGBoost statistical emulator to replicate the outputs of our chemical model, UCLCHEM. From this, we used SHAP to determine a relative ranking of feature importance as well as to identify the nature of the relationships between the input parameters and the output of interest. A quantitative measure for the relative feature importance was also presented. This work essentially presents a sensitivity analysis, but it differs in many ways from previous studies. This is the first time that the concept of machine learning interpretability has been applied in astrochemistry to consider the impacts of various parameters on abundances. Our methodology offers a number of advantages. In the first instance, by training a statistical emulator to replace our forward model, UCLCHEM, we are able to significantly reduce the time taken per forward model evaluation, therefore allowing for a much larger grid to be evaluated. Additionally, we are able to quantify the relative importances of the various features as well as comment on the marginal impacts of each of the features. The main takeaways from this work for the various outputs are as follows:

* H\({}_{2}\)O and CO's gas phase abundances depend strongly on the metallicity, which we relate to the fact that a low metallicity results in the production of each molecule being constrained by the amount of the less abundant atomic element (O and C, respectively).
* NH\({}_{3}\) has a strong temperature dependence. There exist two temperature ranges (\(\lesssim\) 100 K and \(\gtrsim\) 100 K) for which the abundance is constant. We are able to relate this to the chemical reactions in our network and find that the increased temperature results in an increase in the destruction pathways.

* For the HCN/HNC ratio, we find that the dominant low-temperature destruction route of HNC is H\({}_{3}{}^{+}\) + HNC \(\longrightarrow\) HCNH\({}^{+}\) + H\({}_{2}\) instead of HNC + O \(\longrightarrow\) NH + CO. We also find a linear relationship between the metallicity and the log-ratio in the range 1-2, which matches what we find in Bayet et al. (2012).

* For the HCN/CS ratio, we observe that it serves as a density tracer, as expected. Furthermore, we once again observe three separate regimes for the temperature dependence, which we are able to relate to the chemistry.

Another point of interest is that the metallicity parameter often, but not always, leads to a "tailing-off effect" in the abundance in the limit of the metallicity going to zero. This was the case for H\({}_{2}\)O, CO and the HCN/HNC ratio. However, we did not observe this for NH\({}_{3}\) and the HCN/CS ratio. This suggests that despite the scaling down of the metals, there are other reactions that compensate by creating more of the respective molecule from the limited resources. Further work should consider this in more depth, potentially by applying SHAP to a reaction network. Throughout this work, we have observed similarities between our results and what has been discussed in the literature. This is encouraging. However, it is difficult to make direct quantitative comparisons, as we consider a wide range of physical parameter combinations. On the other hand, the literature we cited considered actual observations. A follow-up study would need to sample the training data for the machine learning model more precisely in order to be able to better model and understand the relationships between inputs and outputs for a specific astronomical object.

## Acknowledgements

We thank the anonymous referee for their constructive comments that improved the quality of the manuscript. J. Heyl is funded by an STFC studentship in Data-Intensive Science (grant number ST/P006736/1). S. Viti acknowledges support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 811312 for the project "Astro-Chemical Origins" (ACO). This work was also supported by European Research Council (ERC) Advanced Grant MOPPEX 833460.

## Data Availability

The data underlying this article are available in the article and in its online supplementary material.
2310.20146
A Note on the Convergence of the OGAProx
In this note, we consider the Optimistic Gradient Ascent-Proximal Point Algorithm (OGAProx) proposed by Boţ, Csetnek, and Sedlmayer for solving a saddle-point problem associated with a convex-concave function constructed by a nonsmooth coupling function and one regularizing function. We first provide a counterexample to show that the convergence of the minimax gap function, evaluated at the ergodic sequences, is insufficient to demonstrate the convergence of the function values evaluated at the ergodic sequences. Then under the same assumptions used by Boţ et al. for proving the convergence of the minimax gap function, we present convergence results for the function values evaluated at the ergodic sequences generated by the OGAProx with convergence rates of order $\mathcal{O}\left(\frac{1}{k}\right)$, $\mathcal{O}\left(\frac{1}{k^{2}}\right)$, and $\mathcal{O}\left(\theta^{k}\right)$ with $\theta \in (0,1)$ for the associated convex-concave coupling function being convex-concave, convex-strongly concave, and strongly convex-strongly concave, respectively.
Hui Ouyang
2023-10-31T03:30:00Z
http://arxiv.org/abs/2310.20146v1
# A Note on the Convergence of the OGAProx ###### Abstract In this note, we consider the Optimistic Gradient Ascent-Proximal Point Algorithm (OGAProx) proposed by Boţ, Csetnek, and Sedlmayer for solving a saddle-point problem associated with a convex-concave function constructed by a nonsmooth coupling function and one regularizing function. We first provide a counterexample to show that the convergence of the minimax gap function, evaluated at the ergodic sequences, is insufficient to demonstrate the convergence of the function values evaluated at the ergodic sequences. Then under the same assumptions used by Boţ et al. for proving the convergence of the minimax gap function, we present convergence results for the function values evaluated at the ergodic sequences generated by the OGAProx with convergence rates of order \(\mathcal{O}\left(\frac{1}{k}\right)\), \(\mathcal{O}\left(\frac{1}{k^{2}}\right)\), and \(\mathcal{O}\left(\theta^{k}\right)\) with \(\theta\in(0,1)\) for the associated convex-concave coupling function being convex-concave, convex-strongly concave, and strongly convex-strongly concave, respectively. **2020 Mathematics Subject Classification:** Primary 90C25, 47H05; Secondary 47J25, 90C30. **Keywords:** Convex-Concave Saddle-Point Problems, Proximity Mapping, Gradient Ascent, Convergence, Linear Convergence

## 1 Introduction

Throughout this work, let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be real Hilbert spaces, and let \(f:\mathcal{H}_{1}\times\mathcal{H}_{2}\to\mathbf{R}\cup\{-\infty,+\infty\}\) satisfy that \((\forall y\in\mathcal{H}_{2})\ f(\cdot,y):\mathcal{H}_{1}\to\mathbf{R}\cup\{+\infty\}\) is proper, convex, and lower semicontinuous, and that \((\forall x\in\mathcal{H}_{1})\ f(x,\cdot):\mathcal{H}_{2}\to\mathbf{R}\cup\{-\infty\}\) is proper, concave, and upper semicontinuous. We say \((x^{*},y^{*})\in\mathcal{H}_{1}\times\mathcal{H}_{2}\) is a _saddle-point_ of \(f\) if \[(\forall(x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2})\quad f(x^{*},y)\leq f(x^{*},y^{*})\leq f(x,y^{*}). \tag{1.1}\] In this work, we assume that there exists at least one saddle-point of \(f\), and we aim to solve the following _convex-concave saddle-point problem_: \[\operatorname{maximize}_{y\in\mathcal{H}_{2}}\operatorname{minimize}_{x\in\mathcal{H}_{1}}f(x,y). \tag{1.2}\] In the paper [1] by Boţ, Csetnek, and Sedlmayer, the authors proposed the Optimistic Gradient Ascent-Proximal Point Algorithm (OGAProx) for solving a saddle-point problem associated with a convex-concave function with a nonsmooth coupling function and one regularizing function. In particular, the authors proved convergence results on the sequence of iterations and on the minimax gap function evaluated at the ergodic sequences for the OGAProx. In this work, under the same assumptions used by Boţ et al. for showing the convergence of the minimax gap function, we complement these results with the convergence of the function values evaluated at the ergodic sequences generated by the OGAProx. The rest of the work is organized as follows. In Section 2, we present an example showing that the convergence of the minimax gap function evaluated at the ergodic sequences does not imply the convergence of the function values evaluated at the ergodic sequences for the OGAProx. In Section 3, under the same assumptions used by Boţ et al. for showing the convergence of the minimax gap function, we establish the convergence of the function values evaluated at the ergodic sequences generated by the OGAProx.
In particular, as the authors did in [1], we consider three cases of the associated convex-concave function (convex-concave, convex-strongly concave, and strongly convex-strongly concave), and show the convergence of the function values evaluated at the ergodic sequences generated by the OGAProx with convergence rates of order \(\mathcal{O}\left(\frac{1}{k}\right)\), \(\mathcal{O}\left(\frac{1}{k^{2}}\right)\), and \(\mathcal{O}\left(\theta^{k}\right)\) with \(\theta\in(0,1)\), respectively. To end this section, we provide some notation frequently used in this work below. We use the convention that \(\mathbf{N}:=\{0,1,2,\cdots\}\) is the set of all nonnegative integers. \(\mathbf{R}\), \(\mathbf{R}_{+}\), and \(\mathbf{R}_{++}\) are the set of all real numbers, the set of all nonnegative real numbers, and the set of all positive real numbers, respectively. Let \(\mathcal{H}\) be a real Hilbert space. Let \(g:\mathcal{H}\to\mathbf{R}\cup\{+\infty\}\) be a proper, convex, and lower semicontinuous function. The _proximity operator \(\mathrm{Prox}_{g}\) of \(g\)_ is defined by \[\mathrm{Prox}_{g}:\mathcal{H}\to\mathcal{H}:x\mapsto\mathrm{argmin}_{y\in\mathcal{H}}\left(g(y)+\frac{1}{2}\left\|x-y\right\|^{2}\right).\]

## 2 Counterexample

In this section, we consider the function \(f:\mathbf{R}^{2}\to\mathbf{R}\) defined as \[(\forall(x,y)\in\mathbf{R}^{2})\quad f(x,y)=xy.\] It is easy to see that the saddle-point of \(f\) is \((x^{*},y^{*})=(0,0)\). Moreover, we have that \[(\forall(x,y)\in\mathbf{R}^{2})\quad f(x,y^{*})-f(x^{*},y)=0-0=0=f(0,0)=f(x^{*},y^{*}). \tag{2.1}\] Note that \((\forall(\bar{x},\bar{y})\in\mathbf{R}^{2})\)\(\nabla_{x}f(\bar{x},\bar{y})=\bar{y}\) and \(\nabla_{y}f(\bar{x},\bar{y})=\bar{x}\). Then to satisfy [1, Inequality (2)] (that is also (3.3) below), we can take \(L_{yx}=1\) and \(L_{yy}=0\). Clearly, this function with \((\forall(x,y)\in\mathbf{R}^{2})\)\(\Phi(x,y)=xy\) and \(g(y)\equiv 0\) satisfies all assumptions of the convex-concave function presented in [1, Section 1.1] (which is provided in Assumption 1 below). Let \((x^{0},y^{0})\) be in \(\mathbf{R}^{2}\). Based on [1, Section 1.2], the sequence of iterations \(\big{(}(x^{k},y^{k})\big{)}_{k\in\mathbf{N}}\) generated by the OGAProx (see also (3.4) below for details) is: for every \(k\in\mathbf{N}\), \[y^{k+1} =\mathrm{Prox}_{\sigma_{k}0}\left(y^{k}+\sigma_{k}\left((1+\theta_{k})x^{k}-\theta_{k}x^{k-1}\right)\right) \tag{2.2a}\] \[=\mathrm{argmin}_{y\in\mathbf{R}}\left(0+\frac{1}{2\sigma_{k}}\left|y^{k}+\sigma_{k}\left((1+\theta_{k})x^{k}-\theta_{k}x^{k-1}\right)-y\right|^{2}\right)\] (2.2b) \[=y^{k}+\sigma_{k}\left((1+\theta_{k})x^{k}-\theta_{k}x^{k-1}\right) \tag{2.2c}\] \[x^{k+1}=\operatorname{Prox}_{\tau_{k}f(\cdot,y^{k+1})}x^{k}=\operatorname{argmin}_{x\in\mathbf{R}}xy^{k+1}+\frac{1}{2\tau_{k}}\left|x-x^{k}\right|^{2}=x^{k}-\tau_{k}y^{k+1}. \tag{2.3}\] By continuing to apply the formulae above, we know that \[(\forall k\in\mathbf{N})\quad y^{k+1}=y^{0}+\sum_{i=0}^{k}\sigma_{i}\left((1+\theta_{i})x^{i}-\theta_{i}x^{i-1}\right)\quad\text{and}\quad x^{k+1}=x^{0}-\sum_{i=0}^{k}\tau_{i}y^{i+1}. \tag{2.4}\] Set \[(\forall k\in\mathbf{N}\smallsetminus\{0\})\quad\hat{x}_{k}:=\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}x^{j+1}\quad\text{and}\quad\hat{y}_{k}:=\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}y^{j+1}, \tag{2.5}\] where \((\forall k\in\mathbf{N})\)\(t_{k}\in\mathbf{R}_{++}\).
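As a quick numeric illustration of the recursion (2.2)-(2.4) and the ergodic averages (2.5), the sketch below (ours, not from [1]) runs the iteration for \(f(x,y)=xy\) under two parameter choices: the constant parameters of part (ii) of Example 2.1 below, and the rapidly vanishing primal step sizes (2.6) of part (iii).

```python
import numpy as np

def run(num_steps, sigma, theta, tau):
    # sigma, theta: functions of k; tau: function of (k, y^{k+1}); cf. (2.2)-(2.3)
    x, y = [1.0, 1.0], [1.0, 1.0]          # x[0] = x^{-1}, x[1] = x^0, etc.
    for k in range(num_steps):
        s, th = sigma(k), theta(k)
        y_next = y[-1] + s * ((1 + th) * x[-1] - th * x[-2])   # (2.2c)
        x_next = x[-1] - tau(k, y_next) * y_next               # (2.3)
        y.append(y_next); x.append(x_next)
    # t_k is constant in both regimes, so (2.5) reduces to plain averages
    return np.mean(x[2:]), np.mean(y[2:])

# constant parameters as in part (ii): f(xhat_k, yhat_k) -> f(x*, y*) = 0
xh, yh = run(20000, lambda k: 0.1, lambda k: 1.0, lambda k, y: 0.1)
print(xh * yh)

# vanishing primal steps (2.6) as in part (iii): no convergence of the values
eps = 0.1
xh, yh = run(20000, lambda k: eps, lambda k: eps,
             lambda k, y: eps / (y * (k + 1) ** 2) if y != 0 else 0.0)
print(xh * yh)                             # stays above (1 - eps**2) / 2 = 0.495
```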
We show below in Example 2.1 that the convergence \(\lim_{k\to\infty}f(\hat{x}_{k},y^{*})-f(x^{*},\hat{y}_{k})=f(x^{*},y^{*})\) does not imply the convergence of \(\lim_{k\to\infty}f(\hat{x}_{k},\hat{y}_{k})=f(x^{*},y^{*})\). **Example 2.1**.: Let \(f:\mathbf{R}^{2}\to\mathbf{R}\) defined as \[(\forall(x,y)\in\mathbf{R}^{2})\quad f(x,y)=xy.\] Let \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) be defined as (2.2) and (2.3) with \((x^{-1},y^{-1})=(x^{0},y^{0})=(1,1)\). Let \((\forall k\in\mathbf{N})\)\(t_{k}\equiv t_{0}\in\mathbf{R}_{++}\). Then, via (2.5), we have that \[(\forall k\in\mathbf{N}\smallsetminus\{0\})\quad\hat{x}_{k}=\frac{1}{k}\sum_{j= 0}^{k-1}x^{j+1}\quad\text{and}\quad\hat{y}_{k}=\frac{1}{k}\sum_{j=0}^{k-1}y^{ j+1}.\] Moreover, the following statements hold. * \((\forall k\in\mathbf{N}\smallsetminus\{0\})\)\(f(\hat{x}_{k},y^{*})-f(x^{*},\hat{y}_{k})\equiv 0\). Consequently, \(\lim_{k\to\infty}f(\hat{x}_{k},y^{*})-f(x^{*},\hat{y}_{k})=f(x^{*},y^{*})\). * Suppose \((\forall k\in\mathbf{N})\)\(t_{k}\equiv t_{0}=1\), \(\tau_{k}\equiv\tau\in\mathbf{R}_{++}\), \(\sigma_{k}\equiv\sigma\in\mathbf{R}_{++}\), and \(\theta_{k}\equiv 1\). Then \(f(\hat{x}_{k},\hat{y}_{k})=\frac{1}{k^{2}}\frac{1}{\tau\sigma}\left(\left(x^{0 }-x^{k}-\tau y^{k+1}\right)\left(y^{k+1}-y^{0}-\sigma x^{k}\right)\right)\to f (x^{*},y^{*})\) as \(k\to\infty\). * Let \(\epsilon\in(0,\frac{6}{2\pi^{2}})\). Suppose that \[(\forall k\in\mathbf{N})\quad\sigma_{k}=\epsilon,\theta_{k}=\epsilon,\text{ and }\tau_{k}=\begin{cases}\frac{\epsilon}{y^{k+1}(k+1)^{2}}&\quad\text{if }y^{k+1}\neq 0,\\ 0&\quad\text{otherwise}.\end{cases}\] (2.6) Then we have the following assertions. * \((\forall k\in\mathbf{N})\)\(x^{k}>\frac{1}{2}\) and \(y^{k}>1-\epsilon^{2}>0\). * \((k\in\mathbf{N}\smallsetminus\{0\})\)\(f(\hat{x}_{k},\hat{y}_{k})=\hat{x}_{k}\hat{y}_{k}>\frac{1-\epsilon^{2}}{2}>0\). Consequently, \(f(\hat{x}_{k},\hat{y}_{k})\not\to f(x^{*},y^{*})\) as \(k\to\infty\). Proof.: (i): This is clear from (2.1). (ii): Note that this assumption is consistent with that of [1, Theorem 9]. Hence, due to results from [1, Theorem 9], we know that under this assumption, the sequence of iterations \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) is bounded. Combine (2.4) with our assumptions of the parameters to deduce that for every \(k\in\mathbf{N}\), \[y^{k+1} =y^{0}+\sigma\sum_{i=0}^{k}x^{i}+\sigma\theta\sum_{i=0}^{k}\left( x^{i}-x^{i-1}\right)=y^{0}+\sigma\sum_{i=0}^{k}x^{i}+\sigma\theta\left(x^{k}-x^{0 }\right),\text{ and}\] \[x^{k+1} =x^{0}-\tau\sum_{i=0}^{k}y^{i+1},\] which yield that \[x^{0}+\sum_{j=0}^{k-1}x^{j+1}=\sum_{i=0}^{k}x^{i}=\frac{1}{\sigma }\left(y^{k+1}-y^{0}\right)+\theta\left(x^{0}-x^{k}\right),\text{ and} \tag{2.7a}\] \[\sum_{i=0}^{k-1}y^{i+1}+y^{k+1}=\sum_{i=0}^{k}y^{i+1}=\frac{1}{ \tau}\left(x^{0}-x^{k+1}\right). 
\tag{2.7b}\] Therefore, we obtain that for every \(k\in\mathbf{N}\), \[\hat{x}_{k} =\frac{1}{k}\sum_{j=0}^{k-1}x^{j+1}=\frac{1}{k}\left(\frac{1}{ \sigma}\left(y^{k+1}-y^{0}\right)+\left(x^{0}-x^{k}\right)-x^{0}\right)=\frac{ 1}{\sigma k}\left(y^{k+1}-y^{0}\right)-\frac{1}{k}x^{k};\] \[\hat{y}_{k} =\frac{1}{k}\sum_{j=0}^{k-1}y^{j+1}=\frac{1}{k}\left(\frac{1}{ \tau}\left(x^{0}-x^{k}\right)-y^{k+1}\right).\] Therefore, \[f(\hat{x}_{k},\hat{y}_{k})=\hat{x}_{k}\hat{y}_{k}=\frac{1}{k^{2}}\frac{1}{ \tau\sigma}\left(\left(x^{0}-x^{k}-\tau y^{k+1}\right)\left(y^{k+1}-y^{0}- \sigma x^{k}\right)\right),\] which, combined with the boundedness of \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\), guarantees that \(\lim_{k\to\infty}f(\hat{x}_{k},\hat{y}_{k})=0=f(x^{*},y^{*})\). (iii): Note that via (2.2) for every \(k\in\mathbf{N}\), we are able to take \(\sigma_{k}\) and \(\theta_{k}\) based on the values of \((\forall i\in\{0,\cdots,k\})\)\(x^{i}\). Similarly, according to (2.3), for every \(k\in\mathbf{N}\), we are able to take \(\tau_{k}\) based on the values of \((\forall i\in\{0,\cdots,k,k+1\})\)\(y^{i}\). Therefore, our assumption is practical. (iii)(a): We prove below by induction that \((\forall k\in\mathbf{N})\)\(x^{k}>\frac{1}{2}\) and \(y^{k}>1-\epsilon^{2}\). Recall that \((x^{-1},y^{-1})=(x^{0},y^{0})=(1,1)\). Combine (2.4) and (2.6) to derive that \[y^{1}=y^{0}+\epsilon x^{0}+\epsilon^{2}\cdot 0=1+\epsilon>1-\epsilon^{2}>0 \text{ and }x^{1}=x^{0}-\frac{\epsilon}{y^{1}}y^{1}=x^{0}-\epsilon=1-\epsilon>\frac{1}{2}. \tag{2.8}\] Let \(N\in\mathbf{N}\smallsetminus\{0\}\). Assume that \((\forall i\in\{0,1,\cdots,N\})\)\(x^{i}>\frac{1}{2}\) and \(y^{i}>1-\epsilon^{2}>0\). Applying (2.4) and (2.6) again, we have that \[y^{N+1} =y^{0}+\epsilon\sum_{i=0}^{N}x^{i}+\epsilon^{2}\sum_{i=0}^{N}\left( x^{i}-x^{i-1}\right)\] \[=y^{0}+\epsilon\sum_{i=0}^{N}x^{i}+\epsilon^{2}\left(x^{N}-x^{0}\right)\] \[=y^{0}-\epsilon^{2}x^{0}+\epsilon\sum_{i=0}^{N}x^{i}+\epsilon^{2} x^{N}\] \[>y^{0}-\epsilon^{2}x^{0}=1-\epsilon^{2}>0.\] Employing this result, (2.4), and (2.6), we derive that \[x^{N+1} =x^{0}-\sum_{i=0}^{N}\tau_{i}y^{i+1}=x^{0}-\epsilon\sum_{i=0}^{N} \frac{1}{y^{i+1}(i+1)^{2}}y^{i+1}\] \[=x^{0}-\epsilon\sum_{i=0}^{N}\frac{1}{(i+1)^{2}}>x^{0}-\epsilon \sum_{i\in\mathbf{N}}\frac{1}{(i+1)^{2}}=1-\epsilon\frac{\pi^{2}}{6}>1-\frac{ 1}{2}=\frac{1}{2}.\] Altogether, we proved that \((\forall k\in\mathbf{N})\)\(x^{k}>\frac{1}{2}\) and \(y^{k}>1-\epsilon^{2}>0\) by induction. (iii)(b): According to (iii)(a), we have that for every \(k\in\mathbf{N}\smallsetminus\{0\}\), \[\hat{x}_{k}=\frac{1}{k}\sum_{j=0}^{k-1}x^{j+1}>\frac{1}{2}\quad\text{and}\quad \hat{y}_{k}=\frac{1}{k}\sum_{j=0}^{k-1}y^{j+1}>1-\epsilon^{2},\] which guarantees that \[(k\in\mathbf{N}\smallsetminus\{0\})\quad f(\hat{x}_{k},\hat{y}_{k})=\hat{x}_{k} \hat{y}_{k}>\frac{1-\epsilon^{2}}{2}>0.\] This shows that it is impossible to have \(f(\hat{x}_{k},\hat{y}_{k})\to f(x^{*},y^{*})\) as \(k\to\infty\) in this case since \(f(x^{*},y^{*})=0\). \(\blacksquare\) ## 3 Convergence of OGAProx In this section, we shall work on the convergence of the values of function evaluated at the ergodic sequences constructed by the OGAProx. In particular, we will consider three cases of the associated function: convex-concave, convex-strongly concave, and strongly convex-strongly concave. The following result will play an essential role in proving our convergence results later. 
**Fact 3.1**.: [2, Lemma 2.5] _Let \(f:\mathcal{H}_{1}\times\mathcal{H}_{2}\to\mathbf{R}\cup\{-\infty,+\infty\}\) satisfy that \((\forall y\in\mathcal{H}_{2})\)\(f(\cdot,y)\) is convex and \((\forall x\in\mathcal{H}_{1})\)\(f(x,\cdot)\) is concave. Let \((x^{*},y^{*})\) be a saddle-point of \(f\), let \(((x^{k},y^{k}))_{k\in\mathbf{N}}\) be in \(\mathcal{H}_{1}\times\mathcal{H}_{2}\), and let \((\forall k\in\mathbf{N})\)\(t_{k}\in\mathbf{R}_{+}\) with \(t_{0}\in\mathbf{R}_{++}\). Set_ \[(\forall k\in\mathbf{N})\quad\hat{x}_{k}:=\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_ {j=0}^{k-1}t_{j}x^{j+1}\quad\text{and}\quad\hat{y}_{k}:=\frac{1}{\sum_{i=0}^{k -1}t_{i}}\sum_{j=0}^{k-1}t_{j}y^{j+1}. \tag{3.1}\] _Then we have that for every \(k\in\mathbf{N}\),_ \[f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*}) \leq\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}\left(f(x^{j +1},\hat{y}_{k})-f(x^{*},y^{j+1})\right); \tag{3.2a}\] \[f(x^{*},y^{*})-f(\hat{x}_{k},\hat{y}_{k}) \leq\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}\left(f(x^ {j+1},y^{*})-f(\hat{x}_{k},y^{j+1})\right). \tag{3.2b}\] _Consequently, if_ \[\lim_{k\to\infty}\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t _{j}\left(f(x^{j+1},\hat{y}_{k})-f(x^{*},y^{j+1})\right)=0,\text{ and}\] \[\lim_{k\to\infty}\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t _{j}\left(f(x^{j+1},y^{*})-f(\hat{x}_{k},y^{j+1})\right)=0,\] _then \(\lim_{k\to\infty}f(\hat{x}_{k},\hat{y}_{k})=f(x^{*},y^{*})\)._ In the rest of this work, let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be real Hilbert spaces, let \(\Phi:\mathcal{H}_{1}\times\mathcal{H}_{2}\to\mathbf{R}\cup\{+\infty\}\) be a coupling function with \(\operatorname{dom}\Phi:=\{(x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2}\ :\ \Phi(x,y)<+\infty\}\neq\varnothing\), let \(g:\mathcal{H}_{2}\to\mathbf{R}\cup\{+\infty\}\) be a regulariser, and let \(f:\mathcal{H}_{1}\times\mathcal{H}_{2}\to\mathbf{R}\cup\{-\infty,+\infty\}\) be defined as \[(\forall(x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2})\quad f(x,y):=\Phi(x,y)- g(y).\] Let \((x^{*},y^{*})\) be a saddle-point of \(f\). Set \(\operatorname{P}_{\mathcal{H}_{1}}(\operatorname{dom}\Phi):=\{u\in\mathcal{H }_{1}\ :\ \exists y\in\mathcal{H}_{2}\text{ such that }(u,y)\in \operatorname{dom}\Phi\}\). We copy assumptions presented in [1, Section 1.1] below. **Assumption 1**.: _Henceforth, we have the following assumptions given in [1, Section 1.1]._ 1. \(g\) _is proper, lower semicontinuous, and convex with modulus_ \(\nu\in\mathbf{R}_{+}\) _, i.e.,_ \(g-\frac{\nu}{2}\left\|\cdot\right\|^{2}\) _is convex;_ 2. \((\forall y\in\operatorname{dom}g)\ \Phi(\cdot,y):\mathcal{H}_{1}\to\mathbf{R}\cup\{+\infty\}\) _is proper, convex, and lower semicontinuous;_ 3. \(\operatorname{P}_{\mathcal{H}_{1}}(\operatorname{dom}\Phi)\) _is closed, and_ \((\forall x\in\operatorname{P}_{\mathcal{H}_{1}}(\operatorname{dom}\Phi))\) _we have that_ \(\operatorname{dom}\Phi(x,\cdot)=\mathcal{H}_{2}\) _and_ \(\Phi(x,\cdot):\mathcal{H}_{2}\to\mathbf{R}\) _is convex and Frechet differentiable;_ 4. _There exist_ \(L_{yx}\in\mathbf{R}_{+}\) _and_ \(L_{yy}\in\mathbf{R}_{+}\) _such that for all_ \((x,y)\) _and_ \((x^{\prime},y^{\prime})\) _in_ \(\operatorname{P}_{\mathcal{H}_{1}}(\operatorname{dom}\Phi)\times\operatorname {dom}g\)_,_ \[\left\|\nabla_{y}\Phi(x,y)-\nabla_{y}\Phi(x^{\prime},y^{\prime})\right\|\leq L_{ yx}\left\|x-x^{\prime}\right\|+L_{yy}\left\|y-y^{\prime}\right\|.\] (3.3) We state the Optimistic Gradient Ascent-Proximal Point Algorithm (OGAProx) proposed in [1, Section 1.2] below. 
Let \((x^{0},y^{0})\) be in \(\operatorname{P}_{\mathcal{H}_{1}}(\operatorname{dom}\Phi)\times\operatorname {dom}g\) and set \((x^{-1},y^{-1}):=(x^{0},y^{0})\). For every \(k\in\mathbf{N}\), \[y^{k+1} =\operatorname{Prox}_{\sigma_{k}g}\left(y^{k}+\sigma_{k}\left[(1+ \theta_{k})\nabla_{y}\Phi(x^{k},y^{k})-\theta_{k}\nabla_{y}\Phi(x^{k-1},y^{k-1 })\right]\right), \tag{3.4a}\] \[x^{k+1} =\operatorname{Prox}_{\tau_{k}f(\cdot,y^{k+1})}(x^{k}), \tag{3.4b}\] where \((\sigma_{k})_{k\in\mathbf{N}}\) and \((\tau_{k})_{k\in\mathbf{N}}\) are in \(\mathbf{R}_{++}\), and \((\theta_{k})_{k\in\mathbf{N}}\) is in \((0,1]\). From now on, \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) is the sequence of iterations generated by the OGAProx presented in (3.4) above. Let \((\forall k\in\mathbf{N})\ t_{k}:=\frac{\theta_{0}}{\theta_{0}\theta_{1}\cdots \theta_{k}}\). Set \[(\forall k\in\mathbf{N}\smallsetminus\{0\})\quad\hat{x}_{k}:=\frac{1}{\sum_{i=0 }^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}x^{j+1}\quad\text{and}\quad\hat{y}_{k}:=\frac {1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}y^{j+1}. \tag{3.5}\] Moreover, we denote \[(\forall k\in\mathbf{N})\quad q_{k}:=\nabla_{y}\Phi(x^{k},y^{k})-\nabla_{y} \Phi(x^{k-1},y^{k-1}). \tag{3.6}\] ### Convex-(Strongly) Concave Setting In this subsection, we consider the convergence of the sequence of iterations generated by the OGAProx under the assumption that the coupling function \(\Phi\) is convex-concave and that the function \(g\) is convex with modulus \(\nu\geq 0\). Note that if \(\nu=0\), then the function \((\forall(x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2})\ f(x,y)=\Phi(x,y)-g(y)\) is convex-concave, and that if \(\nu>0\), then the function \((\forall(x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2})\ f(x,y)=\Phi(x,y)-g(y)\) is convex-strongly concave. Throughout this subsection, set for every \((x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2}\) and for every \(k\in\mathbf{N}\), \[a_{k}(x,y):= \frac{1}{2\tau_{k}}\left\|x-x^{k}\right\|^{2}+\frac{1}{2\sigma_{ k}}\left\|y-y^{k}\right\|^{2}+\theta_{k}\left\langle q_{k},y^{k}-y\right\rangle+ \theta_{k}\frac{L_{yx}}{2\alpha_{k}}\left\|x^{k}-x^{k-1}\right\|^{2}\] \[+\theta_{k}\frac{L_{yy}}{2}\left\|y^{k}-y^{k-1}\right\|^{2},\] \[b_{k+1}(x,y):= \frac{1}{2\tau_{k}}\left\|x-x^{k+1}\right\|^{2}+\frac{1}{2}\left( \frac{1}{\sigma_{k}}+\nu\right)\left\|y-y^{k+1}\right\|^{2}+\left\langle q_{k +1},y^{k+1}-y\right\rangle\] \[+\frac{L_{yx}}{2\alpha_{k+1}}\left\|x^{k+1}-x^{k}\right\|^{2}+ \frac{L_{yy}}{2}\left\|y^{k+1}-y^{k}\right\|^{2},\] \[c_{k}:= \frac{1}{2}\left(\frac{1}{\tau_{k}}-\frac{L_{yx}}{\alpha_{k+1}} \right)\left\|x^{k+1}-x^{k}\right\|^{2}+\frac{1}{2}\left(\frac{1}{\sigma_{k}} -L_{yy}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\right)\left\|y^{k+1}-y^ {k}\right\|^{2}.\] We borrow some results proved in [1] in the following fact, which will be used in our proofs later. 
**Fact 3.2**.: _Let \(\nu\geq 0\), let \(c_{\alpha}>L_{yx}\geq 0\), let \(\theta_{0}=1\), and let \(\tau_{0}\) and \(\sigma_{0}\) be in \(\mathbf{R}_{++}\) such that_ \[\left(c_{\alpha}L_{yx}\tau_{0}+2L_{yy}\right)\sigma_{0}<1.\] _Define_ \[(\forall k\in\mathbf{N})\quad\theta_{k+1}:=\frac{1}{\sqrt{1+\nu\sigma_{k}}},\quad\tau_{k+1}:=\frac{\tau_{k}}{\theta_{k+1}},\quad\text{and}\quad\sigma_{k+1}:=\theta_{k+1}\sigma_{k}.\] _Set_ \[(\forall k\in\mathbf{N})\quad\alpha_{k}:=\begin{cases}c_{\alpha}\tau_{0}&\text{if }k=0,\\ c_{\alpha}\tau_{k-1}&\text{if }k\geq 1,\end{cases}\] _and_ \[\delta:=\min\left\{1-\frac{L_{yx}}{c_{\alpha}},1-\left(c_{\alpha}L_{yx}\tau_{0}+2L_{yy}\right)\sigma_{0}\right\}.\] _Then the following statements hold._ 1. [1, Proposition 6] \((\forall k\in\mathbf{N})\)\(\tau_{k+1}\geq\frac{\tau_{k}}{\theta_{k+1}}\) and \(\sigma_{k+1}\geq\frac{\theta_{k}}{\theta_{k+1}(1+\nu\sigma_{k})}\). Furthermore, \[\frac{1-\delta}{\tau_{k}}\geq\frac{L_{yx}}{\alpha_{k+1}}\quad\text{and}\quad\frac{1-\delta}{\sigma_{k}}\geq L_{yx}\alpha_{k}\theta_{k}+L_{yy}(1+\theta_{k}).\] 2. [1, Proposition 6] \((\forall k\in\mathbf{N})\)\(t_{k}=\frac{\theta_{0}}{\theta_{0}\theta_{1}\cdots\theta_{k}}=\frac{\tau_{k}}{\tau_{0}}\). 3. [1, Inequality (17)] _Let \((x,y)\) be in \(\mathcal{H}_{1}\times\mathcal{H}_{2}\). Then for every \(k\in\mathbf{N}\),_ \[\left|\left\langle q_{k},y^{k}-y\right\rangle\right|\leq\frac{L_{yx}}{2}\left(\alpha_{k}\left\|y-y^{k}\right\|^{2}+\frac{1}{\alpha_{k}}\left\|x^{k}-x^{k-1}\right\|^{2}\right)+\frac{L_{yy}}{2}\left(\left\|y-y^{k}\right\|^{2}+\left\|y^{k}-y^{k-1}\right\|^{2}\right)\!.\] 4. [1, Inequality (18)] _Let \((x,y)\) be in \(\mathcal{H}_{1}\times\mathcal{H}_{2}\). Then \((\forall k\in\mathbf{N})\)\(f(x^{k+1},y)-f(x,y^{k+1})\leq a_{k}(x,y)-b_{k+1}(x,y)-c_{k}\)._ 5. [1, Inequality (24)] _Let \((x,y)\) be in \(\mathcal{H}_{1}\times\mathcal{H}_{2}\). Then \((\forall k\in\mathbf{N}\smallsetminus\{0\})\)\(\sum_{i=0}^{k-1}t_{i}\left(f(\hat{x}_{k},y)-f(x,\hat{y}_{k})\right)\)\(\leq\)\(\frac{t_{0}}{2\tau_{0}}\left\|x-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y-y^{0}\right\|^{2}-\frac{t_{k}}{2\tau_{k}}\left\|x-x^{k}\right\|^{2}-\frac{t_{k}}{2}\left(\frac{1}{\sigma_{k}}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\right)\left\|y-y^{k}\right\|^{2}\)._ 6. [1, Inequality (23) and the expression above] _Let \((x,y)\) be in \(\mathcal{H}_{1}\times\mathcal{H}_{2}\). Then \((\forall k\in\mathbf{N})\)\(t_{k}b_{k+1}(x,y)\geq t_{k+1}a_{k+1}(x,y)\). Furthermore,_ \[(\forall k\in\mathbf{N})\quad c_{k}\geq\delta\left(\frac{1}{2\tau_{k}}\left\|x^{k+1}-x^{k}\right\|^{2}+\frac{1}{2\sigma_{k}}\left\|y^{k+1}-y^{k}\right\|^{2}\right)\geq 0.\] **Proposition 3.3**.: _Let \(\nu\geq 0\), let \(c_{\alpha}>L_{yx}\geq 0\), let \(\theta_{0}=1\), and let \(\tau_{0}\) and \(\sigma_{0}\) be in \(\mathbf{R}_{++}\) such that_ \[\left(c_{\alpha}L_{yx}\tau_{0}+2L_{yy}\right)\sigma_{0}<1.\] _Define_ \[(\forall k\in\mathbf{N})\quad\theta_{k+1}:=\frac{1}{\sqrt{1+\nu\sigma_{k}}},\quad\tau_{k+1}:=\frac{\tau_{k}}{\theta_{k+1}},\quad\text{and}\quad\sigma_{k+1}:=\theta_{k+1}\sigma_{k}.\] _Set_ \[(\forall k\in\mathbf{N})\quad\alpha_{k}:=\begin{cases}c_{\alpha}\tau_{0}&\text{if }k=0,\\ c_{\alpha}\tau_{k-1}&\text{if }k\geq 1,\end{cases}\] _and_ \[\delta:=\min\left\{1-\frac{L_{yx}}{c_{\alpha}},1-\left(c_{\alpha}L_{yx}\tau_{0}+2L_{yy}\right)\sigma_{0}\right\}.\] _We have the following assertions._ 1. _Let_ \((x,y)\) _be in_ \(\mathcal{H}_{1}\times\mathcal{H}_{2}\)_.
_Then for every \(k\in\mathbf{N}\),_ \[\sum_{i=0}^{k-1}t_{i}\left(f(x^{i+1},y)-f(x,y^{i+1})\right)\leq \frac{t_{0}}{2\tau_{0}}\left\|x-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y-y^{0}\right\|^{2}.\] 2. _Let_ \(k\in\mathbf{N}\smallsetminus\{0\}\)_. Then_ \[-\frac{t_{0}}{2\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}-\frac{t_{0}}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2} \leq\sum_{i=0}^{k-1}t_{i}\left(f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\right)\] \[\leq\frac{t_{0}}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2}.\] Proof.: (i): Let \(k\in\mathbf{N}\). In view of Fact 3.2(i), we have that \[\frac{1}{\sigma_{k}}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\geq\frac{\delta}{\sigma_{k}}>0. \tag{3.7}\] Applying Fact 3.2(iv) in the first inequality and employing Fact 3.2(vi) in the second inequality, we derive that \[\sum_{i=0}^{k-1}t_{i}\left(f(x^{i+1},y)-f(x,y^{i+1})\right) \leq\sum_{i=0}^{k-1}t_{i}\left(a_{i}(x,y)-b_{i+1}(x,y)-c_{i}\right) \tag{3.8a}\] \[\leq\sum_{i=0}^{k-1}\left(t_{i}a_{i}(x,y)-t_{i+1}a_{i+1}(x,y)\right) \tag{3.8b}\] \[=t_{0}a_{0}(x,y)-t_{k}a_{k}(x,y). \tag{3.8c}\] Furthermore, recalling the definitions of \(a_{0}(x,y)\) and \(a_{k}(x,y)\) and the fact \((x^{-1},y^{-1})=(x^{0},y^{0})\) and \(q_{0}:=\nabla_{y}\Phi(x^{0},y^{0})-\nabla_{y}\Phi(x^{-1},y^{-1})=0\) in the first equality, and applying Fact 3.2(iii) in the first inequality below, we derive that \[t_{0}a_{0}(x,y)-t_{k}a_{k}(x,y)\] \[= \frac{t_{0}}{2\tau_{0}}\left\|x-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y-y^{0}\right\|^{2}-\frac{t_{k}}{2\tau_{k}}\left\|x-x^{k}\right\|^{2}-\frac{t_{k}}{2\sigma_{k}}\left\|y-y^{k}\right\|^{2}\] \[-t_{k}\theta_{k}\left\langle q_{k},y^{k}-y\right\rangle-t_{k}\theta_{k}\frac{L_{yx}}{2\alpha_{k}}\left\|x^{k}-x^{k-1}\right\|^{2}-t_{k}\theta_{k}\frac{L_{yy}}{2}\left\|y^{k}-y^{k-1}\right\|^{2}\] \[\leq \frac{t_{0}}{2\tau_{0}}\left\|x-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y-y^{0}\right\|^{2}-\frac{t_{k}}{2\tau_{k}}\left\|x-x^{k}\right\|^{2}-\frac{t_{k}}{2\sigma_{k}}\left\|y-y^{k}\right\|^{2}\] \[+t_{k}\theta_{k}\frac{L_{yx}}{2}\left(\alpha_{k}\left\|y-y^{k}\right\|^{2}+\frac{1}{\alpha_{k}}\left\|x^{k}-x^{k-1}\right\|^{2}\right)+t_{k}\theta_{k}\frac{L_{yy}}{2}\left(\left\|y-y^{k}\right\|^{2}+\left\|y^{k}-y^{k-1}\right\|^{2}\right)\] \[-t_{k}\theta_{k}\frac{L_{yx}}{2\alpha_{k}}\left\|x^{k}-x^{k-1}\right\|^{2}-t_{k}\theta_{k}\frac{L_{yy}}{2}\left\|y^{k}-y^{k-1}\right\|^{2}\] \[= \frac{t_{0}}{2\tau_{0}}\left\|x-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y-y^{0}\right\|^{2}-\frac{t_{k}}{2\tau_{k}}\left\|x-x^{k}\right\|^{2}-\frac{t_{k}}{2}\left(\frac{1}{\sigma_{k}}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\right)\left\|y-y^{k}\right\|^{2}\] \[\leq \frac{t_{0}}{2\tau_{0}}\left\|x-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y-y^{0}\right\|^{2},\] where in the last inequality, we use (3.7).
Altogether, we have the desired result \[\sum_{i=0}^{k-1}t_{i}\left(f(x^{i+1},y)-f(x,y^{i+1})\right)\leq\frac{t_{0}}{2\tau_{0}}\left\|x-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y-y^{0}\right\|^{2}.\] (ii): Combine (3.2a) in Fact 3.1 and the result obtained from (i) above to observe that \[\sum_{i=0}^{k-1}t_{i}\left(f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\right)\leq \sum_{j=0}^{k-1}t_{j}\left(f(x^{j+1},\hat{y}_{k})-f(x^{*},y^{j+1})\right)\] \[\leq \frac{t_{0}}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2}.\] Similarly, applying (3.2b) in Fact 3.1 and the result obtained from (i) above, we have that \[\sum_{i=0}^{k-1}t_{i}\left(f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\right)\geq -\sum_{j=0}^{k-1}t_{j}\left(f(x^{j+1},y^{*})-f(\hat{x}_{k},y^{j+1})\right)\] \[\geq -\frac{t_{0}}{2\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}-\frac{t_{0}}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}.\] Altogether, we obtain the required result. Below, we show the convergence of the OGAProx without strong convexity or concavity assumptions. In fact, the assumption of Theorem 3.4 is the same as that of [1, Theorem 9], which proves the convergence of the sequence of iterations generated by the OGAProx and the convergence of the min-max gap evaluated at the associated ergodic sequences under the convex-concave setting. **Theorem 3.4**.: _Let \(\nu=0\), let \(c_{\alpha}>L_{yx}\geq 0\), and let \(\tau\) and \(\sigma\) be in \(\mathbf{R}_{++}\) such that_ \[\left(c_{\alpha}L_{yx}\tau+2L_{yy}\right)\sigma<1.\] _Let \(\left(\forall k\in\mathbf{N}\right)\)\(\tau_{k}\equiv\tau\), \(\sigma_{k}\equiv\sigma\), and \(\theta_{k}\equiv 1\). Then for every \(k\in\mathbf{N}\smallsetminus\left\{0\right\}\),_ \[-\frac{1}{k}\left(\frac{t_{0}}{2\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}\right) \leq f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\] \[\leq\frac{1}{k}\left(\frac{t_{0}}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right).\] _Consequently, \(\left(f(\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\) converges to \(f(x^{*},y^{*})\) with a convergence rate of order \(\mathcal{O}\left(\frac{1}{k}\right)\)._ Proof.: In view of [1, Proposition 8], the assumptions above on the parameters \(\left(\sigma_{k}\right)_{k\in\mathbf{N}}\), \(\left(\tau_{k}\right)_{k\in\mathbf{N}}\), and \(\left(\theta_{k}\right)_{k\in\mathbf{N}}\) are exactly the requirements on the parameters in Proposition 3.3 when \(\nu=0\). According to [1, Theorem 9], \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) weakly converges to \(\left(x^{*},y^{*}\right)\), which implies that \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) is bounded. Due to (3.5), the boundedness of \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) guarantees the boundedness of \(\left((\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\). Because \(\left(\forall k\in\mathbf{N}\right)\)\(\theta_{k}\equiv 1\), we know that \(\left(\forall k\in\mathbf{N}\right)\)\(t_{k}=\frac{\theta_{0}}{\theta_{0}\theta_{1}\cdots\theta_{k}}\equiv 1\).
Hence, we have that \[\left(\forall k\in\mathbf{N}\smallsetminus\left\{0\right\}\right)\quad\sum_{j=0}^{k-1}t_{j}=k.\] Combine this result with Proposition 3.3(ii) to deduce that \[-\frac{t_{0}}{2\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}-\frac{t_{0}}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2} \leq k\left(f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\right)\] \[\leq\frac{t_{0}}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2},\] which, combined with the boundedness of \(\left((\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\), ensures the required results. In the following result, we present the convergence of the OGAProx with the associated function being convex-strongly concave. Note that the assumption of Theorem 3.5 is exactly the same as that of [1, Theorem 12], which shows the convergence of the sequence of iterations generated by the OGAProx and the convergence of the min-max gap evaluated at the associated ergodic sequences under the convex-strongly concave setting. **Theorem 3.5**.: _Let \(\nu>0\), let \(c_{\alpha}>L_{yx}\geq 0\), let \(\theta_{0}=1\), and let \(\tau_{0}\) and \(\sigma_{0}\) be in \(\mathbf{R}_{++}\) such that_ \[\left(c_{\alpha}L_{yx}\tau_{0}+2L_{yy}\right)\sigma_{0}<1\quad\text{and}\quad 0<\sigma_{0}\leq\frac{9+3\sqrt{13}}{2\nu}.\] _Define_ \[\left(\forall k\in\mathbf{N}\right)\quad\theta_{k+1}:=\frac{1}{\sqrt{1+\nu\sigma_{k}}},\quad\tau_{k+1}:=\frac{\tau_{k}}{\theta_{k+1}},\quad\text{and}\quad\sigma_{k+1}:=\theta_{k+1}\sigma_{k}.\] _The following results hold._ 1. \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) _and_ \(\left((\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\) _are bounded._ 2. _For every_ \(k\in\mathbf{N}\smallsetminus\left\{0\right\}\)_,_ \[-\frac{6}{\nu\sigma_{0}k^{2}}\left(\frac{1}{\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{1}{\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}\right) \leq f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\] \[\leq\frac{6}{\nu\sigma_{0}k^{2}}\left(\frac{1}{\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right).\] _Consequently,_ \(\left(f(\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\) _converges to_ \(f(x^{*},y^{*})\) _with a convergence rate of order_ \(\mathcal{O}\left(\frac{1}{k^{2}}\right)\)_._ Proof.: (i): Let \(k\in\mathbf{N}\smallsetminus\left\{0\right\}\). Because \((x^{*},y^{*})\) is a saddle-point of \(f\), in view of Fact 3.2(v), we have that \(0\leq\sum_{i=0}^{k-1}t_{i}\left(f(\hat{x}_{k},y^{*})-f(x^{*},\hat{y}_{k})\right)\leq\frac{t_{0}}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}-\frac{t_{k}}{2\tau_{k}}\left\|x^{*}-x^{k}\right\|^{2}-\frac{t_{k}}{2}\left(\frac{1}{\sigma_{k}}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\right)\left\|y^{*}-y^{k}\right\|^{2}\), which, combined with the fact \(t_{0}=1\), implies that \[\frac{1}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}\geq\frac{t_{k}}{2\tau_{k}}\left\|x^{*}-x^{k}\right\|^{2}+\frac{t_{k}}{2}\left(\frac{1}{\sigma_{k}}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\right)\left\|y^{*}-y^{k}\right\|^{2}. \tag{3.9}\] In view of Fact 3.2(i)&(ii), we know that \(\frac{1}{\sigma_{k}}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\geq\frac{\delta}{\sigma_{k}}\) and \(\frac{t_{k}}{\tau_{k}}=\frac{1}{\tau_{0}}\).
Moreover, via [1, Proposition 11], we know that \(\frac{t_{k}}{\sigma_{k}}=\frac{t_{k}}{\tau_{k}}\frac{\tau_{k}}{\sigma_{k}}\geq\frac{1}{\tau_{0}}\frac{\nu^{2}\tau_{0}\sigma_{0}}{9}k^{2}\). Combine these results with (3.9) to derive that \[\frac{1}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}\] \[\geq \frac{t_{k}}{2\tau_{k}}\left\|x^{*}-x^{k}\right\|^{2}+\frac{t_{k}}{2}\left(\frac{1}{\sigma_{k}}-\theta_{k}\left(L_{yx}\alpha_{k}+L_{yy}\right)\right)\left\|y^{*}-y^{k}\right\|^{2}\] \[\geq \frac{1}{2\tau_{0}}\left\|x^{*}-x^{k}\right\|^{2}+\frac{t_{k}}{2}\frac{\delta}{\sigma_{k}}\left\|y^{*}-y^{k}\right\|^{2}\] \[\geq \frac{1}{2\tau_{0}}\left\|x^{*}-x^{k}\right\|^{2}+\frac{\delta}{2}\frac{\nu^{2}\sigma_{0}}{9}k^{2}\left\|y^{*}-y^{k}\right\|^{2},\] which, via (3.5), implies that \(\left((x^{k},y^{k})\right)_{k\in\mathbf{N}}\) and \(\left((\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\) are bounded. (ii): Applying Fact 3.2(ii) again and [1, Inequality (39)], we know that \[\sum_{i=0}^{k-1}t_{i}=\frac{1}{\tau_{0}}\sum_{i=0}^{k-1}\tau_{i}\geq\frac{1}{\tau_{0}}\frac{\nu\tau_{0}\sigma_{0}}{3}\sum_{i=0}^{k-1}i=\frac{\nu\sigma_{0}}{6}k(k-1).\] Clearly, if \(k\in\mathbf{N}\smallsetminus\{0,1\}\), then \(k\geq 2\), \(k-1\geq\frac{k}{2}\), and \(\frac{\nu\sigma_{0}}{6}k(k-1)\geq\frac{\nu\sigma_{0}}{12}k^{2}\). Hence, we have that \[(\forall k\in\mathbf{N}\smallsetminus\{0,1\})\quad\frac{1}{\sum_{i=0}^{k-1}t_{i}}\leq\frac{12}{\nu\sigma_{0}k^{2}}.\] Combine the results above with Proposition 3.3(ii) to obtain that \[f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*}) \leq\frac{1}{\sum_{i=0}^{k-1}t_{i}}\left(\frac{t_{0}}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right)\] \[\leq\frac{12}{\nu\sigma_{0}k^{2}}\left(\frac{t_{0}}{2\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right)\] and \[f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*}) \geq-\frac{1}{\sum_{i=0}^{k-1}t_{i}}\left(\frac{t_{0}}{2\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}\right)\] \[\geq-\frac{12}{\nu\sigma_{0}k^{2}}\left(\frac{t_{0}}{2\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{t_{0}}{2\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}\right).\] Recall that \(t_{0}=1\). Therefore, we obtain that \[-\frac{6}{\nu\sigma_{0}k^{2}}\left(\frac{1}{\tau_{0}}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{1}{\sigma_{0}}\left\|y^{*}-y^{0}\right\|^{2}\right) \leq f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\] \[\leq\frac{6}{\nu\sigma_{0}k^{2}}\left(\frac{1}{\tau_{0}}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{\sigma_{0}}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right),\] which, combined with the boundedness of \(\left((\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\), ensures the required results.

### Strongly Convex-Strongly Concave Setting

In this subsection, we assume additionally that \((\forall y\in\operatorname{dom}g)\)\(\Phi(\cdot,y):\mathcal{H}_{1}\to\mathbf{R}\cup\{+\infty\}\) is \(\mu\)-strongly convex with \(\mu>0\) and that the function \(g\) is convex with modulus \(\nu>0\). That means we assume that the function \(\left(\forall(x,y)\in\mathcal{H}_{1}\times\mathcal{H}_{2}\right)\)\(f(x,y)=\Phi(x,y)-g(y)\) is strongly convex-strongly concave in this subsection.
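Before stating the convergence results, the sketch below illustrates this setting numerically on the toy problem \(\Phi(x,y)=\frac{\mu}{2}x^{2}+xy\) and \(g(y)=\frac{\nu}{2}y^{2}\) (so that \(L_{yx}=1\), \(L_{yy}=0\), and the saddle-point is \((0,0)\)), run with the constant parameters of Theorem 3.7 below. Both proximity steps are closed-form for this choice; the test problem and the numbers are our own choices for illustration, not taken from [1].

```python
import numpy as np

mu = nu = Lyx = alpha = 1.0
theta_bar = max(Lyx / (alpha * mu + Lyx), alpha * Lyx / (nu + alpha * Lyx))
theta = 0.5 * (1.0 + theta_bar)            # any theta in (theta_bar, 1)
sigma = (1 - theta) / (theta * nu)         # sigma and tau as in Theorem 3.7
tau = (1 - theta) / (theta * mu)

f = lambda x, y: 0.5 * mu * x**2 + x * y - 0.5 * nu * y**2
x_prev = x = y = 1.0                       # (x^{-1}, y^{-1}) = (x^0, y^0) = (1, 1)
Sx = Sy = St = 0.0                         # running sums for the ergodic averages
t = 1.0                                    # t_k = 1 / theta^k
for k in range(60):
    z = y + sigma * ((1 + theta) * x - theta * x_prev)
    y_new = z / (1 + sigma * nu)           # Prox of sigma*g with g(y) = nu/2 y^2
    x_new = (x - tau * y_new) / (1 + tau * mu)   # Prox of tau*f(., y^{k+1})
    x_prev, x, y = x, x_new, y_new
    Sx += t * x_new; Sy += t * y_new; St += t
    t /= theta
    if k % 10 == 9:
        print(k + 1, abs(f(Sx / St, Sy / St)))   # decays like O(theta^k)
```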
**Lemma 3.6**.: _Let \((\forall k\in\mathbf{N})\)\(\sigma_{k}\equiv\sigma\in\mathbf{R}_{++}\), \(\tau_{k}\equiv\tau\in\mathbf{R}_{++}\), and \(\theta_{k}\equiv\theta\in(0,1)\) such that_ \[1+\mu\tau=\frac{1}{\theta}\quad\text{and}\quad 1+\nu\sigma=\frac{1}{\theta}.\] _Suppose that there exists \(\alpha\in\mathbf{R}_{++}\) such that_ \[\frac{L_{yx}}{\alpha}\leq\frac{1}{\tau},\quad L_{yy}\leq\frac{1-\theta\sigma(\alpha L_{yx}+L_{yy})}{\sigma},\quad\text{and}\quad 1-\theta\sigma(\alpha L_{yx}+L_{yy})>0. \tag{3.10}\] _Set \(\tilde{\sigma}:=\frac{\sigma}{1-\theta\sigma(\alpha L_{yx}+L_{yy})}\). The following statements hold._ 1. \((\forall k\in\mathbf{N})\)__\(t_{k}=\frac{1}{\theta^{k}}\)_._ 2. _Let \((x,y)\) be in \(\mathcal{H}_{1}\times\mathcal{H}_{2}\) and let \(k\in\mathbf{N}\smallsetminus\{0\}\). Then we have that_ \[\sum_{i=0}^{k-1}t_{i}\left(f(x^{i+1},y)-f(x,y^{i+1})\right)\] \[\leq \frac{1}{2\tau}\left\|x-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|y-y^{0}\right\|^{2}-\frac{1}{\theta^{k}}\frac{1}{2\tau}\left\|x-x^{k}\right\|^{2}-\frac{1}{\theta^{k}}\frac{1-\theta\sigma(\alpha L_{yx}+L_{yy})}{2\sigma}\left\|y-y^{k}\right\|^{2}\] \[-\frac{1}{2\theta^{k-1}}\left(\frac{1}{\tau}-\frac{L_{yx}}{\alpha}\right)\left\|x^{k}-x^{k-1}\right\|^{2}-\frac{1}{2\theta^{k-1}}\left(\frac{1}{\tilde{\sigma}}-L_{yy}\right)\left\|y^{k}-y^{k-1}\right\|^{2}.\] 3. _Let \((x,y)\) be in \(\mathcal{H}_{1}\times\mathcal{H}_{2}\) and let \(k\in\mathbf{N}\smallsetminus\{0\}\). Then we have that_ \[\sum_{i=0}^{k-1}t_{i}\left(f(x^{i+1},y)-f(x,y^{i+1})\right)\] \[\leq \sum_{i=0}^{k-1}t_{i}\left(f(x^{i+1},y)-f(x,y^{i+1})\right)+\frac{1}{\theta^{k}}\frac{1}{2\tau}\left\|x-x^{k}\right\|^{2}+\frac{1}{\theta^{k}}\frac{1}{2\tilde{\sigma}}\left\|y-y^{k}\right\|^{2}\] \[\leq \frac{1}{2\tau}\left\|x-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|y-y^{0}\right\|^{2}.\] Proof.: (i): Because \((\forall k\in\mathbf{N})\)\(\theta_{k}\equiv\theta\in(0,1)\), we have that \[(\forall k\in\mathbf{N})\quad t_{k}=\frac{\theta_{0}}{\theta_{0}\theta_{1}\cdots\theta_{k}}=\frac{\theta}{\theta^{k+1}}=\frac{1}{\theta^{k}}.\] (ii): This is a direct result of (i) and [1, Inequality (46)]. (iii): Based on (3.10), we know that \[\frac{1}{\tau}-\frac{L_{yx}}{\alpha}\geq 0\quad\text{and}\quad\frac{1}{\tilde{\sigma}}-L_{yy}=\frac{1-\theta\sigma(\alpha L_{yx}+L_{yy})}{\sigma}-L_{yy}\geq 0.\] Combine this result with (ii) to derive that \[\sum_{i=0}^{k-1}t_{i}\left(f(x^{i+1},y)-f(x,y^{i+1})\right)\] \[\leq \frac{1}{2\tau}\left\|x-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|y-y^{0}\right\|^{2}-\frac{1}{\theta^{k}}\frac{1}{2\tau}\left\|x-x^{k}\right\|^{2}-\frac{1}{\theta^{k}}\frac{1}{2\tilde{\sigma}}\left\|y-y^{k}\right\|^{2},\] which ensures the required result clearly. In Theorem 3.7 below, we show the convergence of the OGAProx with the associated function being strongly convex-strongly concave. Notice that the assumption of Theorem 3.7 below is the same as that of [1, Theorem 14], which shows the convergence of the sequence of iterations generated by the OGAProx and the convergence of the min-max gap evaluated at the associated ergodic sequences under the strongly convex-strongly concave setting. **Theorem 3.7**.: _Let \(\alpha\in\mathbf{R}_{++}\). Set \(\tilde{\theta}:=\max\{\frac{L_{yx}}{\alpha\mu+L_{yx}},\frac{\alpha L_{yx}+2L_{yy}}{\nu+\alpha L_{yx}+2L_{yy}}\}\). Let \(\theta\in(\tilde{\theta},1)\subseteq[0,1)\)._
_Let_ \[(\forall k\in\mathbf{N})\quad\sigma_{k}\equiv\sigma=\frac{1}{\nu}\frac{1-\theta}{\theta},\quad\tau_{k}\equiv\tau=\frac{1}{\mu}\frac{1-\theta}{\theta},\quad\text{and}\quad\theta_{k}\equiv\theta.\] _Then for every \(k\in\mathbf{N}\smallsetminus\{0\}\),_ \[-\theta^{k-1}\left(\frac{1}{2\tau}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|y^{*}-y^{0}\right\|^{2}\right) \leq f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*})\] \[\leq\theta^{k-1}\left(\frac{1}{2\tau}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right).\] _Consequently, the sequence \(\big{(}f(\hat{x}^{k+1},\hat{y}^{k+1})\big{)}_{k\in\mathbf{N}}\) linearly converges to \(f(x^{*},y^{*})\) with a convergence rate of order \(\mathcal{O}\left(\theta^{k}\right)\)._ Proof.: Due to [1, Proposition 13], the assumptions above on the parameters \((\sigma_{k})_{k\in\mathbf{N}}\), \((\tau_{k})_{k\in\mathbf{N}}\), and \(\left(\theta_{k}\right)_{k\in\mathbf{N}}\) satisfy the requirements of Lemma 3.6. In view of [1, Theorem 14], we know that \(\big{(}(x^{k},y^{k})\big{)}_{k\in\mathbf{N}}\) linearly converges to \((x^{*},y^{*})\), which, via (3.5), guarantees the boundedness of \(\big{(}(x^{k},y^{k})\big{)}_{k\in\mathbf{N}}\) and \(\big{(}(\hat{x}^{k+1},\hat{y}^{k+1})\big{)}_{k\in\mathbf{N}}\). Let \(k\) be in \(\mathbf{N}\smallsetminus\{0\}\). Combine Fact 3.1 and Lemma 3.6(iii) to derive that \[f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*}) \leq\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}\left(f(x^{j+1},\hat{y}_{k})-f(x^{*},y^{j+1})\right) \tag{3.11a}\] \[\leq\frac{1}{\sum_{i=0}^{k-1}t_{i}}\left(\frac{1}{2\tau}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right) \tag{3.11b}\] and \[f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*}) \geq-\frac{1}{\sum_{i=0}^{k-1}t_{i}}\sum_{j=0}^{k-1}t_{j}\left(f(x^{j+1},y^{*})-f(\hat{x}_{k},y^{j+1})\right) \tag{3.12a}\] \[\geq-\frac{1}{\sum_{i=0}^{k-1}t_{i}}\left(\frac{1}{2\tau}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|y^{*}-y^{0}\right\|^{2}\right). \tag{3.12b}\] Because \(\theta\in(0,1)\), we know that \(0<\theta^{k}\leq\theta<1\) and \(\frac{1-\theta^{k}}{1-\theta}\geq 1\). This result together with Lemma 3.6(i) yields that \[\sum_{i=0}^{k-1}t_{i}=\sum_{i=0}^{k-1}\frac{1}{\theta^{i}}=\frac{1}{\theta^{k-1}}\sum_{i=0}^{k-1}\theta^{i}=\frac{1}{\theta^{k-1}}\frac{1-\theta^{k}}{1-\theta}\geq\frac{1}{\theta^{k-1}}. \tag{3.13}\] Combine (3.11), (3.12), and (3.13) to obtain that \[f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*}) \leq\frac{1}{\sum_{i=0}^{k-1}t_{i}}\left(\frac{1}{2\tau}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right)\] \[\leq\theta^{k-1}\left(\frac{1}{2\tau}\left\|x^{*}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|\hat{y}_{k}-y^{0}\right\|^{2}\right)\] and \[f(\hat{x}_{k},\hat{y}_{k})-f(x^{*},y^{*}) \geq-\frac{1}{\sum_{i=0}^{k-1}t_{i}}\left(\frac{1}{2\tau}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|y^{*}-y^{0}\right\|^{2}\right)\] \[\geq-\theta^{k-1}\left(\frac{1}{2\tau}\left\|\hat{x}_{k}-x^{0}\right\|^{2}+\frac{1}{2\sigma}\left\|y^{*}-y^{0}\right\|^{2}\right),\] which, combined with the boundedness of \(\left((\hat{x}^{k+1},\hat{y}^{k+1})\right)_{k\in\mathbf{N}}\), ensures the required results.

## Acknowledgments

Hui Ouyang thanks Professor Stephen Boyd for his insightful and expert comments on the topic of saddle-point problems and for all his unselfish support.
Hui Ouyang acknowledges the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number PDF - 567644 - 2022].
2309.15920
Dynamics of Long-lived Axion Domain Walls and Its Cosmological Implications
We perform an updated analysis on a long-lived axion domain wall (DW) network. By simulating the axion field on a 3D lattice and fitting an analytical model for the DW evolution, we identify the leading energy loss mechanisms of the DWs and compute the spectrum of axions emitted from the network. The contribution from the DWs to axion dark matter (DM) density is derived, with viable parameter space given. The application to both QCD axions and general axion-like particles (ALPs) are considered. Due to the new approaches taken, while our results bear consistency with earlier literature, notable discrepancies are also revealed, such as the prediction for DM abundance, which may have a profound impact on axion phenomenology at large.
Chia-Feng Chang, Yanou Cui
2023-09-27T18:00:37Z
http://arxiv.org/abs/2309.15920v1
# Dynamics of Long-lived Axion Domain Walls and Its Cosmological Implications ###### Abstract We perform an updated analysis on a long-lived axion domain wall (DW) network. By simulating the axion field on a 3D lattice and fitting an analytical model for the DW evolution, we identify the leading energy loss mechanisms of the DWs and compute the spectrum of axions emitted from the network. The contribution from the DWs to axion dark matter (DM) density is derived, with viable parameter space given. The application to both QCD axions and general axion-like particles (ALPs) are considered. Due to the new approaches taken, while our results bear consistency with earlier literature, notable discrepancies are also revealed, such as the prediction for DM abundance, which may have a profound impact on axion phenomenology at large. ## I Introduction Axions are ultra-light particles that are originally proposed as a compelling solution to the Strong CP problem in quantum chromodynamics (QCD) [1; 2; 3]. Recent years have seen a significantly increased interest in QCD axions and more general axion-like particles (ALPs), as dark matter (DM) candidates alternative to WIMPs [4; 5; 6; 7]. While most existing studies on axion phenomenology and detection focused on the axion particle per se, the impact of the accompanying axion topological defects, i.e. axion strings and domain walls (DWs), can be substantial, yet still not well understood. Such axion topological defects are indispensable companions of axion particles for post-inflationary PQ symmetry breaking, with potentially significant contribution to axion relic abundance [8; 9; 10; 11; 12; 13], and may provide complementary search avenues for axion models [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. A growing effort has been made in the past few years along this direction. However, there are still debates to be resolved and clarifications to be made, in part due to the technical challenges with simulating axion topological defects [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]. Axion cosmic strings form as the PQ breaking phase transition (PT) occurs at a high energy scale \(f_{a}\), and prevail till the pseudo-goldstone boson (axion) later acquires a nonzero mass \(m_{a}\) and DWs form. The structure of the DWs depends on the model specifics of the axion potential and is characterized by the axion mass and the DW number \(N_{\rm DW}\). The case with \(N_{\rm DW}=1\) is most studied in recent years, where the DWs are short-lived and strings dominate the dynamics of the axion topological defects [32; 34; 36]. On the other hand, more generally for the \(N_{\rm DW}>1\) models e.g. Dine-Fischler-Srednicki-Zhitnitsky model [40; 41], the DWs are stable and problematic as they would over-close the Universe. Nevertheless, the \(N_{\rm DW}>1\) cases can be innocuous with the presence of a small symmetry-breaking bias term in the axion potential, which yields the DWs that are long-lived but collapse before the BBN [42; 43]. Upon collapsing, long-lived DWs can leave observable imprints in the form of axion dark matter relic density, gravitational waves (GWs), as well as the impact on cosmic structure formation [44; 12]. A clear understanding of the evolution and dynamics of the DW network is crucial for predicting and probing such potentially rich phenomenology. 
However, the literature on the dynamics of metastable DWs (axion-associated or more general) is still relatively scarce [12; 44; 45; 46; 47; 48; 49], and further investigation is required to advance and clarify our understanding. In this work, we conduct an updated analysis for the long-lived axion DWs and predict the axion relic abundance produced from the axion DWs (with \(N_{\rm DW}=2\) as a benchmark). We perform a 3D field theory lattice simulation for the axion field with grid size \(N^{3}=1536^{3}\) in a radiation-dominated background, including a bias term in the axion potential, and solve the axion field equation of motion exactly. This differs from earlier simulation work, with the promise of potential improvement: e.g. the analysis of metastable DWs in [12] and [38] is based on a 2D simulation, while the 3D simulation in [47; 48] employs Higgs DWs with the Press-Ryden-Spergel (PRS) [50] approximation. In order to elucidate the physics of the dynamics of DW evolution, we investigated the DW radiation mechanisms by capturing and zooming in on snapshots of the animations from our simulation and by analyzing the axion spectrum. In addition to obtaining results based on numerical simulation, through analytical fitting, we also present the velocity-dependent one-scale (VOS) model applicable to the metastable DW evolution. This is a notable extension of the framework of the VOS model, which previously has been widely used to describe the evolution of other types of topological defects such as cosmic strings [51; 52], with only recently a few attempts on stable DWs [47; 48; 53; 54; 55; 56]. By combining numerical and analytical approaches, our analysis leads to an updated prediction for the spectrum and relic abundance of axions radiated from DWs, as well as new insights into the evolution of DW substructures. This study may shed new light on the cosmological implications of axion topological defects and their role in axion physics at large. In the following, we will first introduce the axion model and simulation setup that we adopted. Then we will present the essential results on the dynamics of axion DWs derived from the simulation, and demonstrate how these can be used to calibrate the analytical VOS model. Cosmological implications related to axion DM will be discussed before we conclude.

## II Axion model

We first introduce the benchmark axion model that we consider and the essentials in our simulation. As a pseudo-Nambu-Goldstone boson, the axion is associated with the angular mode of a complex scalar field whose VEV spontaneously breaks a global U(1) symmetry. The U(1) symmetry breaking occurs at a relatively high scale \(T\sim f_{a}\) when the radial mode acquires a mass \(m_{R}\sim f_{a}\). The original shift symmetry possessed by the axion is broken at a much later time \(T\sim\Lambda\simeq\sqrt{m_{a}f_{a}}\) (e.g. \(\Lambda_{\rm QCD}\) for the QCD axion), when the axion acquires a mass \(m_{a}\) and DWs form. At an even later time when \(H\ll f_{a}\), the effective Lagrangian for the axion field \(a=a({\bf x},t)\) with the radial mode integrated out reads \[\mathcal{L}=|\partial_{\mu}a|^{2}-V(a). \tag{1}\] We consider a biased potential \[V(a)=\frac{m_{a}^{2}f_{a}^{2}}{N_{\rm DW}^{2}}\Bigg{[}1-\cos\left(N_{\rm DW}\frac{a}{f_{a}}\right)+\epsilon\left(1+\cos\frac{a}{f_{a}}\right)\Bigg{]}, \tag{2}\] where \(\epsilon\ll 1\) is the bias parameter that causes the DWs to collapse.
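As a quick sanity check of the vacuum structure of Eq. (2), the snippet below evaluates the potential at its two minima for the \(N_{\rm DW}=2\) case adopted below, in units \(m_{a}=f_{a}=1\) and with an illustrative value of \(\epsilon\).

```python
import numpy as np

N_DW, eps = 2, 0.01     # eps is an illustrative bias, not a benchmark value
V = lambda a: (1.0 / N_DW**2) * (1.0 - np.cos(N_DW * a)
                                 + eps * (1.0 + np.cos(a)))
print(V(np.pi))         # 0: the bias term vanishes at a = pi (true vacuum)
print(V(0.0))           # eps/2 > 0: the bias lifts a = 0 into a false vacuum
```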
We consider \(N_{\rm DW}=2\), which implies one true vacuum and one false vacuum in the model [57]. This is a representative choice that involves a simple DW structure, which eases the simulation analysis and also allows us to extrapolate our results to the string-wall scenario, discussed in more detail in Appendix A. We estimate the DW surface tension based on the axion potential in Eq.(2) as:

\[\sigma_{\rm DW}\simeq\eta_{\rm DW}\,\frac{m_{a}f_{a}^{2}}{N_{\rm DW}^{2}}, \tag{3}\]

where \(\eta_{\rm DW}=8\) for the potential in Eq.(2), the value we use in this study; for the QCD axion, \(\eta_{\rm DW}=8.97(5)\) once the pion contribution is included [58]. The DWs become dynamical at cosmic time \(t\sim 1/m_{a}\), when the horizon becomes comparable to the DW thickness \(\delta\sim 1/m_{a}\).

## III Simulation

### Setup

The equation of motion (EoM) of the axion field in a flat homogeneous and isotropic Friedmann-Lemaitre-Robertson-Walker (FLRW) universe is

\[\frac{\partial^{2}a}{\partial\tau^{2}}+2\left(\frac{d{\rm ln}R}{d{\rm ln}\tau}\right)\frac{1}{\tau}\frac{\partial a}{\partial\tau}-\frac{\partial^{2}a}{\partial x_{i}^{2}}=-R^{2}\frac{\partial V}{\partial a}, \tag{4}\]

where \(R(t)\) is the scale factor, \(x_{i}\) are comoving spatial coordinates, \(\tau\) is comoving (conformal) time, and \(\partial^{2}/\partial x_{i}^{2}\) is the comoving-space Laplacian. We start our simulations at a time slightly earlier than the DW formation time. For the initial condition (IC) of the field, a random and uniform distribution of the axion field is consistent with the outcome of stochastic inflation under the assumption that the axion potential scale \(\sqrt{m_{a}f_{a}}\) is far below the inflation scale \(H_{I}\) (see [59] and a review of the stochastic method [60]). We thus adopt a simpler prescription: we randomly assign the field value \(a=0\) or \(\pi\) (the two vacua of the potential, in units \(f_a=1\)) to realize an unbiased IC in which half of the points on the lattice are in the true vacuum, and we assume zero initial field velocity \(\dot{a}(t_{i})\to 0\). As we will see, once the DW network enters the attractor solution, the so-called scaling regime, the DW network evolution is no longer sensitive to the IC. This phenomenon has been observed in earlier simulations [17; 28; 44; 54; 61]; see [48] for a discussion of the effect of a biased IC on the PRS DW evolution (and earlier references [49; 62; 63]).

Other simulation setups are as follows. We normalize all parameters according to \(f_{a}\to 1\). The lattice size is \(N^{3}=1536^{3}\); the simulation period starts at \(1/H(t_{i})=R(t_{i})\Delta x_{i}\) and ends at \(1/H(t_{f})=(N/2)\Delta x_{f}\), where \(\Delta x_{i}=1\) is the initial lattice spacing, \(R(t_{i})=1\) is the initial scale factor, \(\Delta x_{f}=R(t_{f})\Delta x_{i}\) is the comoving spacing at the end of the simulation, and a radiation background is assumed with \(R(t)\propto t^{1/2}\). We fix the time interval \(\Delta\tau=0.1\), where \(\tau\) is the comoving time, and test convergence by re-running with smaller time intervals. We further fix the physical DW thickness as

\[\delta\sim\frac{1}{m_{a}}=\frac{1}{(N/2)R(t_{i})\Delta x_{i}}. \tag{5}\]

These choices imply that the simulation starts when the horizon size equals the lattice spacing \(\Delta x_{i}\), and ends when the horizon expands to half of the full lattice size.
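As an illustration of how Eq.(4) can be stepped on a lattice, here is a schematic finite-difference update (our own toy sketch, not the production code of this work) in Python with numpy, assuming a radiation background so that \(R\propto\tau\) and \(d\ln R/d\ln\tau=1\), and a reduced grid for readability:

```python
import numpy as np

N, dx, dtau, eps = 64, 1.0, 0.1, 1e-3    # toy grid (production run: N = 1536)

def dV_da(a):                            # Eq.(2) gradient for N_DW = 2, m_a = f_a = 1
    return 0.25 * (2.0 * np.sin(2.0 * a) - eps * np.sin(a))

def laplacian(f):
    """Periodic central-difference Laplacian in comoving coordinates."""
    out = -6.0 * f
    for ax in range(3):
        out += np.roll(f, +1, axis=ax) + np.roll(f, -1, axis=ax)
    return out / dx**2

rng = np.random.default_rng(0)
a  = np.pi * rng.integers(0, 2, (N, N, N)).astype(float)  # random vacua, a = 0 or pi
ap = np.zeros_like(a)                                     # da/dtau = 0 initially

tau, R = 1.0, 1.0                        # radiation era: R(tau) proportional to tau
for _ in range(200):
    # Eq.(4) with d ln R / d ln tau = 1 in a radiation background:
    app = laplacian(a) - (2.0 / tau) * ap - R**2 * dV_da(a)
    ap += dtau * app                     # simple explicit update, for illustration only
    a  += dtau * ap
    tau += dtau
    R = tau                              # R(t_i) = 1 at tau = 1
```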
On the other hand, the DW thickness \(\delta\) occupies \(N/2\) lattice grids at \(t_{i}\); then, as the comoving grid expands, the simulation ends when \(\delta\) occupies two grids. We chose these simulation setups for the following reasons: (1) \(\delta\) cannot be smaller than the size of two grids if the DW is to be sufficiently resolved. Lower resolution leads to incorrect, unphysical simulation results, such as a DW frozen in the lattice, because the gradient \(\nabla^{2}a\) in the equation of motion Eq.(4) would be incorrectly calculated in the simulation. In addition, a lower resolution would induce a spurious tail in the axion kinetic spectrum around axion momentum \(k\sim 2\pi/\Delta x_{f}\). (2) We simulated with two types of boundary conditions (b.c.'s), periodic and symmetric, and investigated the robustness of the results against the choice of b.c. As the simulation results are expected to be inevitably subject to the b.c. (albeit not significantly, as we found), in order to mitigate this effect we conservatively collect simulation data from the central \(1/8\) of the simulation box and discard the rest. This data collection range equals the Hubble box size at the end of the simulation.

In order to present a free axion spectrum by filtering out the DW contribution, we employ a mask function on the axion field as in previous studies [12; 37] (originally applied in CMB analysis [64]). The method is to mask \(\dot{a}(x)\) by a window function

\[\dot{a}(x)\rightarrow\theta(x-d)\dot{a}(x), \tag{6}\]

where \(x\) is the coordinate with origin at the DW center where \(V(a(x=0))=V_{\rm max}\), \(d\) is a mask-function parameter, and \(\theta(x)\) is the Heaviside step function. We fix \(d=\delta/2\) in our simulation to exclude the DW contribution to the power spectrum. However, due to the influence exerted on the DWs by the background axion field, \(\delta\) is not perfectly constant. Thus we cannot fully erase the DW contribution to the free axion spectrum, yet our approach should provide a good estimate. A more effective algorithm to erase such a contribution may be developed in dedicated future work. The kinetic power spectrum is found to be insensitive to the choice of \(d\) as long as it is not too far from \(\delta\), i.e. \(\delta/4\lesssim d\lesssim 2\delta\). We found that applying the mask function to the axion field itself, \(a(x)\rightarrow\theta(x-d)a(x)\), produces unreliable results for the gradient and potential energies, i.e. a blue tail of the spectrum (\(k\sim m_{a}\)) that is sensitive to the variation of \(d\). This may be caused by the oscillatory behavior of the axion field around the vacuum, such as the contribution from sub-horizon compact DWs or oscillons (see the red points at the end of the simulation, i.e. the far-right panel in Fig. 1), which cannot be fully removed by the mask function. Thus, to estimate the total energy of the radiated free axions, we only apply the mask function to the axion kinetic energy and assume that the free axions are all in harmonic mode, i.e. their kinetic energy is half of their total energy.
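A minimal sketch of the masking step of Eq.(6) is given below (our own illustrative implementation, with an ad hoc wall detector; `scipy.ndimage.distance_transform_edt` returns each voxel's distance to the nearest wall voxel):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mask_kinetic(adot, a, d):
    """Implement Eq.(6): zero out adot within (lattice) distance d of DW cores."""
    # Wall voxels: field near the potential barrier between the two vacua; for
    # Eq.(2) with N_DW = 2 the barrier sits near a = pi/2 (threshold illustrative).
    wall = np.abs(np.cos(a)) < 0.3
    dist = distance_transform_edt(~wall)   # distance of each voxel to nearest wall voxel
    return np.where(dist >= d, adot, 0.0)

# Usage on the fields evolved above, with d = delta/2 in lattice units:
# adot_free = mask_kinetic(ap, a, d=4.0)
# rho_free  = np.mean(adot_free**2)        # free-axion energy ~ 2 x (kinetic part)
```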
Our DW simulation was run with various simulation conditions and ALP model benchmarks as follows. We conducted 5 simulations for each benchmark with \(\epsilon\gtrsim 10^{-3}\) (to ensure that all the DWs decayed away by the end of the simulation) while keeping the aforementioned parameters constant as described in the last three paragraphs. Subsequently, based on the simulation data, we construct a model for the DW dynamics and then extrapolate it to lower \(\epsilon\) values and a wider range of \(m_{a}\), by analyzing the axion spectrum and by monitoring the evolution of the DWs and the free axion background field, as informed by the simulation snapshots and the spectrum analysis in Sec. IV.

Besides the main simulation runs, we also conducted test runs under various conditions and ALP model benchmarks to ensure that our analysis would not be affected by the specific simulation parameters that we set. In particular, the test runs were set up as follows. We assessed the impact of varying simulation parameters (with 5 test runs for each benchmark as well) such as the axion mass \(m_{a}\), spanning a range from 0.5 to 2, the initial scale factor \(R(t_{i})\) with values of 0.5, 1, and 2, and the initial lattice spacing \(\Delta x_{i}\) with values of 0.1, 1, and 10. Additionally, we considered different lattice sizes \(N\) (512, 1024, and 1536) and varied the mask-function parameter \(d\) as previously mentioned. As expected, the free axion spectrum (as shown in Sec. IV) remained unaffected, and consequently so did our conclusions.

### Application to Other Models

Although we simulated a network for a simple DW model, our results can be applied to a variety of more complex models if they satisfy the following conditions: (1) The DW network has enough time to enter the scaling regime before its decay. For instance, in our model a large \(\epsilon\gtrsim 5\times 10^{-3}\) (see Sec. IV.2) would cause the false vacuum to collapse too early for the DW area to converge to a constant, i.e. to enter the scaling regime (see Sec. IV.2 and Eq.(10) therein). (2) The essential properties of the DW should be (approximately) the same as in our simulation. For instance, the DW thickness \(\delta\) should remain constant during the scaling regime and before the DW starts to decay; meanwhile, the DW number should be \(N_{\rm DW}=2\), as considered in this study. The first condition eliminates the dependence on the initial DW distribution when applied to different models. The second ensures that the DW dynamics are congruent with our findings.

As an example, in the following we explain how our simulation can apply to certain QCD axion models. Firstly, a simple condition for a QCD model to be mimicked by our DW-only simulation is for cosmic strings to be absent from the model, which is satisfied in the scenario of pre-inflationary PQ symmetry breaking or if the vacuum manifold after the PQ phase transition is simply connected (see the later discussion in this section and Appendix A for a more complex case: a possible application to QCD models with cosmic strings). Secondly, the QCD axion model needs to have the same \(N_{\rm DW}=2\) and a nonzero \(\epsilon\) term in order to avoid the DW over-closure problem. Furthermore, the DW thickness in the QCD model needs to be effectively constant during the simulation time window. Consider that, unlike in the model we considered in Sec. II,
where \(m_{a}\), and thus the DW thickness, is constant, in QCD the DW thickness generally takes a time-dependent form:

\[\frac{1}{\delta_{\rm QCD}}\simeq m_{a}(T)\simeq\begin{cases}m_{a}\left(\frac{\Lambda_{\rm QCD}}{T}\right)^{4}&\text{for}\;\;T>\Lambda_{\rm QCD},\\ m_{a}&\text{for}\;\;T\leq\Lambda_{\rm QCD},\end{cases} \tag{7}\]

where the QCD scale \(\Lambda_{\rm QCD}=400\,\text{MeV}\), \(T\) is the cosmic temperature, and the expression is derived from the dilute instanton gas approximation [65; 66; 67; 68] (see also the results from lattice simulations [58; 69]). The QCD axion DW thickness \(\delta_{\rm QCD}\) approaches a constant at the transition time \(t_{a}\), when \(T\simeq\Lambda_{\rm QCD}\); afterwards, the QCD axion DW evolves as in our simulation. We did not simulate a time-dependent thickness \(\delta_{\rm QCD}\) due to the computational limitations imposed by the lattice. The DW thickness, which rapidly shrinks as \(\delta_{\rm QCD}\propto R(t)^{-4}\) according to Eq.(7), imposes a significant demand on the evolution time range of the simulation, because the thickness should remain at least larger than the lattice spacing for accurate resolution. Due to this limitation, we choose to focus on simulating the cases where \(\delta\) can be treated as constant. In order for our model to approximate \(\delta_{\rm QCD}\) during the simulation time window, we should consider a small \(\epsilon\lesssim 10^{-4}\) such that the DWs live long enough to enter the scaling regime after \(t_{a}\). We discuss the concrete application of this condition to the parameter space in Fig. 15 in Appendix A.

In addition to the issue of constant vs. time-dependent DW thickness discussed above, another key potential difference between our simple model and the QCD case is that some QCD axion models may also involve cosmic strings in the axion topological defect structure, such as in the scenario of post-inflationary \(U(1)\) symmetry breaking, where QCD axion strings persist until DW formation. In such a case with \(N_{\rm DW}=2\), two DWs attach to a single cosmic string, forming a string-wall network that differs significantly from the DW-only structure considered in our study. Nevertheless, we find that the influence of cosmic strings is negligible when the DW tension dominates the network [70], specifically when the condition

\[\sigma_{\rm DW}t/\mu>1 \tag{8}\]

is satisfied, where \(\mu\simeq 2\pi^{2}f_{a}^{2}\text{ln}(tf_{a})\) is the cosmic string tension. Under this condition, the string-wall structure is well approximated by our simulation. However, for higher values of \(N_{\rm DW}>2\), where multiple DWs attach to a single string, a more complex scenario arises; we leave the investigation of such complex scenarios with \(N_{\rm DW}>2\) for future work. We present the viable parameter space satisfying the condition of Eq.(8), and discuss the application to the QCD axion model with cosmic strings, in Appendix A.
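To see when the condition of Eq.(8) kicks in, one can solve \(\sigma_{\rm DW}t=\mu(t)\) numerically; a short sketch (units \(f_a=1\); simple bisection; function names are our own):

```python
import numpy as np

def t_wall_dominate(m_a, f_a=1.0, N_DW=2, eta=8.0):
    """Smallest t with sigma_DW * t / mu(t) > 1, cf. Eq.(8);
    mu = 2 pi^2 f_a^2 ln(t f_a) and sigma_DW from Eq.(3)."""
    sigma = eta * m_a * f_a**2 / N_DW**2
    g = lambda t: sigma * t - 2.0 * np.pi**2 * f_a**2 * np.log(t * f_a)
    lo, hi = 1.001 / f_a, 1e30 / f_a     # crude bracket: g(lo) < 0 < g(hi)
    for _ in range(200):                 # bisection in log t
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return hi

print(t_wall_dominate(m_a=1e-10))  # in units 1/f_a; walls dominate above this time
```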
Furthermore, our decision to focus on the simplified case without string contributions is also influenced by technical considerations. Due to limitations in our simulation resources, the lattice size constrains how far the simulation period can be extended, which is insufficient to observe DW decay if cosmic strings are included. The scale hierarchy between the width of the string (\(\sim 1/f_{a}\)) and the Hubble scale at the time of DW decay prevents us from adequately resolving such a network in our simulation with the current lattice size. Finally, note that our simulation results apply not only to the aforementioned QCD axion models, but also to other axion-like particle models that satisfy the two conditions identified above.

## IV Domain Wall Dynamics

### Features Observed in the Simulation

In this subsection, we discuss the features identified from the snapshots of our simulations; their corresponding energy contributions and dynamical behaviors are discussed further in Sec. V and Sec. VI. We find six distinguishable objects in the simulation, connected through three different dynamic motions governing their creation, annihilation, and movement. As illustrated in Fig. 3, the _objects_ observed in the simulation can be categorized as follows:

(1) Super-horizon sized DWs: represented as the red wall-like structures in Fig. 1 and Fig. 3, with different shapes (either planar or compact). These super-horizon sized DWs are formed by the initial field distribution of the simulation.

(2) Horizon-sized compact DWs: also shown as red wall-like structures in Fig. 1 and Fig. 3, but with a compact geometry. These horizon-sized compact DWs are formed by the contraction or self-chopping (to be discussed below) of super-horizon sized DWs. Such DWs release energy through the flattening motion, self-chop into smaller compact DWs, and then collapse (to be defined later).

(3) Sub-horizon compact DWs or _oscillons_: DWs with typical sizes of \(\sim 1/m_{a}\) (in our simulation, larger sub-horizon sized compact DWs are found to rapidly contract down to the size of \(\sim 1/m_{a}\)), much smaller than the horizon scale. These structures are mainly formed through self-chopping due to the fluctuations on the DW surface, and through the collapse of horizon-sized compact DWs; see Fig. 3 and the red dots in Fig. 1. Distinguishing between sub-horizon compact DWs and oscillons is challenging due to the limited lattice resolution, as both structures occupy only a few lattice spacings; therefore, sub-horizon compact DW and oscillon are interchangeable terms in this study. At the end of our simulation, sub-horizon compact DWs/oscillons are found to contribute to the residual energy density. However, their contribution is subdominant compared to that from free axion fields, such as axion clouds and mechanical waves (introduced next).

(4) Axion clouds: background axion field distributed around the vacua, on average with relatively large momentum \(k\gtrsim m_{a}\). They are shown as blue regions in the true vacuum and yellow regions in the false vacuum in Fig. 3. The formation of axion clouds can be induced by heating of the background axion field, i.e. increasing the oscillation amplitude (and thus the energy density) of the background axion field around the vacua through DW movements, specifically processes like flattening and compact DW collapse, which will be elaborated on shortly.

(5) Axion mechanical waves: the ripple-like structures in Fig. 3, originating from axion waves propagating outward from collapsing DWs (see Fig. 4). Compared to axion clouds, they have relatively lower momentum \(k\lesssim m_{a}\).
(6) Resonance: the phenomenon where a region of the axion clouds is divided into small wave-packets as particle-like structures, with a characteristic scale of \(k\sim 3.68\,m_{a}\) (this characteristic scale will be demonstrated by the spectral analysis in Sec. V). This characteristic \(k\) value is obtained by first visualizing it in the spatial dimension and then converting it to momentum space by Fourier transformation. We show the resonance in the lowermost-right subfigure of Fig. 3.

These objects are connected to each other via transformative processes (e.g. creation, annihilation) which can be categorized as the following _dynamic motions_, as identified from our simulation:

(1) _Flattening_ motion: This DW motion is analogous to laying a piece of paper flat (for example, as illustrated in Fig. 16), and we therefore refer to it as "flattening"; it originates from the DW tension and vacuum pressures, see Fig. 8. As a result of such a flattening process, the DW curvature and surface fluctuations are reduced, resulting in heating of the background axion field. Additionally, the flattening process induces the _contraction_ of compact domain walls, causing larger domain walls to transform into smaller ones. For instance, a super-horizon-sized compact domain wall contracts into a horizon-sized domain wall, as illustrated in Fig. 7.

(2) _Self-chopping_: the phenomenon where a segment of the DW shrinks and eventually breaks off from the 'parent' DW, leading to the splitting of the DW into two parts. This mechanism plays a crucial role in the DW network evolution, affecting DWs of any size. The upper-row subfigures in Fig. 3 illustrate the sub-horizon sized DW self-chopping (the first three subfigures) and horizon-sized DW self-chopping (the third subfigure), while a cartoon illustration can be found in Fig. 5. This process is analogous to self-intersection in cosmic string dynamics. Note that self-chopping is an intermediate process transferring energy from larger DWs to smaller DWs, which further decay to the final outcome (mostly free axions) through the collapse process defined below. Therefore we do not count self-chopping as an effective mechanism of DW energy release, unlike the flattening and collapse processes.

(3) _Collapse_ of horizon-sized compact DWs: the process during the final stage of DW evolution when a horizon-sized compact DW rapidly contracts and subsequently collapses while radiating the axion field in the form of mechanical waves and heating the background axion field. Such a process is illustrated in the upper-row subfigures of Fig. 3, and also as a cartoon in Fig. 4.

The complete evolution process of DWs can then be summarized as follows: At the beginning of the simulation, super-horizon-sized DWs transform into horizon-sized DWs via either contracting (by flattening) or dividing (by self-chopping). Following this, the horizon-sized DWs undergo a collapse, resulting in the emergence of axion mechanical waves and axion clouds, while also releasing a smaller amount of energy in the form of sub-horizon compact DWs.

Figure 1: Visualization of the lattice simulation with bias parameter \(\epsilon=0.0013\): snapshots in a time series (left to right: \(m_{a}t=21,43,97,385\)). The yellow (blue) region indicates a false (true) vacuum, and the red region represents DWs. The Hubble volume is shown as a black cube in the bottom-left corner of each snapshot (see animation for \(\epsilon=0.0012\)). The small red dots are sub-horizon compact DWs or oscillons, i.e. axion field oscillating around the false vacuum, surrounded by DWs (for a zoomed-in simulation of the dissipation of the small red dots, see: animation link).
Figure 2: DW area parameter (defined in Eq.(10)) as a function of cosmic time in our simulation, with varying bias parameter \(\epsilon\) (defined in Eq.(2)).

Throughout this entire process, the sub-horizon-sized DWs and oscillons undergo continuous self-chopping, while the background axion field continues to heat up, resulting in the formation of an axion cloud. The energy released during the evolution of DWs can be categorized according to two key mechanisms: flattening and collapse. In Section VI, we will delve into a detailed discussion and analysis of the energy release, with a specific emphasis on these two aspects, i.e. collapse, leading to \(\rho_{2}\) in Eq.(21), and flattening, leading to \(\rho_{3}\) in Eq.(27), respectively. Note that it may not be feasible to precisely separate the energy contributions arising from these two mechanisms, as both of them lead to heating of the background axion field. There are additional contributions from processes such as self-chopping and the subsequent decay of sub-horizon compact DWs, but these are comparatively insignificant compared to the essential processes mentioned above.

It is worth noting that in the analogous VOS model of cosmic strings, the majority of the energy is released through the formation of loops, primarily generated by the interaction of two long strings [71]. In contrast to the chopping process of cosmic strings, the probability of chopping due to the intersection of two DWs (cartoon illustration in Fig. 6) is negligibly low, and the majority of the energy loss is due to the two mechanisms, flattening and collapse, outlined above. The energy contribution from the self-chopping of sub-horizon compact DWs is negligible compared to that of horizon-sized compact DWs, as found in the simulation. Furthermore, it is observed that horizon-sized and sub-horizon sized compact DWs typically do not originate from the chopping of two horizon-sized DWs, as shown in Fig. 6, but rather from the contraction or self-chopping of larger, super-horizon or horizon-sized DWs.

### Scaling Regime

In our simulation, we track the evolution of the DWs and the pattern of energy loss from the DW network. A snapshot of the evolution is shown in Fig. 1, and for comparison, its counterpart with a non-biased potential is shown in Fig. 16 in Appendix B. The left-most snapshot is taken as the network enters the scaling regime, when the DWs flatten while expanding.

Figure 3: Visualization of the lattice simulation with the bias parameter \(\epsilon=0.0012\). The leftmost figure displays a snapshot of the entire simulation scale, where the domain wall (DW) is highlighted in red. The upper row shows a zoomed-in region of our simulation with a \(160^{3}\) lattice, accompanied by a further zoomed-in time series depicted at the bottom. The lower row comprises smaller lattice sizes. Both sets of sub-figures encompass a range of features discovered in this study, and a detailed discussion of these features is provided in the main text of Section IV.1.

Figure 4: A cartoon illustration of the DW collapse process.
Shortly after its formation (\(\Delta t\lesssim 10/m_{a}\)), the network approaches an attractor solution called the scaling regime, while releasing energy through the two mechanisms introduced in the last section: (1) the collapse of horizon-sized compact DWs; (2) the flattening motion. Meanwhile, super-horizon DWs continuously enter the horizon, which compensates for the energy loss due to both mechanisms, so that the DW area per horizon volume \(A_{v}\) remains constant. This constant solution is the defining feature of the scaling regime. Such a feature has been identified in the literature [12; 17; 44; 53; 54], and also agrees with our findings, as shown in Fig. 2. At a later time, the DWs start to decay around \(t_{\rm decay}\), and the scaling solution breaks down.

In the scaling regime, the DW energy density takes the following form:

\[\rho_{\rm DW}=\gamma^{2}\frac{\sigma_{\rm DW}A_{v}}{t}, \tag{9}\]

where \(\gamma\simeq 1\) is the Lorentz factor that represents the contribution of the kinetic energy of the DW, and the DW area parameter is given by (originally introduced in [50])

\[A_{v}\equiv\frac{A_{w}t}{R(t)V}=0.67^{+0.04}_{-0.04},\qquad{\rm for}\;\;\epsilon=0, \tag{10}\]

where \(A_{w}\) is the DW comoving area, and \(V\) is the comoving volume. The result largely agrees with previous simulation studies [12; 38; 44], but it is about 30% less than the prediction of the simulation assuming the PRS approximation for the DW network [47].

Figure 5: A cartoon of the DW self-chopping process.

Figure 6: A cartoon of two DWs chopping. This process is rarely observed in the simulation.

Figure 7: A cartoon of the DW contraction process with DW velocity \(v\). The Hubble box enlarges over time, while the compact DW contracts. The zoomed-in subfigure illustrates the DW tension and pressure that are also shown in Fig. 8.

Figure 8: Force diagram for the domain wall tension and vacuum pressures, illustrating the origin of the DW flattening motion.

On the other hand, in the metastable DW scenario, we find

\[A_{v}=c_{1}+c_{2}\operatorname{Exp}\left[-c_{3}\,\left(\epsilon\sqrt{m_{a}t}\right)^{c_{4}}\right],\quad\text{for}\ \ \epsilon>0, \tag{11}\]

with

\[c_{1}=0.0088^{+0.0009}_{-0.0009},\ \ c_{2}=0.62^{+0.06}_{-0.05},\]
\[c_{3}=3.98^{+0.40}_{-0.40}\times 10^{6},\ \ \ c_{4}=3.57^{+0.08}_{-0.11},\]

where the \(c_{1}\) term represents the residual compact DWs and oscillons at the end of the simulation. As mentioned, we cannot distinguish whether these are sub-horizon (i.e. much smaller than the horizon scale) compact DWs or oscillons, due to the limitations of the simulation period and resolution. The fitting model of Eq.(11) is inspired by the field theory analysis of [45], which employs a mean-field approximation and a Gaussian ansatz for the field probability distribution in the limit of a small bias term \(\epsilon\ll 1\). Moreover, the parameter \(c_{4}\sim 3\) is approximately the spatial dimension, as predicted in [45]. The fitting model of Eq.(11) also fits the data from other DW simulation studies [48; 49; 63].
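For convenience, Eqs.(9)–(11) can be packaged as a small helper evaluating the area parameter and the DW energy density at the central values of the fit (illustrative code, units \(f_a=1\)):

```python
import numpy as np

C1, C2, C3, C4 = 0.0088, 0.62, 3.98e6, 3.57   # central values of the fit in Eq.(11)

def A_v(t, eps, m_a=1.0):
    """DW area parameter: 0.67 for eps = 0 (Eq.(10)), Eq.(11) for eps > 0."""
    if eps == 0:
        return 0.67
    return C1 + C2 * np.exp(-C3 * (eps * np.sqrt(m_a * t)) ** C4)

def rho_DW(t, eps, m_a=1.0, f_a=1.0, N_DW=2, gamma=1.0, eta=8.0):
    """Scaling-regime DW energy density, Eq.(9) with the tension of Eq.(3)."""
    sigma = eta * m_a * f_a**2 / N_DW**2
    return gamma**2 * sigma * A_v(t, eps, m_a) / t

t = np.logspace(1, 3, 5)        # m_a * t from 10 to 1000
print(A_v(t, eps=1.3e-3))       # decay of the area parameter, cf. Fig. 2
```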
As the axion kinetic energy redshifts away, the true vacuum pressure force gradually overcomes the DW tension, which causes energy loss of the DW network. We define the characteristic decay time of the DW, \(t_{\text{decay}}\), as when the DW area \(A_{v}\) becomes \(\sim 10\%\) of the pre-collapse value, i.e. \(0.1A_{v}(t\to 0)=A_{v}(t_{\text{decay}})\). \(t_{\text{decay}}\) can be estimated from Eq.(11) as

\[t_{\text{decay}}\simeq\frac{\epsilon^{-2}}{m_{a}}\left(\frac{c_{\mu}}{c_{3}}\right)^{2/c_{4}}\simeq\frac{\epsilon^{-2}}{m_{a}}(3.22\pm 0.94)\times 10^{-4}, \tag{12}\]

where the factor

\[c_{\mu}=2.32^{+0.61}_{-0.60}. \tag{13}\]

Note that other semi-analytical estimation studies [12; 43] compare the pressure gap between the vacua, use a power-law model to fit their data, and predict \(t_{\text{decay}}\propto 1/\epsilon\). This causes a notable difference from our results in the prediction for the axion relic abundance, as shown in Sec. VII.

### Domain Wall Velocity

The velocity of a DW plays an important role in its equation of motion. We measure the velocity by tracking the movement of the maximum of the axion potential, \(V(a(x,t))=V_{\text{max}}\), in the simulation. The observed DW velocity is shown in Fig. 9 for varying \(\epsilon\). During the scaling regime the DW network at first decelerates (relative to the initial velocity set by the initial condition) due to the Hubble friction and the DW flattening motion; it then accelerates due to the pressure difference between the true and false vacua during the decay period \(t\sim t_{\text{decay}}\), and decelerates again as the network decays away at the later stage \(t>t_{\text{decay}}\). The peak of each curve is thus located at about \(t_{\text{decay}}\); see Fig. 10, where we compare the decay time \(t_{\text{decay}}\) as defined in Eq.(12) with the peak of the observed velocity. To fit the DW velocity, we consider the following model:

\[\gamma v=\frac{0.923\pm 0.136}{(m_{a}t)^{0.614\pm 0.031}}+\alpha_{v}e^{-(t-t_{\text{decay}})^{2}/(2\sigma_{v}^{2})}, \tag{14}\]

with

\[\alpha_{v}=(0.241\pm 0.039),\ \ \ \text{and}\ \ \sigma_{v}=(52\pm 20)\frac{1}{m_{a}}.\]

The second term in Eq.(14) captures the effect of the pressure difference between the true and false vacua in the decay phase: \(\alpha_{v}\) represents the magnitude of the acceleration, \(\sigma_{v}\) is the uncertainty in our observation, and the exponential indicates that the acceleration stops at about \(t\simeq t_{\rm decay}\). This section analyzed the domain wall velocity, which, along with the earlier discussions, paves the way for the next section, where we investigate the free axion spectrum resulting from the decay of the DWs.

Figure 9: The average \(\gamma v\) versus \(m_{a}t\) with varying bias parameter \(\epsilon\). The uncertainty bands are shown as shaded areas. The network enters the scaling regime at about \(m_{a}t\sim 15\); the earlier peak at \(m_{a}t\sim 10\) is due to the initial condition.

Figure 10: Bias parameter \(\epsilon\) versus axion mass times decay time, \(m_{a}t_{\text{decay}}\). The red bars are the decay times calculated via Eq.(12) using the fitting result of this study with Eq.(11). The black bars are estimated from the peaks in Fig. 9, i.e. the time at which the DW velocity starts to decelerate.
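The decay time of Eq.(12) and the fitted velocity curve of Eq.(14) are straightforward to evaluate at the central fit values (a sketch with our own function names):

```python
import numpy as np

C3, C4, C_MU = 3.98e6, 3.57, 2.32

def t_decay(eps, m_a=1.0):
    """Characteristic DW decay time, Eq.(12): A_v drops to 10% of its scaling value."""
    return (C_MU / C3) ** (2.0 / C4) / (eps**2 * m_a)

def gamma_v(t, eps, m_a=1.0):
    """Fitted gamma*v of Eq.(14): scaling-regime slowdown plus a bump at t ~ t_decay."""
    td, sig = t_decay(eps, m_a), 52.0 / m_a
    return 0.923 / (m_a * t) ** 0.614 + 0.241 * np.exp(-((t - td) ** 2) / (2 * sig**2))

eps = 1.2e-3
print(t_decay(eps))                                # ~3.2e-4 / eps^2, here ~224 (1/m_a)
print(gamma_v(np.array([20.0, 224.0, 400.0]), eps))  # peak near t_decay, cf. Fig. 9
```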
## V Free axion spectral analysis

In this section we discuss the details of the spectral analysis of the free axion energy density, which is the key input for estimating the axion dark matter relic density in Sec. VII. As discussed in Sec. III, we estimate the total free axion energy as twice the masked axion kinetic energy. We then compute the free axion spectrum according to [35; 38] as

\[\rho_{a}=\int dk\,\frac{\partial\rho_{a}}{\partial k},\quad\text{with}\quad\rho_{a}=\langle\hat{\dot{a}}^{2}\rangle, \tag{15}\]

where \(\hat{\dot a}\) denotes the masked field of Eq.(6), and the axion spectrum \(\partial\rho_{a}/\partial k\) is given by

\[\frac{\partial\rho_{a}}{\partial k}=\frac{k^{2}}{(2\pi L)^{3}}\int d\Omega_{k}\,|\tilde{\dot{a}}(k)|^{2}, \tag{16}\]

where \(\tilde{\dot{a}}(k)\) is the Fourier transform of \(\hat{\dot a}(x)\), \(L=(N/2)R(t)\Delta x_{i}\) is the size of the collected data range, and the momenta take the discrete values \(k=2\pi n/L\) with integer \(n\). In addition, we cut off momenta higher than the Nyquist frequency \(k_{\rm Ny}=\pi/(R(t)\Delta x_{i})\) to prevent corrupted data caused by insufficient resolution.

In Fig. 11 we show the free axion energy spectrum with snapshots of the cosmic time evolution, using different colors. The dark blue curve (\(m_{a}t=15\)) represents the spectrum when the network just enters the scaling regime, and the red curve (\(m_{a}t=360\)) presents the spectrum near the end of the simulation. We find that the spectrum can be fitted as a sum of three Gaussian distributions corresponding to distinct physical origins (to be explained later):

\[\frac{\partial\rho_{a}}{\partial k}=\sum_{i=1}^{3}\frac{\partial\rho_{i}(A_{i},k_{i},\sigma_{i})}{\partial k}, \tag{17}\]

where the labels \(i=\{1,2,3\}\) denote the three gray-dashed curves from low \(k\) to higher \(k\) in Fig. 11, associated with the first, second, and third peak, respectively. These curves are parameterized by

\[\frac{\partial\rho_{i}(A_{i},k_{i},\sigma_{i})}{\partial k}\equiv A_{i}\,e^{-(k-k_{i})^{2}/\sigma_{i}^{2}}, \tag{18}\]

where we set \(k_{1}\simeq 0\) due to the lack of data within the large-scale range \(k\leq 0.02\), as limited by the simulation size \(N=1536\), and \(k_{2}\simeq 0\) since the first peak dominates over the lower-\(k\) range associated with the second peak, making it challenging to discern the contribution of the second peak to the measurement, and

\[k_{3}=(3.68\pm 0.03)m_{a}, \tag{19}\]

which decreases as \(1/R(t)\) due to redshift after DW decay. We fit the parameters in Eq.(17) with data from each cosmic time snapshot of every simulation run (we show data from a single run in Fig. 11 as an example), then analyze their time dependence in the next section. The fitting results for the parameters and energy densities in Eq.(17) are given in Appendix B. We have verified the robustness of the peak at \(k_{3}\) by conducting additional test runs involving variations in the value of \(m_{a}\) and the lattice spacing, as outlined in Sec. III.1, but the magnitude of the peak may be subject to the inherent resolution of the simulation during its later stage (roughly when \(t\gtrsim 300/m_{a}\)), as \(k_{3}\) in Eq.(19) closely approaches the Nyquist frequency during this later stage.

We observe that \(\rho_{1}\) is in reasonable agreement with the energy density of axions produced through the misalignment mechanism, specifically \(\rho_{1}\sim m_{a}^{2}f_{a}^{2}/2N_{\rm DW}^{2}\) [72], at the early stage of the simulation, and then redshifts like matter. As a result of this redshift, the spectral line associated with this contribution progressively shifts towards lower frequencies over time.
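The spectral pipeline of Eqs.(15)–(18) amounts to a radially binned FFT power spectrum followed by a three-Gaussian decomposition; a compact sketch under these assumptions (numpy/scipy; normalization and names our own, grid size reduced):

```python
import numpy as np
from scipy.optimize import curve_fit

def spectrum(adot_masked, L, nbins=50):
    """Radially binned d(rho_a)/dk of the masked adot field, cf. Eqs.(15)-(16)."""
    N = adot_masked.shape[0]
    power = np.abs(np.fft.fftn(adot_masked))**2 / N**6   # |FT|^2, up to normalization
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    k_ny = np.pi * N / L                                 # Nyquist cutoff, see Sec. V
    bins = np.linspace(0.0, k_ny, nbins + 1)
    rho_k, _ = np.histogram(kmag, bins=bins, weights=power)
    dk = bins[1] - bins[0]
    return 0.5 * (bins[1:] + bins[:-1]), rho_k / dk      # k, d(rho)/dk

def three_gaussians(k, A1, s1, A2, s2, A3, k3, s3):
    """Fit model of Eqs.(17)-(18), with k1 = k2 = 0 fixed as in the text."""
    g = lambda A, k0, s: A * np.exp(-((k - k0) ** 2) / s**2)
    return g(A1, 0.0, s1) + g(A2, 0.0, s2) + g(A3, k3, s3)

# k, dpdk = spectrum(adot_masked, L)
# popt, _ = curve_fit(three_gaussians, k, dpdk, p0=[1, 0.1, 1, 1, 1, 3.68, 1])
```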
The free axion energy density \(\rho_{2}\) in Eq.(17) carries the energy contribution at scales \(k\lesssim m_{a}\). We attribute this energy component to the axion mechanical waves originating from the collapse of horizon-sized compact DWs.

Figure 11: Free axion energy density spectrum \(\partial\rho_{a}/\partial k\) as a function of physical momentum \(k\), assuming the bias parameter \(\epsilon=0.0011\). The early-to-late spectrum is shown from blue to red. The spectrum can be split into three Gaussian distributions, shown as dashed gray curves corresponding to the three contributing terms in Eq.(17). From low \(k\) to higher \(k\), these three Gaussian distributions represent the energy density from misalignment (\(k/m_{a}\lesssim 0.2\)); from free axions radiated by compact DW self-chopping and collapse (\(k/m_{a}\lesssim 1\)); and from the small-structure axion field, such as the axion clouds with the resonance at \(k\sim\mathcal{O}(m_{a})\), respectively. The smaller \(k<0.01m_{a}\) region lacks data because of the simulation lattice size, and higher \(k\) has been cut at the Nyquist frequency, as discussed in Sec. V.

There are two reasons for this interpretation: (1) The energy spectrum of \(\rho_{2}\) is consistent with the scale range of the axion mechanical waves, i.e. \(k\lesssim m_{a}\). (2) \(\rho_{2}\) aligns well with the production process of compact DWs according to the data fitting (see Eq.(22); the details will be provided in the next section), as predicted by the DW VOS model in the context of DW chopping [54]. It is important to note that while we observe the self-chopping phenomenon (as discussed in Sec. IV.1), it differs from the definition of two-DW chopping in the VOS model. Nonetheless, they share a similar energy-loss form in the equation of motion, as we will see in Sec. VI.

The energy density \(\rho_{3}\) can be interpreted as the contribution from axion clouds with a resonance at \(k_{3}\). This energy arises from various processes, as discussed in Sec. IV.1. We anticipate that the primary contribution to this energy comes from the annihilation of fluctuations on the DW surface through the flattening motion, because the estimate of the energy released from these fluctuations aligns well with the energy density \(\rho_{3}\), as demonstrated in Sec. VI.

The energy release mechanisms discussed in Sec. IV.1 occur in both the scaling regime and the decay period, and compact DW collapse is more likely to occur in the decay period. In other words, the biased potential significantly accelerates the DW flattening, contraction, and self-chopping. During the decay phase, we find that the production of axion clouds (\(\rho_{3}\)) increases by about \(\sim 70\%\), and the radiation of longer-wavelength axion mechanical waves (\(\rho_{2}\)) is enhanced by about \(\sim 30\%\), compared to the scaling regime. These percentages are estimated at the time \(t_{\rm decay}(\epsilon\to 0.0012)\), by comparing the outcomes of the \(\epsilon=0.0012\) and \(\epsilon=0\) scenarios.

## VI Model for domain wall evolution

In this section, we present the coupled evolution equations for the energy densities of the DW network and of the free axions emitted from the DWs. The two components of the axion energy density sourced by different DW dynamics, \(\rho_{2}\) and \(\rho_{3}\), identified via the spectral analysis and the monitoring of the simulation evolution in Sec. IV.1 and Sec. V, are the key inputs in this section. Here we quantitatively model these contributions, \(\rho_{2}\) and \(\rho_{3}\), respectively, by numerically fitting the simulation data.
We extract time-dependent data from the simulations in Sec. V and fit them into the DW evolution equations in this section. We first generalize the DW evolution equation of the VOS model for a stable DW network [53; 54] as follows:

\[\frac{d\rho_{\rm DW}}{dt}=-(1+3v^{2})H\rho_{\rm DW}-\left.\frac{d\rho_{\rm DW}}{dt}\right|_{\rm to2}-\left.\frac{d\rho_{\rm DW}}{dt}\right|_{\rm to3}, \tag{20}\]

where the right-hand side of the equation represents, in order, the redshift effect and the DW energy loss to \(\rho_{2}\) and to \(\rho_{3}\), respectively. Here we have reasonably assumed that the final form of the DW energy release is free axions, as gravitational wave radiation, albeit inevitable, is expected to be subleading. By energy conservation, the latter two terms in Eq.(20) also enter the evolution equations of the free axions, which is essential for solving for the axion relic abundance. As revealed by the spectral analysis based on the simulation results, free axion production from DWs can be roughly divided into two kinetic regions associated with distinct DW dynamics, corresponding to \(\rho_{2}\) and \(\rho_{3}\). It is thus reasonable to consider the evolution of the \(\rho_{2}\) and \(\rho_{3}\) components separately, and then sum their solutions for the total axion abundance.

We first write down the evolution equation for \(\rho_{2}\), which originates from the collapse of compact DWs:

\[\frac{d\rho_{2}}{dt}=-3H\rho_{2}+\left.\frac{d\rho_{\rm DW}}{dt}\right|_{\rm to2}, \tag{21}\]

where the \(3H\) factor reflects the finding that this spectral component of axions generally has a longer wavelength and behaves like cold matter, and corresponds to the axion mechanical waves introduced earlier. The second term on the right-hand side reflects energy conservation and the aforementioned assumption that the DW energy release goes \(100\%\) into axions. As the second term descends from the formation of compact DWs through DW self-chopping, we can explicitly model its evolution as follows:

\[\left.\frac{d\rho_{\rm DW}}{dt}\right|_{\rm to2}=\tilde{c}_{v}v\frac{\rho_{\rm DW}}{L_{\rm DW}}, \tag{22}\]

where the self-chopping efficiency parameter \(\tilde{c}_{v}\) can be modeled as

\[\tilde{c}_{v}\equiv c_{v}\gamma^{c_{\gamma}}\mathcal{A}_{F}^{-c_{\mathcal{A}}}, \tag{23}\]

with

\[c_{v}=0.36^{+0.07}_{-0.03},\quad c_{\gamma}=3.36^{+0.93}_{-0.58},\quad c_{\mathcal{A}}=1.55^{+0.04}_{-0.06}, \tag{24}\]

and where \(L_{\rm DW}=\gamma^{2}\sigma_{\rm DW}/\rho_{\rm DW}\) is the DW correlation length. The values of the parameters \(c_{v}\), \(c_{\gamma}\), and \(c_{\mathcal{A}}\) are calibrated by the simulation data; data from a single run are shown in Fig. 11 and Fig. 12. \(\mathcal{A}_{F}\) is the area fraction parameter:

\[\mathcal{A}_{F}\equiv\frac{A_{v}(\epsilon)}{A_{v}(\epsilon\to 0)}, \tag{25}\]

where \(A_{v}\) is defined in Eq.(11). In the limit of a non-relativistic and stable DW, i.e. \(\gamma\to 1\) and \(\epsilon\to 0\), Eq.(22) approaches the expression \(c_{v}v\frac{\rho_{\rm DW}}{L_{\rm DW}}\), which was used to describe the energy loss resulting from the intersection of DWs, leading to the creation of compact DWs that eventually collapse. This term was originally introduced by Kibble in the context of the cosmic string network [71], and later applied to the stable DW VOS model [54] for two-DW chopping. We slightly modify its physical interpretation to self-chopping and use it to explain our data (see Fig. 12).
The factor \(\mathcal{A}_{F}^{-c_{\mathcal{A}}}\) captures the simulation finding that compact DW production is more efficient during the decay phase, \(v\rho_{\rm DW}/L_{\rm DW}\) represents the likelihood of DW self-chopping, and \(\gamma^{c_{\gamma}}\) indicates that an accelerated DW velocity increases the rate of self-chopping. We further estimate the solution for \(\rho_{2}\) by numerically solving the axion radiation equation Eq.(21) with Eq.(22); it can be fitted as:

\[\rho_{2}\left(\frac{R(t)}{R(t_{\rm decay})}\right)^{3}\simeq 2\tilde{c}_{v}v\rho_{\rm DW}\bigg{|}_{\epsilon\to 0,\ t\to t_{\rm decay}}. \tag{26}\]

The dominant DW contribution to the \(\rho_{2}\) component of the axions comes from the era around \(t_{\rm decay}\), and the radiated axions redshift like matter afterward. This solution can be understood as a consequence of energy conservation.

Next, we consider the evolution equation for the \(\rho_{3}\) component, mostly due to axion cloud production from the DW flattening motion as discussed in Sec. IV.1. By analogy with Eq.(21) for \(\rho_{2}\), we have:

\[\frac{d\rho_{3}}{dt}=-\lambda_{3}H\rho_{3}+\frac{d\rho_{\rm DW}}{dt}\bigg{|}_{\rm to3}, \tag{27}\]

where \(\lambda_{3}\) represents the time-dependent redshift of this component of the axion energy density. As shown in the spectral analysis, at production these axions are on average (semi-)relativistic with a shorter wavelength, and thus radiation-like with \(\lambda_{3}\simeq 4\); the axions then cool down and become matter-like with \(\lambda_{3}=3\) [73]. For simplicity, we use the following function for \(\lambda_{3}\) to fit the spectrum:

\[\lambda_{3}=\left\{\begin{aligned} & 4\quad\text{for}\quad t<t_{\rm decay},\\ & 3\quad\text{for}\quad t\geq t_{\rm decay}.\end{aligned}\right. \tag{28}\]

The evolution of the DW energy loss that leads to this component of axion production can be modeled as (to be explained below):

\[\frac{d\rho_{\rm DW}}{dt}\bigg{|}_{\rm to3}=\frac{1}{2}\frac{d}{dt}\left[\rho_{\rm DW}(1-v^{2})^{c_{f2}}\left(\frac{m_{a}}{H}\right)^{c_{f1}(1-\mathcal{A}_{F})}\right]\equiv\frac{1}{2}\frac{d}{dt}\mathcal{F}_{A}(t), \tag{29}\]

where the parameters are calibrated by the simulation data as:

\[c_{f1}=0.44^{+0.20}_{-0.20},\quad c_{f2}=3.61^{+0.90}_{-0.98}. \tag{30}\]

We also show a fitting result for \(\epsilon=0.0012\) in Fig. 13 as an example. Similar to the case of \(\rho_{2}\), the numerical solution of Eq.(27) can be fitted as

\[\rho_{3}\left(\frac{R(t)}{R(t_{\rm decay})}\right)^{3}\simeq\mathcal{F}_{A}\bigg{|}_{\epsilon\to 0,\ t\to t_{\rm decay}}. \tag{31}\]

We have chosen the model fitting form given by Eq.(29) for the following reasons. Firstly, the energy of the perturbations per unit area of the DW increases with the scalar (axion) mass \(m_{a}\), as estimated in [74]. Additionally, the total area of the horizon-sized DWs within a horizon decreases as \(H\) increases, and it is expected that the energy loss of the DWs is greater for a higher overall DW energy density \(\rho_{\rm DW}\). These considerations are captured by the variables \(1/H\) and \(\rho_{\rm DW}\), respectively, along with their functional form in Eq.(29). In addition, the power \(c_{f1}(1-\mathcal{A}_{F})\) renders the dimensionless parameter \(m_{a}/H\) negligible in the scaling regime, which captures the fact that the energy release from DW fluctuations becomes more significant in the scenario of metastable DWs (i.e. \(\epsilon\neq 0\)). We also introduced a simple velocity dependence in Eq.(29), as preferred by the numerical fitting, which implies that a significant contribution to \(\rho_{3}\) occurs around the peaks shown in Fig. 9, i.e. when \(t\sim t_{\rm decay}\).
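Putting Eqs.(20)–(31) together, the axion components can be obtained by integrating the source terms with \(H=1/(2t)\). Below is a schematic integrator at the central fit values (our own simplified sketch, not the paper's exact numerics; in particular the sign handling of the \(\mathcal{F}_{A}\) drain and the zero-clipping of the sources are our own sketch-level choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

M_A, SIGMA = 1.0, 2.0                  # m_a and sigma_DW = 8/N_DW^2 (f_a = 1, N_DW = 2)
C1, C2, C3, C4, C_MU = 0.0088, 0.62, 3.98e6, 3.57, 2.32
CV, CG, CA = 0.36, 3.36, 1.55          # self-chopping fit, Eq.(24)
CF1, CF2 = 0.44, 3.61                  # flattening fit, Eq.(30)
EPS = 1.2e-3
TD = (C_MU / C3) ** (2 / C4) / EPS**2  # t_decay, Eq.(12), in 1/m_a units

A_v   = lambda t: C1 + C2 * np.exp(-C3 * (EPS * np.sqrt(t)) ** C4)   # Eq.(11)
A_F   = lambda t: A_v(t) / 0.67                                      # Eq.(25)
v     = lambda t: np.clip(0.923 / t**0.614
                          + 0.241 * np.exp(-(t - TD)**2 / (2 * 52.0**2)), 0, 0.99)
gam   = lambda t: 1.0 / np.sqrt(1.0 - v(t)**2)
rhoDW = lambda t: gam(t)**2 * SIGMA * A_v(t) / t                     # Eq.(9)
# Eq.(29): F_A(t), with m_a/H = 2 m_a t in the radiation era:
FA    = lambda t: rhoDW(t) * (1 - v(t)**2)**CF2 * (2 * t) ** (CF1 * (1 - A_F(t)))

def rhs(t, y):
    rho2, rho3 = y
    H = 0.5 / t
    L = gam(t)**2 * SIGMA / rhoDW(t)                       # DW correlation length
    src2 = CV * gam(t)**CG * A_F(t)**(-CA) * v(t) * rhoDW(t) / L     # Eq.(22)
    dt = 1e-2 * t                                          # numerical d/dt of F_A
    src3 = max(-0.5 * (FA(t + dt) - FA(t - dt)) / (2 * dt), 0.0)
    lam3 = 4.0 if t < TD else 3.0                          # Eq.(28)
    return [-3 * H * rho2 + src2, -lam3 * H * rho3 + src3]

sol = solve_ivp(rhs, [15.0, 400.0], [0.0, 0.0], rtol=1e-6)
print(sol.y[:, -1])   # rho_2 and rho_3 (in m_a^2 f_a^2 units) at the end of the run
```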
It is important to note that the DW fluctuation (scalar perturbation) radiation term described in [47] represents the axion radiation resulting from the annihilation of surface fluctuations, which corresponds to the \(\rho_{3}\) component in this study. They find that the chopping effect [75] in the VOS model, which results in \(\rho_{2}\) in this study, is negligible in their simulation. Their conclusion does not align well with the axion spectrum depicted in Fig. 11 as found from our simulation. This discrepancy may be attributed to the use of the PRS algorithm [50] in [47], which can inaccurately model the DW dynamics of small-scale structures, as pointed out in [44].

There are also caveats identified from our detailed analysis that are worth reiterating. Firstly, \(\rho_{3}\) encompasses not only the radiation from the flattening of the surface fluctuations of the DWs, but also (sub-dominant) contributions from, for instance, the collapse of horizon-sized compact DWs, which also leads to heating of the background axion field, as discussed in Sec. IV.1. Secondly, in the later stages of the simulation, the characteristic energy scale of \(\rho_{3}\) becomes close to the Nyquist frequency, which may result in considerable observational uncertainties, as discussed in Sec. V.

In this section we introduced the coupled evolution equations for the DW network and the free axions from the DWs, using \(\rho_{2}\) and \(\rho_{3}\) from the spectral analysis, and provided an estimate of the axion production. The DW evolution equation, incorporating the redshift effects and the energy loss to \(\rho_{2}\) and \(\rho_{3}\), demonstrates the relation between DW energy loss and axion production. The separate equations for \(\rho_{2}\) and \(\rho_{3}\) capture horizon-sized compact DW creation and collapse, and axion cloud production with axion field resonance, respectively. In the next section, we apply the results obtained here to the prediction of \(\Omega_{a}\).

## VII Cosmological implication

In this section, we estimate the contribution of DWs to the relic density of axions based on the results obtained in the earlier sections, and present the viable parameter space of our model. We apply our result to the \(N_{\rm DW}=2\) ALP model (see Eq.(2)) with pre-inflationary PQ symmetry breaking (so that cosmic strings are simply absent) as a concrete example. We then present an illustrative analysis that includes the potential contribution of cosmic strings to the axion relic abundance in Appendix A.

The contribution of the standard misalignment mechanism to the axion relic density is found to be negligible compared to the DW contribution in the parameter space of our interest (\(\epsilon\lesssim 10^{-3}\)): \(\rho_{\rm mis}/\rho_{\rm DW}<1\%\), where \(\rho_{\rm mis}\simeq m_{a}^{2}f_{a}^{2}/2N_{\rm DW}^{2}\) is the axion energy density from the misalignment mechanism, and \(\rho_{\rm DW}\) is the contribution from DW decay. We thus neglect its contribution in the subsequent discussions. The DW contribution to the relic axions is given by the solutions to the evolution equations Eq.(21) and Eq.(27), together with their numerical results in Eq.(26) and Eq.(31), respectively. The total axion energy density is \(\rho_{a}=\rho_{2}+\rho_{3}\).
In order to estimate the axion relic density \(\Omega_{a}\) from DWs, we numerically fit \(\rho_{2}+\rho_{3}\) based on the data points in Fig. 9, and extrapolate the result to lower \(\epsilon\)'s and a wider range of \(m_{a}\). Our fitting result for the DW contribution to \(\Omega_{a}\) is

\[\Omega_{\rm a}^{\rm DW}h^{2}\simeq 0.116\left(\frac{m_{a}}{2\times 10^{-4}\,{\rm eV}}\right)^{-1.50^{+0.02}_{-0.02}}\times\left(\frac{\Lambda}{400\,{\rm MeV}}\right)^{4}\left(\frac{\epsilon}{10^{-4}}\right)^{-1.87^{+0.35}_{-0.44}}, \tag{32}\]

where the uncertainties are fitted within the \(m_{a}\) and \(\epsilon\) ranges given in Fig. 14. The benchmark example with \(\Lambda=\Lambda_{\rm QCD}\) is shown in Fig. 14, where the parameter region that predicts the observed axion DM relic density \(\Omega_{\rm a}=(0.12\pm 0.0012)h^{-2}\) lies in the white area. We also considered the BBN constraint \(t_{\rm decay}<0.01\)s [77; 78], and the CMB constraint that DWs should decay before photon decoupling. In addition, the region above the black horizontal dashed line, corresponding to \(\epsilon=5\times 10^{-3}\) (see also Fig. 25 in Appendix B), indicates that the DW network does not have sufficient time to transition into the scaling regime before its decay. Furthermore, we have fixed \(\Lambda=\Lambda_{\rm QCD}\) in Fig. 14 as a QCD example, but Eq.(32) can apply to general ALPs by varying \(\Lambda\), and the constraints shown in Fig. 14 related to the axion relic abundance would be relaxed for smaller \(\Lambda\)'s.

Figure 12: The energy density of the second Gaussian fitting function (related to \(\rho_{2}\)), as given in Fig. 11 and Eq.(17), with \(\epsilon=0.0012\) fixed. The black curve presents the prediction of the axion production model Eq.(21), which implies the energy loss of the DW network through horizon-sized compact DW collapse, as discussed in Sec. IV.1.

Figure 13: The energy density of the third Gaussian fitting function (related to \(\rho_{3}\)), as given in Fig. 11 and Eq.(17), with \(\epsilon=0.0012\). The blue curve presents the prediction of the axion production model Eq.(27), which implies the energy loss of the DW network through the DW flattening motion discussed in Sec. IV.1. The vertical line \(A_{v}\to 10\%\) corresponds to the time \(t_{\rm decay}\) when \(A_{v}\) becomes \(10\%\) of its value in the scaling regime (Eq.(10)).

Figure 14: Viable parameter region of the axion model considering the DW contribution to the axion relic density as estimated by this work (assuming \(\Lambda=\Lambda_{\rm QCD}\)). The white region indicates that the axion relic abundance is sufficient to account for the observed dark matter as measured by the Planck Observatory (\(\Omega_{\rm DM}=(0.12\pm 0.0012)h^{-2}\)) [76], taking into account both the misalignment mechanism and the DW contribution. The width of the white region represents the uncertainty associated with extrapolation, which expands as \(\epsilon\) decreases. Above the black-dashed horizontal line, the DW has not entered the scaling regime before its decay. The yellow area indicates that the produced axions partially contribute to dark matter, while the orange area indicates an overproduction of dark matter. The blue-dashed line represents the prediction \(\Omega_{\rm DM}=\Omega_{\rm a}^{\rm DW}\propto\epsilon^{-1/2}\) from a previous simulation study [12]: the area to the lower left of the line indicates overproduction. The result from [12] is shown as a thin line, as the error bar given there is tiny. The grey/dark grey areas are excluded by the BBN constraint and CMB observations, respectively, as DWs must decay prior to the BBN and CMB eras (\(t_{\rm decay}<0.01\)s) [77; 78].
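As a worked example of Eq.(32) and the BBN bound, the following evaluates \(\Omega_{\rm a}^{\rm DW}h^2\) and the decay time in seconds over a few benchmark points (central fit values only; \(\hbar=6.582\times 10^{-16}\) eV s is used for the unit conversion; a sketch):

```python
HBAR = 6.582e-16          # eV * s
T_FIT = 3.22e-4           # central prefactor in Eq.(12)

def omega_dw_h2(m_a_eV, eps, Lambda_MeV=400.0):
    """Central-value fit of Eq.(32) for the DW contribution to the relic density."""
    return (0.116 * (m_a_eV / 2e-4) ** (-1.50)
                  * (Lambda_MeV / 400.0) ** 4
                  * (eps / 1e-4) ** (-1.87))

def t_decay_sec(m_a_eV, eps):
    """Eq.(12) in seconds; must satisfy t_decay < 0.01 s (BBN, [77; 78])."""
    return T_FIT * HBAR / (eps**2 * m_a_eV)

for eps in [1e-5, 1e-4, 1e-3]:
    m = 2e-4              # eV
    print(f"eps={eps:.0e}: Omega h^2 = {omega_dw_h2(m, eps):.3g}, "
          f"t_decay = {t_decay_sec(m, eps):.3g} s")
```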
Fig. 14 also includes a comparison between the results of our study and those of the previous 2D simulations of metastable DWs [12; 38]. We use the dashed blue curve to represent the prediction [79] of the DW contribution to the axion relic abundance as presented in [12]. Both studies have technical limitations that restrict their simulations to relatively large values of \(\epsilon\gtrsim\mathcal{O}(10^{-3})\), and extrapolations are made to smaller \(\epsilon\) and different \(m_{a}\) values. Our estimate of the axion relic abundance for \(\epsilon\sim 10^{-4}\) to \(10^{-3}\) roughly agrees with that of [12], but the discrepancy becomes increasingly significant as \(\epsilon\) decreases. For example, the DW network produces more axion energy density in our analysis compared to [12] in the smaller-bias region \(\epsilon\lesssim 10^{-4}\), while it yields less axion energy density for \(\epsilon\gtrsim 10^{-4}\). In [12] the fit for the axion relic density from DWs is \(\Omega\propto\epsilon^{-1/2}m_{a}^{-3/2}\). The discrepancy between their result and ours may arise from the differences in the fitting models chosen for the DW dynamics, especially for the DW decay behavior \(A_{v}\). This \(A_{v}\) controls the energy density of the DWs and describes their decay process, and consequently influences the axion production. We adopt the fitting model described by Eq.(11), whereas [12] employs a power-law form \(A_{v}\propto t^{1-p}\) with a pressure calibration parameter \(p\). This power-law model was investigated in [63; 80]. They analyze the pressure gap between the different vacua, and conclude that the collapse of the DWs occurs when the pressure of the true vacuum overcomes that of the false vacuum, which takes place at \(t_{\rm decay}\sim\sigma_{\rm DW}/\Delta V\propto\frac{1}{\epsilon\,m_{a}}\), where \(\Delta V\) represents the difference in potential between the vacua. However, the fitting model described by Eq.(11) and Eq.(12) in our work provides a much better fit to our simulation results. The fitting formulae that we use are inspired by the mean-field approximation analysis of [45], as discussed in Sec. IV.2.

## VIII Conclusion

This work presents an updated study of the dynamics and evolution of long-lived, metastable axion DWs, with a DW number of \(N_{\rm DW}=2\) as a benchmark. The study incorporates 3D lattice simulations and a semi-analytical approach based on the VOS model. Our analysis includes a study of the DW evolution dynamics via monitoring of the simulation snapshots, and a detailed examination of the axion kinetic energy spectrum. We infer the mechanisms of axion production sourced by the DWs and the corresponding energy loss mechanisms of the DWs. The contribution to the relic abundance of axions from the DWs is then derived by numerical fitting and extrapolation, and is found to be significantly greater than that from the misalignment mechanism for a small bias parameter \(\epsilon\lesssim 5\times 10^{-3}\). Based on the features in the axion energy spectrum obtained from our simulation (see Sec.
IV.1), we identified two distinct components, or kinetic energy regimes, of the axions: the shorter-wavelength axion clouds with a resonance around \(k\sim 3.68\,m_{a}\), with a larger impact on the small-scale region of the axion spectrum; and the longer-wavelength axion mechanical waves with \(k\lesssim m_{a}\). These two features are sourced by different DW dynamics. The axion clouds primarily arise from the flattening motion of the horizon-scale DWs, which smooths the fluctuations on the DW surface while heating (i.e. enlarging the oscillation amplitude of) the background axion field. On the other hand, the axion mechanical waves are mostly generated by the collapse of the horizon-sized compact DWs, which are formed by the self-chopping or contraction processes of the super-horizon sized DWs.

Based on these identified features and their corresponding sources, we derive equations governing the evolution of the DWs, built upon the existing VOS model (for stable DWs) while extending it to incorporate the decay phase of the DWs. By energy conservation, the evolution equation of the DWs is coupled to that of the free axions. By solving these equations numerically, we determine the present-day relic abundance of axions.

Our findings align with some earlier literature in terms of the scaling solution, the DW area \(A_{v}\) in Eq.(10), and the self-chopping effect in the VOS model. Meanwhile, notable differences are identified and thoroughly discussed. In particular, our prediction for \(\Omega_{a}(m_{a},\ \epsilon,\ \Lambda)\) takes a different form compared to the results found in [12; 38], as shown in Eq.(32) and Fig. 14. This discrepancy, which is likely caused by the mathematical fitting model for the DW area evolution \(A_{v}\), has potentially significant implications for axion dark matter physics and related experimental probes. Consequently, we predict a larger \(\Omega_{a}\) from the DW decay process in the range \(\epsilon\lesssim 10^{-4}\) compared to the earlier simulation study [12], and a smaller \(\Omega_{a}\) for larger \(\epsilon\).

While we directly simulated a simple axion model using the potential described in Eq.(2), we have demonstrated that the results can be applied to certain ALP models and QCD axion models, with a bias parameter \(\epsilon\lesssim 10^{-3}-10^{-4}\) that ensures that the DW thickness can be treated as constant before the DWs decay away. See the discussion in Sec. III.2 for the conditions of general applicability, and Sec. VII and Appendix A for numerical examples of the application to stringless ALP/QCD models and QCD axion string-wall networks, respectively. In particular, we considered axion masses in the range \(10^{-6}\leq m_{a}\leq 1\) eV with a fixed DW phase transition scale \(\Lambda=\Lambda_{\rm QCD}\) as a benchmark.

Notably, our study improves upon the existing literature by including the biased potential in the 3D field simulation without relying on approximations such as the PRS algorithm. To ensure efficient simulation with this more accurate treatment, we focused on the benchmark case of \(N_{\rm DW}=2\) and decoupled the radial mode, which is a reasonable assumption for the relevant time range of DW formation. It is worth exploring further by considering \(N_{\rm DW}>2\) and simulating the full complex scalar field. The dynamics of DWs identified in this study can provide new insights into the physics of axion-like DWs and other types of DWs, such as those arising from GUT models.
The updated results on axion DW dynamics presented here are also expected to have implications for astrophysical observables related to axion physics, including gravitational wave signals from axion DWs and the formation of axion minihalos as relic overdense energy regions originating from DW decay. ## Acknowledgement The authors are supported in part by the US Department of Energy under award number DE-SC0008541. The simulation in this work was performed with the UCR HPCC. ## Appendix A The application to the QCD axion case with a string-wall network In order to estimate the axion energy density generated by cosmic strings, we employ a conservative estimate outlined in [32]. They simulated the QCD axion cosmic string evolution in the scenario with a post-inflationary PQ symmetry breaking and a short-lived DW (\(N_{\rm DW}=1\)) that formed at the QCD phase transition. We considered \(N_{\rm DW}=2\) in this study; however, we can still apply the cosmic string contribution to the axion field, as given in [32; 35; 36], to our study. There are two main reasons for this. First, the contribution from cosmic strings that decayed prior to the QCD phase transition should match the simulations presented in references [32; 35; 36], as this is sourced before DW formation and is thus independent of the DW details. Second, shortly after the QCD phase transition, the DW tension becomes dominant within the string-DW network, as long as the condition introduced in Eq.(8) is met. Therefore, in this regime, the string contribution to the axion abundance would be subleading relative to that from the DWs, and the possible variance compared to the \(N_{\rm DW}=1\) case would be insignificant. As discussed in Sec. III.2, our simulation result can be applied to the DW domination period in the QCD axion string-wall network if the following two conditions are met: (1) The domain wall (DW) becomes dominant within the string-wall network (\(t>\mu/\sigma_{\rm DW}\), as shown in Eq.(8)), and subsequently, it has sufficient time (\(\Delta t\sim 10/m_{a}\)) to transition into the scaling regime before its eventual decay. This condition ensures that the influence of the cosmic strings on the network becomes negligible and eliminates sensitivity to the initial field distribution, thanks to the attractor (scaling) solution offered by the DWs. Furthermore, we specifically consider a scenario with a domain wall number of \(N_{\rm DW}=2\), where two DWs are attached to a single cosmic string. In this case, the DW tension prevails for \(t>\mu/\sigma_{\rm DW}\), rendering the impact of the cosmic string negligible. Consequently, the network behaves no differently from a pure DW scenario (\(N_{\rm DW}=2\) without strings) in our simulation. This expectation for the network's evolution after the DW tension dominates is consistent with the findings of our study. (2) The QCD axion domain wall thickness is time-dependent, as shown in Eq.(7), until the cosmic temperature reaches \(T=\Lambda_{QCD}\), while our simulation considered a constant thickness. Therefore, in order to be self-consistent, the second condition is that the DW network should be long-lived enough to enter the scaling regime after \(T=\Lambda_{QCD}\). As will be discussed in the following paragraphs, Fig. 15 shows these two conditions with a red solid line and a green dashed line, respectively. 
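To make these two conditions concrete, the following minimal sketch (in natural units) encodes them exactly as stated above; the function names and all numerical inputs are illustrative placeholders, not values from our simulation, and reusing \(\Delta t\sim 10/m_{a}\) as the scaling-entry time in condition (2) is an assumption:

```python
# Minimal sketch of the two applicability conditions above (natural units, hbar = c = 1).
# All numbers below are hypothetical placeholders, not the paper's actual parameters.

def dw_domination_time(mu, sigma_dw):
    """Eq.(8): wall tension dominates the string-wall network for t > mu / sigma_DW."""
    return mu / sigma_dw

def condition_1(mu, sigma_dw, m_a, t_decay):
    """Condition (1): the network reaches scaling (Delta t ~ 10/m_a after domination)
    before the walls decay."""
    return dw_domination_time(mu, sigma_dw) + 10.0 / m_a < t_decay

def condition_2(t_qcd, t_decay, m_a):
    """Condition (2): the network is long-lived enough to enter scaling after
    T = Lambda_QCD (scaling-entry time reused as an assumption)."""
    return t_qcd + 10.0 / m_a < t_decay

# Example usage with made-up numbers, everything measured in units of 1/m_a:
m_a = 1.0
mu, sigma_dw = 50.0, 10.0        # hypothetical string and wall tensions
t_qcd, t_decay = 3.0, 40.0       # hypothetical QCD-transition and wall-decay times
print(condition_1(mu, sigma_dw, m_a, t_decay), condition_2(t_qcd, t_decay, m_a))
```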
The calculation of the axion energy density produced by cosmic strings in the references [32; 35; 36] considers two distinct contributions from these cosmic structures: (A) Axion radiation during the evolutionary phase, which starts at the cosmic string formation and ends around the QCD phase transition: this contribution arises from the emission of axion radiation by cosmic strings as they evolve. This emission takes place during the earlier phases of cosmic string evolution. This component is particularly significant in determining the axion relic abundance. In the QCD axion model (see Eq.(7)), the mass of a single axion particle \(m_{a}(T)\) is strongly suppressed at the high temperatures of earlier times, following the relationship \(m_{a}(T)\propto T^{-4}\). Consequently, axions with lighter masses are produced during this phase. (B) Decay of the remaining cosmic strings at the QCD phase transition: The second contribution stems from the complete decay of the cosmic strings that remain after their evolutionary phase. This decay occurs at the QCD phase transition. The dominant role in determining the axion relic abundance is played by the first contribution (A), where lighter axion particles are produced. This is because the required energy thresholds for axion production are lower during the earlier stages of the universe. The second contribution (B), involving the decay of cosmic strings at the QCD phase transition, accounts for about half of the overall contribution, as found in [32]. It is important to note that the specific case being discussed involves a network of strings attached to walls (referred to as a string-wall network). Not all of these strings decay immediately during the QCD phase transition. Some of these strings persist until later stages, and their contribution to the axion abundance is ignored due to the dominance of domain walls (DWs) at that later time; see Eq.(8). The remaining strings that have not yet decayed at the QCD phase transition will mostly decay along with the decaying domain walls. Some of them will decay before the DWs dominate, but the decay rate should be gradually suppressed by the DW tension. The mass of the axion particles in this scenario is higher than during the earlier phases, so fewer axion particles are produced. As a result, the estimation presented in [32], which considered the contributions from (A) and from the immediate string decay through (B), could potentially predict a higher axion abundance compared to the string-wall scenario. As shown in Fig. 15, the prediction for the observed axion DM relic density \(\Omega_{\rm a}=(0.12\pm 0.0012)h^{-2}\) lies in the white area. The BBN and CMB constraints, the scaling region, and a comparison to the early simulation work [61] are discussed in Fig. 14 and Sec. VII. Furthermore, we present condition (1) as the red solid line and condition (2) as the green dashed line in Fig. 15. The prediction of the DW-produced axion relic abundance is given in Eq.(32). The estimated contribution from cosmic strings is found to be considerably higher than the energy contribution of axions resulting from the misalignment mechanism. Additionally, when domain walls (DWs) have a sufficiently long lifetime, i.e., \(\epsilon\lesssim 10^{-3}\), their contribution can surpass that of cosmic strings. 
## Appendix B Supplemental Data In this appendix, we provide supplementary data for the following purposes: \(\bullet\) We present a simulation animation for an unbiased potential (\(\epsilon=0\)) in Fig. 16. The right-most and second-right snapshots clearly show the flattening motion of the DW. \(\bullet\) The axion kinetic energy density spectra for the benchmarks \(\epsilon=0\) and \(\epsilon=0.0012\) are given in Fig. 17 and Fig. 18, respectively. \(\bullet\) The model fits for \(\rho_{3}\) with benchmarks \(\epsilon=0\), \(\epsilon=0.0011\), \(\epsilon=0.0013\), and \(\epsilon=0.0014\) are shown in Fig. 19, Fig. 20, Fig. 21, and Fig. 22, respectively. \(\bullet\) The model fits for \(\rho_{2}\) with different benchmarks are shown in Fig. 23. \(\bullet\) Fig. 24 displays various candidate model fits for the DW velocity \(\langle\gamma v\rangle\) at \(\epsilon=0\), which corresponds to fitting the first term in Eq.(14). The extrapolated results for later times \(m_{a}t\gg 1\) are significantly influenced by the assumptions made about the data, such as when the network enters the scaling regime. In this particular study, we assumed that the network enters the scaling regime when \(A_{v}\) becomes constant in \(m_{a}t\), as shown in Fig. 2. \(\bullet\) We increase the bias parameter \(\epsilon\) from \(0.002\) to \(0.005\) to probe the limiting value of \(\epsilon\) for which the DW network still enters the scaling regime before its decay in our simulation; the results are shown in Fig. 25.
2309.10858
On-device Real-time Custom Hand Gesture Recognition
Most existing hand gesture recognition (HGR) systems are limited to a predefined set of gestures. However, users and developers often want to recognize new, unseen gestures. This is challenging due to the vast diversity of all plausible hand shapes, e.g. it is impossible for developers to include all hand gestures in a predefined list. In this paper, we present a user-friendly framework that lets users easily customize and deploy their own gesture recognition pipeline. Our framework provides a pre-trained single-hand embedding model that can be fine-tuned for custom gesture recognition. Users can perform gestures in front of a webcam to collect a small number of images per gesture. We also offer a low-code solution to train and deploy the custom gesture recognition model. This makes it easy for users with limited ML expertise to use our framework. We further provide a no-code web front-end for users without any ML expertise. This makes it even easier to build and test the end-to-end pipeline. The resulting custom HGR is then ready to be run on-device for real-time scenarios. This can be done by calling a simple function in our open-sourced model inference API, MediaPipe Tasks. This entire process only takes a few minutes.
Esha Uboweja, David Tian, Qifei Wang, Yi-Chun Kuo, Joe Zou, Lu Wang, George Sung, Matthias Grundmann
2023-09-19T18:05:14Z
http://arxiv.org/abs/2309.10858v1
# On-device Real-time Custom Hand Gesture Recognition ###### Abstract Most existing hand gesture recognition (HGR) systems are limited to a predefined set of gestures. However, users and developers often want to recognize new, unseen gestures. This is challenging due to the vast diversity of all plausible hand shapes, e.g. it is impossible for developers to include all hand gestures in a predefined list. In this paper, we present a user-friendly framework that lets users easily customize and deploy their own gesture recognition pipeline. Our framework provides a pre-trained single-hand embedding model that can be fine-tuned for custom gesture recognition. Users can perform gestures in front of a webcam to collect a small number of images per gesture. We also offer a low-code solution to train and deploy the custom gesture recognition model. This makes it easy for users with limited ML expertise to use our framework. We further provide a no-code web front-end for users without any ML expertise. This makes it even easier to build and test the end-to-end pipeline. The resulting custom HGR is then ready to be run on-device for real-time scenarios. This can be done by calling a simple function in our open-sourced model inference API, _MediaPipe Tasks_. This entire process only takes a few minutes. ## 1 Introduction Hand gesture recognition (HGR) plays a pivotal role in enabling natural and intuitive human-computer interactions, such as in augmented reality (AR), virtual reality (VR), video conferencing and remote control applications. As these technologies evolve, the ability to accurately detect, interpret and respond to hand gestures is key to creating immersive user experiences without disruption. We present an innovative approach to train accurate and robust HGR models with limited training data. Our approach uses a pre-trained model that has been trained on a large dataset of videos of people fingerspelling words in sign language. We then fine-tune the weights of this pre-trained model for custom gesture classification (see Figure 1). Figure 1: Our custom hand gesture recognition system enables any user without ML expertise to use a small number of images per gesture class for training and immediately use the model for real-time on-device inference. Here we show how our solution extracts the hand landmarks of each hand to compute a 128-dimensional embedding vector which is used for custom gesture classification. (The landmarks in this figure are best viewed digitally.) This approach has two main benefits: 1. We are able to train an accurate model with a relatively small amount of training data, as few as \(50\) images per gesture. 2. The pre-trained model captures information about a wider range of hand shapes and movements, including transition states that are harder to capture with still images. Our HGR inference pipeline works as follows: 1. An RGB camera captures an image. 2. The HGR extracts the \(3\)D skeletal key points (or landmarks) and the handedness (left, right) of each hand from the input image. 3. The landmarks and handedness information are supplied to the newly trained custom gesture recognition model for inference. Our HGR runs in real time at 30+ FPS (frames per second) on mainstream mobile devices. ## 2 Architecture We use the work presented in "On-device Real-Time Hand Gesture Recognition" [1] as the starting point for building a system for custom hand gesture recognition. As shown in Figure 1, our solution uses a model that extracts hand landmarks and runs in real-time [2]. 
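As an illustration of how the end-to-end pipeline described above is consumed from application code, a minimal Python sketch against the public _MediaPipe Tasks_ API might look as follows; the model-asset and image file names are placeholders, and the option names reflect the open-sourced API at the time of writing:

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load a trained gesture recognizer bundle (placeholder path to a .task asset).
base_options = python.BaseOptions(model_asset_path="gesture_recognizer.task")
options = vision.GestureRecognizerOptions(base_options=base_options, num_hands=2)
recognizer = vision.GestureRecognizer.create_from_options(options)

# Run recognition on a single RGB image (placeholder file name).
image = mp.Image.create_from_file("hand.jpg")
result = recognizer.recognize(image)

# Each detected hand carries a ranked list of gesture categories.
for gestures in result.gestures:
    top = gestures[0]
    print(top.category_name, top.score)
```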
To train our _word-level fingerspelling_ model, we use an in-house collected dataset of \(79K\) videos of \(21K\) unique fingerspelled words. In each video, a subject fingerspells a word using either the left or the right hand. During training we discard frames that don't contain any hands. We use normalized hand landmarks after processing the input videos using the hand landmark model [2]. As shown in Figure 2, the _word-level fingerspelling_ model extracts embedding vectors from each hand's landmarks in each video frame. Since each frame contains only one of the _left_ or _right_ hands, the model extracts a _single-hand embedding_ for each hand and adds the two embedding vectors. Since addition is commutative, the model is invariant to the order of the two embedding vectors. The embedding vector of a single video frame contains structural piece-wise skeletal information. All per-frame embedding vectors along with hand location information are sent to a lightweight bidirectional LSTM [3][4] to predict character-level sequences of the fingerspelled word. Using a Connectionist Temporal Classification (CTC) loss [5] for training the _word-level fingerspelling_ model in Figure 2, we are able to guide the _single-hand embedding_ sub-model to extract discriminative features that capture the subtle differences in a wide range of real-world hand configurations. We are thus able to use the weights of the _single-hand embedding_ model for training a custom gesture recognition model with minimal training data via transfer learning [6]. ## Custom Hand Gesture Model We propose that the weights of the pre-trained _single-hand embedding_ model represent essential features that are useful for custom gesture recognition. By fine-tuning the weights of the pre-trained embedding model and the custom hand gesture model head, we observe that our model can recognize gestures accurately. This approach significantly reduces the number of images required for training. Figure 3 shows the model architecture of the custom hand gesture recognition model with the _single-hand embedding_ model as its feature extractor. ## 3 Results In Figure 4, we report the results of training a custom gesture recognition model by fine-tuning the weights of the _single-hand embedding_ model (shown in Figure 3). Figure 3: Custom hand gesture model. This model classifies the input hand data into one of \(N+1\) classes (\(N\) gestures and \(1\) _background_ class). Figure 2: Model architecture used for training the _word-level fingerspelling_ model and the _single-hand embedding_ sub-model. We used an in-house dataset of \(8\) classes, with \(7\) gesture classes and \(1\) _background_ class. Samples that could not be labeled as any of the \(7\) gesture classes were labeled as the _background_ class. To explore how much data is required to train the custom gesture recognition model, we conducted trials with varying values of the average number of training samples per gesture, \(K\). We used the following values of \(K:10,20,50,100,200,500\). For example, when \(K=20\), we train a model with \(20\) positive and negative samples of each of the \(N\) gesture classes, _i.e_. the total number of samples used for training was \(N\times 20\) (\(140\) for \(7\) gesture classes). The negative samples are labeled as the _background_ class. During inference, an input hand shape can be labeled as one of the \(8\) classes. We report the performance on the \(7\) gesture classes. 
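To make the transfer-learning setup concrete, the following Keras-style sketch attaches a small classification head to the pre-trained embedding model and toggles between the fine-tuned and frozen variants; the dropout rate and compile settings are illustrative assumptions rather than the exact production configuration:

```python
import tensorflow as tf

def build_custom_gesture_model(embedding_model: tf.keras.Model,
                               num_gestures: int,
                               freeze_embedding: bool = False) -> tf.keras.Model:
    """Attach a dense head to the pre-trained single-hand embedding model.

    embedding_model is assumed to map flattened hand-landmark features to a
    128-d vector; the head classifies into num_gestures + 1 classes, where the
    extra class is 'background'.
    """
    embedding_model.trainable = not freeze_embedding  # frozen vs. fine-tuned variants
    inputs = tf.keras.Input(shape=embedding_model.input_shape[1:])
    x = embedding_model(inputs)                       # 128-d gesture embedding
    x = tf.keras.layers.Dropout(0.2)(x)               # illustrative regularization
    outputs = tf.keras.layers.Dense(num_gestures + 1, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```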
To account for the performance of the _background_ class in our results, we use _specificity_ and _sensitivity_: \[\mathrm{Specificity} =\frac{\mathrm{True\ Negatives}}{\mathrm{True\ Negatives}+ \mathrm{False\ Positives}}\] \[\mathrm{Sensitivity} =\frac{\mathrm{True\ Positives}}{\mathrm{True\ Positives}+ \mathrm{False\ Negatives}}\] _True negatives_ account for samples that are correctly labeled as the _background_ class. Similarly, _false positives_ account for samples that belong to the _background_ class but are incorrectly labeled as one of the gesture classes. To concisely represent our results, we combine _sensitivity_ and _specificity_ into one metric, namely the \(\mathrm{SS\ F_{1}score}\) which is the harmonic mean of these two metrics: \[\mathrm{SS\ F_{1}score}=\frac{2\times\mathrm{Sensitivity}\times\mathrm{Specificity}}{ \mathrm{Sensitivity}+\mathrm{Specificity}}\] Most of our models achieve \(\mathrm{SS\ F_{1}score}\) values close to \(1.0\). So we present results in Figure 4 as \[\mathrm{complementary\ SS\ F_{1}score}=1-\mathrm{SS\ F_{1}score}\] This allows us to measure the model's performance by focusing on misclassification errors. To explore the effectiveness of the _fine-tuned embedding_ model for custom gesture classification, we conducted an ablation study on the model's weights. The models we trained for the study have the same architecture as the _fine-tuned embedding_ custom gesture model. We defined two experiments: 1. _Random initial weights_: The initial weights of all layers are randomized, so the model trains on raw hand landmark data from scratch. 2. _Frozen embedding_: The weights of the _single-hand embedding_ model layers are frozen. Only the weights of the classification head are updated during training. We report the results of \(K\)-shot gesture classification for these models in Figure 4. All models perform reasonably well when the value of \(K\) is high, _i.e_. \(K=500\). For very small values of \(K\), _i.e_. \(K=10\) and \(K=20\), all models perform poorly. Note that the model with random initial weights performs comparatively well for these values of \(K\), but the _complementary_ \(SS\ F_{1}score\) is still unacceptably high, well above \(10\%\). For values of \(K=50\) and above, we observe that the _complementary_ \(SS\ F_{1}score\) is lower than \(10\%\), steadily decreasing for higher values of \(K\). The _fine-tuned embedding_ model outperforms the other two models at \(K=50\), \(K=100\) and \(K=200\). These results demonstrate the advantage of fine-tuning a pre-trained _single-hand embedding_ model instead of training a model with random initial weights to recognize hand gestures from raw hand landmarks. ## 4 Hand Landmark Detection Improvements When two hands are very close to each other or occlude each other, the landmark model fails to accurately extract all hand landmarks for both hands. This failure cascades to the gesture recognition system that relies on accurate landmark detection to correctly infer the gesture depicted by a hand shape. In Figure 5 for example, we can see that the baseline hand landmark model is unable to extract landmarks of the right hand in panels (a) and (b). To improve landmark accuracy when two hands are near each other or are overlapping with each other, we experimented with providing a handedness hint to the hand landmark model during training and inference. This guides the model to extract the landmarks of the hand with the same handedness as the input handedness hint. 
In Figure 5, we can see that the new model extracts both the left and the right hand's landmarks when given the correct handedness hint. Figure 4: _Complementary \(SS\ F_{1}score\) for K-shot gesture classification._ Quantitatively, on an in-house dataset of \(3,310\) images where hands are near or overlapping each other, the new model has a Mean Normalized Absolute Error (MNAE) of \(13.09\) compared to the baseline model's MNAE of \(13.89\). This improvement enables the custom hand gesture recognition pipeline to perform well when multiple hands are present in the input image or video. ## 5 Implementation ### Training Pipeline We developed a low-code training pipeline called _MediaPipe Model Maker_[7] that enables users to effortlessly train new hand gesture recognition models. In the pipeline, the custom gesture recognition model is defined as a set of dense layers as shown in Figure 6. This model maps the gesture embedding vectors generated by the pre-trained _single-hand embedding_ model to the target labels of the input images. To train the custom gesture recognition model, users need to supply a small set of images. Each image should be annotated with a hand gesture label. All input images are pre-processed by the hand landmark model to generate hand landmarks on the fly during model training. Our training pipeline allows the users to customize the neural network attributes, such as the dense layer shapes, and the training hyperparameters, such as learning rate, batch size, and training epochs. Because of the low training data requirements, each training session only takes a few minutes on most local computers and on Google's public Colab [8] runtime to produce accurate gesture recognition models. The trained custom gesture model is then converted to the TFLite [9] model format for our end-to-end inference pipeline introduced below. ### Inference Pipeline The gesture recognition inference pipeline has been implemented as a modular structure, as shown in Figure 6. The pipeline consumes a raw hand image sequence as input and processes all images sequentially. The hand landmark detection module converts the input images into landmark vectors. The gesture embedding module further maps the landmark vectors to 128-dimensional gesture embedding vectors. The gesture recognition module outputs the probability of each label. This modular graph structure allows users to control or replace any module as desired. Our benchmarks show that this end-to-end pipeline achieves real-time performance (\(16.76\ ms\) per frame) on Pixel \(6\) devices. Our inference pipeline _MediaPipe Tasks_[10] offers a user-friendly API that supports multiple platforms, including Java, Python, and Web (JavaScript). This API allows users to easily integrate their customized gesture recognition model into the pipeline. Both the training and inference pipeline have been open-sourced via _MediaPipe Model Maker_[7] and the _Gesture Recognizer API_ in _MediaPipe Tasks_[10]. ## 6 Conclusion In conclusion, our research presents an easy-to-use approach to train accurate custom hand gesture recognition models with just a small set of training examples by fine-tuning pre-trained embeddings of hand landmarks. We also present our improvements to the hand landmark model which enhance the effectiveness of our hand gesture recognition system. 
These findings underscore the practicality of our custom hand gesture recognition system in real-world scenarios and pave the way for better human-computer interactions in various domains, such as virtual reality, augmented reality, video conferencing and remote control applications.
2306.00216
A new estimate of the transfinite diameter of Bernstein sets
Let $K \subset \mathbb{C}^n$ be a compact set satisfying the following Bernstein inequality: for any $m \in \{ 1,..., n\}$ and for any $n$-variate polynomial $P$ of degree $\mbox{deg}(P)$ we have \begin{align*} \max_{z\in K}\left|\frac{\partial P}{\partial z_m}(z)\right| \le M\ \mbox{deg}(P) \max_{z\in K}|P(z)| \ \mbox{ for } z = (z_1, \dots, z_n). \end{align*} for some constant $M= M(K)>0$ depending only on $K$. We show that the transfinite diameter of $K$, denoted $\delta(K)$, verifies the following lower estimate \begin{align*} \delta(K) \ge \frac{1}{n M}, \end{align*} which is optimal in the one-dimensional case. In addition, we show that if $K$ is a Cartesian product of compact planar sets then \begin{align*} \delta(K) \ge \frac{1}{M}. \end{align*}
Dimitri Jordan Kenne
2023-05-31T22:23:01Z
http://arxiv.org/abs/2306.00216v2
# A new estimate of the transfinite diameter of Bernstein sets ###### Abstract. Let \(K\subset\mathbb{C}^{n}\) be a compact set satisfying the following Bernstein inequality: for any \(m\in\{1,...,n\}\) and for any \(n\)-variate polynomial \(P\) of degree \(\deg(P)\) we have \[\max_{z\in K}\left|\frac{\partial P}{\partial z_{m}}(z)\right|\leq M\deg(P) \max_{z\in K}|P(z)|\ \ \text{for}\ z=(z_{1},\ldots,z_{n})\] for some constant \(M=M(K)>0\) depending only on \(K\). We show that the transfinite diameter of \(K\), denoted \(\delta(K)\), verifies the following lower estimate \[\delta(K)\geq\frac{1}{nM},\] which is optimal in the one-dimensional case. In addition, we show that if \(K\) is a Cartesian product of compact planar sets then \[\delta(K)\geq\frac{1}{M}.\] Key words and phrases:Transfinite diameter, Bernstein inequality, Markov inequality 2000 Mathematics Subject Classification: Primary 41A17, 31C15, 32U15 ## 1. Introduction Sets verifying a Markov inequality are of particular interest in approximation and pluripotential theories. In [8], Plesniak posed the question of the continuity of the Green pluricomplex function for these sets. In particular, he wanted to know whether they are nonpluripolar or not, which is equivalent to determining whether the transfinite diameter is positive or zero (see [6]). A subclass of these Markov sets, known as Bernstein sets, has been proven by Siciak in [9] to be nonpluripolar. Some positive lower bounds for the transfinite diameter of Bernstein sets were proven first by Bialas-Ciez and Jedrzejowski in [2] and later by Yazici in [11], thereby emphasizing their nonpluripolarity. In this paper, we present a better lower estimate of the transfinite diameter of Bernstein sets which is optimal in the one-dimensional case. We consider \(\mathcal{P}_{d}(\mathbb{C}^{n})\), the space of \(n\)-variate polynomials of total degree at most \(d\). A compact set \(K\subset\mathbb{C}^{n}\) is a **Markov set** if it satisfies the following Markov inequality: for each \(m\in\{1,\ldots,n\}\) and for every polynomial \(P\in\mathcal{P}_{d}(\mathbb{C}^{n})\) \[\max_{z\in K}\left|\frac{\partial P}{\partial z_{m}}(z)\right|\leq Md^{r}\max _{z\in K}|P(z)|\ \ \ \text{for}\ z=(z_{1},\ldots,z_{n}) \tag{1}\] for some constants \(M=M(K)>0\), \(r=r(K)>0\) which depend only on \(K\). If (1) is verified with \(r=1\) then \(K\) is called a **Bernstein set**. For instance, the closed unit disc \(D(0,1)=\{z\in\mathbb{C}:\ |z|\leq 1\}\) is a well-known Bernstein set as it satisfies the inequality \[\max_{z\in D(0,1)}|P^{\prime}(z)|\leq d\max_{z\in D(0,1)}|P(z)|\] for all polynomials \(P\in\mathcal{P}_{d}(\mathbb{C})\). More generally, any finite union of \(\mathcal{C}^{2}\)-smooth Jordan curves is also a Bernstein set (see [7]). Examples of Bernstein sets with an infinite number of connected components have been constructed in [10]. The Bernstein property is strongly related to the smoothness of the Green pluricomplex function. Indeed, Siciak shows in [9] that a set is Bernstein (or verifies a Bernstein inequality) if and only if its Green pluricomplex function is Hölder continuous (with exponent \(\mu=1\)). It follows from this equivalence that Bernstein sets are nonpluripolar. As far as we know, the generalisation of this result to all Markov sets remains an open question (for \(n\geq 2\)). In the preliminary Section 2, we give the definition of the transfinite diameter of a compact set \(K\subset\mathbb{C}^{n}\); throughout this paper we denote it by \(\delta(K)\). 
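As a quick numerical illustration of the classical unit-disc example above, the following sketch (ours, not from the paper) samples random polynomials and checks \(\max_{|z|\leq 1}|P'(z)|\leq d\,\max_{|z|\leq 1}|P(z)|\) on a finite grid of the boundary circle; by the maximum principle both maxima are attained on \(|z|=1\), and a small relative tolerance absorbs the grid discretization:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
circle = np.exp(1j * theta)  # sample points on |z| = 1

for _ in range(200):
    d = int(rng.integers(1, 12))                              # degree of the test polynomial
    coeffs = rng.normal(size=d + 1) + 1j * rng.normal(size=d + 1)
    max_p = np.max(np.abs(np.polyval(coeffs, circle)))        # approx. max of |P| on the circle
    max_dp = np.max(np.abs(np.polyval(np.polyder(coeffs), circle)))
    assert max_dp <= d * max_p * (1 + 1e-4)                   # tolerance for grid error
print("Bernstein inequality verified on all random samples")
```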
Now, we recall the lower estimates of the transfinite diameter of Bernstein sets that have been proven so far. In [1], Bialas-Ciez established that the transfinite diameter of any Markov set \(K\subset\mathbb{C}\) verifies the following estimate \[\delta(K)>\frac{1}{M}\sigma^{r}(\operatorname{diam}(K))^{\frac{1}{3}} \tag{2}\] where \(\sigma\) is an absolute positive constant and \(\operatorname{diam}(K)\) is the diameter of \(K\). As a consequence, all planar Markov sets are nonpolar. Later, Bialas-Ciez and Jedrzejowski proved in [2] that any Bernstein set \(K\subset\mathbb{C}^{n}\) satisfies \[\delta(K)>\frac{1}{M2^{n-1}}. \tag{3}\] Recently in [11], Yazici improved the previous lower estimate for the transfinite diameter of any Bernstein set \(K\subset\mathbb{C}^{n}\) for \(n>4\) by proving that \[\delta(K)>\frac{1}{enM}. \tag{4}\] In this paper, we use an idea from [4] involving generalized Leja points to obtain a better lower bound for the transfinite diameter of Bernstein sets. Our main result is the following. **Theorem 1.1**.: _Let \(K\) be a Bernstein set in \(\mathbb{C}^{n}\) of parameter \(M\). Then_ \[\delta(K)\geq\frac{1}{nM}. \tag{5}\] _Remark 1_.: Note that the estimate in (5) is optimal in the one-dimensional case since the closed disc \(D(0,R)=\{z\in\mathbb{C}:\ |z|\leq R\}\) (\(R>0\)) is a Bernstein set with \(M=1/R\) and its transfinite diameter is \(\delta(D(0,R))=R\). Moreover, the estimate in (5) is better than (4) for all natural numbers \(n\) and better than (3) for all \(n\geq 3\). It turns out that the estimate given in Theorem 1.1 can be improved for the case of a Cartesian product of planar sets. **Theorem 1.2**.: _Let \(K=K_{1}\times\cdots\times K_{n}\) be a Bernstein set in \(\mathbb{C}^{n}\) of parameter \(M\). Then_ \[\delta(K)\geq\frac{1}{M}. \tag{6}\] ## 2. Preliminaries Let \(\mathbb{N}=\{0,1,2,\dots\}\) be the set of natural numbers and let \(\alpha:\mathbb{N}\ni j\longmapsto(\alpha_{1}(j),\dots,\alpha_{n}(j))\) be the enumeration of \(\mathbb{N}^{n}\) associated with the graded lexicographical ordering which we denote by "\(\prec\)". We write \(|\beta|=\beta_{1}+\cdots+\beta_{n}\) for the length of a multi-index \(\beta=(\beta_{1},\dots,\beta_{n})\). Consider the monomials \[e_{j}(z):=z^{\alpha(j)}=z_{1}^{\alpha_{1}(j)}\cdots z_{n}^{\alpha_{n}(j)},\ \ \ j =0,1,2,\dots. \tag{7}\] The space of holomorphic polynomials of \(n\geq 1\) complex variables and of degree at most \(d\in\mathbb{N}\) is \[\mathcal{P}_{d}(\mathbb{C}^{n})=\operatorname{span}\{e_{i}:=z^{\alpha(i)}=z_{1}^ {\alpha_{1}(i)}\cdots z_{n}^{\alpha_{n}(i)};\;i\in\mathbb{N}\text{ and }\deg(e_{i})\leq d\} \tag{8}\] and its dimension is \(h_{d}:=\dim(\mathcal{P}_{d}(\mathbb{C}^{n}))={n+d\choose n}\). For any set of points \(\{\xi_{0},\dots,\xi_{k-1}\}\) of \(\mathbb{C}^{n}\), we define the generalized Vandermonde determinant \(\operatorname{vdm}(\xi_{0},\dots,\xi_{k-1})\) by: \[\operatorname{vdm}(\xi_{0},\dots,\xi_{k-1}):=\det[e_{i}(\xi_{j})]_{i,j=0,1,2, \dots,k-1} \tag{9}\] with the convention \(\operatorname{vdm}(\xi_{0}):=1\). For any \(k\in\mathbb{N}\), we consider the constants \[V_{k}:=\max_{\xi_{0},\dots,\xi_{k-1}\in K}\left|\operatorname{vdm}(\xi_{0}, \dots,\xi_{k-1})\right|,\quad l_{d}:=\sum_{i=1}^{d}i(h_{i}-h_{i-1}). \tag{10}\] The constant \(l_{d}\) is the total degree of \(\operatorname{vdm}(\xi_{0},\dots,\xi_{h_{d}-1})\) viewed as a polynomial in \(\xi_{0},\dots,\xi_{h_{d}-1}\). It is known that \(l_{d}=n{n+d\choose n+1}\). 
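The closed form \(l_{d}=n\binom{n+d}{n+1}\) is easy to confirm numerically from the definition in (10); a short sketch:

```python
from math import comb

def h(n: int, d: int) -> int:
    """Dimension of P_d(C^n): h_d = C(n+d, n)."""
    return comb(n + d, n)

def l(n: int, d: int) -> int:
    """l_d = sum_{i=1}^{d} i * (h_i - h_{i-1}), as in (10)."""
    return sum(i * (h(n, i) - h(n, i - 1)) for i in range(1, d + 1))

# Check the closed form l_d = n * C(n+d, n+1) for small n and d.
for n in range(1, 5):
    for d in range(1, 8):
        assert l(n, d) == n * comb(n + d, n + 1)
print("l_d = n * C(n+d, n+1) verified for n <= 4, d <= 7")
```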
The **transfinite diameter** of a compact set \(K\subset\mathbb{C}^{n}\) is the constant \[\delta(K)=\limsup_{d\to+\infty}\delta_{d}(K), \tag{11}\] where \(\delta_{d}(K)=V_{h_{d}}^{1/l_{d}}\) is called the \(d\)-th order transfinite diameter of \(K\). Fekete proved in [3] that the limit \(\delta(K)\) exists for any compact set \(K\subset\mathbb{C}\) (i.e. when \(n=1\)). Later in [5], Leja introduced the name "transfinite diameter" and thus posed the problem of its existence when \(n\geq 2\). A positive answer to his problem was given later by Zaharjuta in [12]. The transfinite diameter of a compact set can also be determined using Leja sequences. A Leja sequence for a compact set \(K\subset\mathbb{C}^{n}\) is a sequence \((\xi_{j})_{j\geq 0}\) such that \(\xi_{0}\) is any arbitrary point (preferably on \(\partial K\)) and for each \(N\geq 1\) \[|\operatorname{vdm}(\xi_{0},\dots,\xi_{N-1},\xi_{N})|=\sup_{z\in K}| \operatorname{vdm}(\xi_{0},\dots,\xi_{N-1},z)|. \tag{12}\] It is proved in [4] that, for any compact set \(K\subset\mathbb{C}^{n}\), \[\delta(K)=\lim_{d\to+\infty}|\operatorname{vdm}(\xi_{0},\dots,\xi_{h_{d}-1})| ^{1/l_{d}}. \tag{13}\] We consider the following mappings \[\begin{array}{llll}\partial_{i}:&\mathbb{N}^{n}&\longrightarrow&\mathbb{N} ^{n}\\ &\beta&\longmapsto&\partial_{i}\beta=\left\{\begin{array}{ll}(0,\dots,0)& \text{if }\beta_{i}=0\\ (\beta_{1},\dots,\beta_{i-1},\beta_{i}-1,\beta_{i+1},\dots,\beta_{n})&\text{ if }\beta_{i}\geq 1\end{array},\right.\end{array} \tag{14}\] \(i=1,\dots,n\). Observe that \(\frac{\partial z^{\beta}}{\partial z_{m}}=\beta_{m}z^{\partial_{m}(\beta)}\) for any multi-index \(\beta\in\mathbb{N}^{n}\) and for any \(m=1,\dots,n\). The following lemma is a direct consequence of the fact that the graded lexicographical order is translation invariant. **Lemma 2.1**.: _Let \(\beta=(\beta_{1},\dots,\beta_{n})\) and \(\gamma=(\gamma_{1},\dots,\gamma_{n})\) be two multi-indices in \(\mathbb{N}^{n}\). If \(\beta\prec\gamma\) then for any \(i\in\{1,\dots,n\}\) such that \(\gamma_{i}\geq 1\) we have \(\partial_{i}\beta\prec\partial_{i}\gamma\) or \(\partial_{i}\beta=\partial_{i}\gamma=(0,\dots,0)\)._ Proof.: Let \(\beta\prec\gamma\) and let \(i\in\{1,\dots,n\}\) such that \(\gamma_{i}\geq 1\). We distinguish two cases. * If \(\beta_{i}=0\) then \(\partial_{i}\beta=(0,\dots,0)\). Hence, \(\partial_{i}\beta=\partial_{i}\gamma=(0,\dots,0)\) or \(\partial_{i}\beta\prec\partial_{i}\gamma\). * Suppose that \(\beta_{i}\geq 1\). We have \(\partial_{i}\beta=(\beta_{1},\dots,\beta_{i-1},\beta_{i}-1,\beta_{i+1},\dots, \beta_{n})\) and \(\partial_{i}\gamma=(\gamma_{1},\dots,\gamma_{i-1},\gamma_{i}-1,\gamma_{i+1}, \dots,\gamma_{n})\). There are two cases to consider: * If \(|\beta|<|\gamma|\) then \(|\partial_{i}\beta|=|\beta|-1<|\gamma|-1=|\partial_{i}\gamma|\). It follows directly from the definition of the graded lexicographical order that \(\partial_{i}\beta\prec\partial_{i}\gamma\). If \(|\beta|=|\gamma|\) then we also have \(\partial_{i}\beta\prec\partial_{i}\gamma\) since the lexicographical order is translation invariant and by hypothesis \(\beta\prec\gamma\). ## 3. Proofs of Theorem 1.1 and Theorem 1.2 We begin by establishing two lemmas (Lemma 3.1 and Lemma 3.2) which are obtained by using the Markov property over the following class of polynomials \[\mathcal{P}^{i}=\{P_{i}(z)=e_{i}(z)+\sum_{0\leq j<i}c_{j}e_{j}(z):\ c_{j}\in \mathbb{C}\}\quad\text{for }i\in\mathbb{N}. 
\tag{15}\] It is essentially due to these results that we can obtain a better estimate of the transfinite diameter of Bernstein sets. **Lemma 3.1**.: _Let \(K\) be a Markov set in \(\mathbb{C}^{n}\) of parameters \((M,r)\). For every polynomial \(P_{i}(z)=e_{i}(z)+\sum_{0\leq j<i}c_{j}e_{j}(z)\in\mathcal{P}^{i}\), we have_ \[\max_{z\in K}|D^{\alpha(i)}P_{i}(z)|\leq M^{|\alpha(i)|}(|\alpha(i)|!)^{r}\ \max_{z\in K}|P_{i}(z)| \tag{16}\] _where \(D^{\alpha}P:=\frac{\partial^{|\alpha|}P}{\partial z_{1}^{\alpha_{1}}\cdots \partial z_{n}^{\alpha_{n}}}\)._ Proof.: We give a proof by induction on \(i\in\mathbb{N}\). For \(i=0\) the inequality (16) is easily verified since we have \(D^{\alpha(0)}P_{0}=D^{(0,\dots,0)}(1)=1\). Inductive step: fix \(k\geq 1\) and suppose that (16) is true for all \(0\leq i\leq k-1\). Let us show that it is also verified at the rank \(i=k\). Let \(d\geq 0\) be such that \(h_{d}\leq k<h_{d+1}\). Since \(|\alpha(k)|\geq 1\), we can select a certain \(m\in\{1,\dots,n\}\) for which \(\alpha_{m}(k)\) is positive. Let \(k_{0}\in\{0,\dots,k-1\}\) be such that \[\alpha(k_{0})=\partial_{m}\alpha(k)=(\alpha_{1}(k),\dots,\alpha_{m-1}(k), \alpha_{m}(k)-1,\alpha_{m+1}(k),\dots,\alpha_{n}(k)).\] We have \(D^{\alpha(k)}P_{k}=D^{\alpha(k_{0})}\left(\frac{\partial P_{k}}{\partial z_{m }}\right)\). Moreover, Lemma 2.1 guarantees that we can write \[\frac{\partial P_{k}}{\partial z_{m}}=\alpha_{m}(k)z^{\alpha(k_{0})}+\sum_{0 \leq j<k_{0}}b_{j}e_{j}=\alpha_{m}(k)\left(e_{k_{0}}+\sum_{0\leq j<k_{0}}\tilde{c}_ {j}e_{j}\right)\] for some constants \(b_{j},\tilde{c}_{j}\in\mathbb{C}\) (possibly equal to zero), \(j=0,\dots,k_{0}-1\). By setting \(Q_{k_{0}}=e_{k_{0}}+\sum_{0\leq j<k_{0}}\tilde{c}_{j}e_{j}\), it follows that \(D^{\alpha(k)}P_{k}=\alpha_{m}(k)D^{\alpha(k_{0})}Q_{k_{0}}\). Using the induction hypothesis with the polynomial \(Q_{k_{0}}\), we deduce that \[\max_{z\in K}|D^{\alpha(k)}P_{k}(z)| =\alpha_{m}(k)\max_{z\in K}|D^{\alpha(k_{0})}Q_{k_{0}}(z)|\leq \alpha_{m}(k)\ M^{|\alpha(k_{0})|}(|\alpha(k_{0})|!)^{r}\ \max_{z\in K}|Q_{k_{0}}(z)|\] \[\leq\left(\alpha_{m}(k)\ M^{|\alpha(k_{0})|}(|\alpha(k_{0})|!)^{r }\right)\left(\frac{1}{\alpha_{m}(k)}\max_{z\in K}\left|\frac{\partial P_{k}} {\partial z_{m}}(z)\right|\right)\] \[=M^{|\alpha(k_{0})|}(|\alpha(k_{0})|!)^{r}\ \max_{z\in K}\left|\frac{ \partial P_{k}}{\partial z_{m}}(z)\right|.\] Moreover, from the definition of a Markov set we have \[\max_{z\in K}\left|\frac{\partial P_{k}}{\partial z_{m}}(z)\right|\leq M| \alpha(k)|^{r}\max_{z\in K}|P_{k}(z)|.\] Therefore, \[\max_{z\in K}|D^{\alpha(k)}P_{k}(z)|\leq M^{|\alpha(k_{0})|+1}|\alpha(k)|^{r} (|\alpha(k_{0})|!)^{r}\ \max_{z\in K}|P_{k}(z)|\leq M^{|\alpha(k)|}(|\alpha(k)|!)^{r}\ \max_{z\in K}|P_{k}(z)|\] since \(|\alpha(k)|=|\alpha(k_{0})|+1\). Thus (16) is true for all \(i\geq 0\). The following lemma provides a better version of Inequality (16) for the case of Cartesian product sets. **Lemma 3.2**.: _Let \(K=K_{1}\times\cdots\times K_{n}\) be a Markov set in \(\mathbb{C}^{n}\) of parameters \((M,r)\). For every polynomial \(P_{i}(z)=e_{i}(z)+\sum_{0\leq j<i}c_{j}e_{j}(z)\in\mathcal{P}^{i}\), we have_ \[\max_{z\in K}|D^{\alpha(i)}P_{i}(z)|\leq M^{|\alpha(i)|}(\alpha(i)!)^{r}\ \max_{z\in K}|P_{i}(z)|, \tag{17}\] _where \(\alpha!=\alpha_{1}!\cdots\alpha_{n}!\)._ Proof.: The proof is similar to that of Lemma 3.1. We proceed by induction on \(i\in\mathbb{N}\). The case \(i=0\) is obvious. 
Now, fix \(k\geq 1\) and \(P_{k}\in\mathcal{P}^{k}\), and suppose that the inequality (17) is true for all \(0\leq i\leq k-1\). We can assume without loss of generality that \(\alpha_{1}(k)\geq 1\). Let us consider \(k_{0}\in\{0,\ldots,k-1\}\) such that \(\alpha(k_{0})=\partial_{1}\alpha(k)\). By setting \(Q_{k_{0}}=e_{k_{0}}+\sum_{0\leq j<k_{0}}\tilde{c}_{j}e_{j}\) so that \(D^{\alpha(k)}P_{k}=\alpha_{1}(k)D^{\alpha(k_{0})}Q_{k_{0}}\), as in the proof of Lemma 3.1, we obtain by the induction hypothesis \[\max_{z\in K}|D^{\alpha(k)}P_{k}(z)|\leq M^{|\alpha(k_{0})|}(\alpha(k_{0})!)^{ r}\ \max_{z\in K}\left|\frac{\partial P_{k}}{\partial z_{1}}(z)\right|. \tag{18}\] For fixed \((\xi_{2},\ldots,\xi_{n})\in K_{2}\times\cdots\times K_{n}\), the polynomial \(P_{k}(z_{1},\xi_{2},\ldots,\xi_{n})\) belongs to \(\mathcal{P}_{\alpha_{1}(k)}(\mathbb{C})\subset\mathcal{P}_{\alpha_{1}(k)}( \mathbb{C}^{n})\). Therefore, the Markov inequality gives \[\left|\frac{\partial P_{k}}{\partial z_{1}}(z_{1},\xi_{2},\ldots,\xi_{n}) \right|\leq M(\alpha_{1}(k))^{r}\max_{z\in K}|P_{k}(z)|\quad\text{for all }z_{1}\in K_{1}.\] Since \((\xi_{2},\ldots,\xi_{n})\in K_{2}\times\cdots\times K_{n}\) is arbitrarily chosen, it follows that \[\max_{z\in K}\left|\frac{\partial P_{k}}{\partial z_{1}}(z)\right|\leq M( \alpha_{1}(k))^{r}\max_{z\in K}|P_{k}(z)|\] and hence the inequality (18) implies \[\max_{z\in K}|D^{\alpha(k)}P_{k}(z)| \leq M^{|\alpha(k_{0})|}(\alpha(k_{0})!)^{r}\ \left(M\alpha_{1}(k)^{r}\max_{z\in K}|P_{k}(z)|\right)\] \[=M^{|\alpha(k)|}(\alpha(k)!)^{r}\ \max_{z\in K}|P_{k}(z)|\] because \(|\alpha(k_{0})|+1=|\alpha(k)|\) and \(\alpha(k_{0})!=(\alpha_{1}(k)-1)!\alpha_{2}(k)!\cdots\alpha_{n}(k)!\). _Remark 2_.: We deduce from Lemma 3.1 and Lemma 3.2 that the inequalities (16) and (17) are valid for all polynomials \(P_{i}(z)=\sum_{j=0}^{i}c_{j}e_{j}(z)\), with \(c_{j}\in\mathbb{C}\). _Remark 3_.: It is interesting to note that we have the following result: for all compact sets \(K_{j}\subset\mathbb{C}^{n_{j}}\) (\(n_{j}\in\mathbb{N}\)), \(j=1,\ldots,m\), \[K_{1}\times\cdots\times K_{m}\subset\mathbb{C}^{n}\text{ is a Markov set }\Longleftrightarrow K_{1},\ldots,K_{m}\ \text{ are all Markov sets.} \tag{19}\] Indeed, if \(K_{1}\times\cdots\times K_{m}\) is a Markov set of parameters \((M,r)\) then using the fact that for each \(d\in\mathbb{N}\) and for each \(j=1,\ldots,m\), \(\mathcal{P}_{d}(\mathbb{C}^{n_{j}})\subset\mathcal{P}_{d}(\mathbb{C}^{n})\), we can easily deduce that \(K_{1},\ldots,K_{m}\) are also Markov sets of parameters \((M,r)\). On the other hand, if we suppose that \(K_{1},\ldots,K_{m}\) are Markov sets with respective parameters \((M_{1},r_{1}),\ldots,(M_{m},r_{m})\), it is straightforward to see that \(K_{1}\times\cdots\times K_{m}\) is a Markov set of parameters \((\max_{j=1,\ldots,m}M_{j},\max_{j=1,\ldots,m}r_{j})\). Now we can prove Theorem 1.1 and Theorem 1.2. Proof of Theorem 1.1.: Let \(K\) be a Bernstein set in \(\mathbb{C}^{n}\) and let \((\xi_{i})_{i\geq 0}\) be a Leja sequence for \(K\). Then \(K\), being a Bernstein set, is necessarily determining for the space of polynomials, i.e. \(P\in\bigcup_{d\geq 0}\mathcal{P}_{d}(\mathbb{C}^{n})\) and \(P\equiv 0\) on \(K\) imply \(P\equiv 0\) in \(\mathbb{C}^{n}\). 
Therefore, all Leja sequences for \(K\) are unisolvent, i.e. \[\textsc{vdm}(\xi_{0},\dots,\xi_{i-1})\neq 0\quad\text{for all }i\geq 1.\] Set \[P_{i}(\xi):=\frac{\textsc{vdm}(\xi_{0},\dots,\xi_{i-1},\xi)}{\textsc{vdm}(\xi _{0},\dots,\xi_{i-1})}\quad\text{ for all }i\geq 1.\] By expanding the Vandermonde determinant in the numerator of \(P_{i}\) along its last column, we obtain for each \(i\geq 1\) \[P_{i}(\xi)=e_{i}(\xi)+\sum_{0\leq j<i}c_{j}e_{j}(\xi)\] for some constants \(c_{j}\in\mathbb{C}\). Now observe that \(\textsc{vdm}(\xi_{0},\dots,\xi_{i-1},\xi_{i})=\prod_{j=1}^{i}P_{j}(\xi_{j})\), \(i\geq 1\). Hence, using the definition of a Leja sequence and then Lemma 3.1 we obtain \[|\textsc{vdm}(\xi_{0},\dots,\xi_{h_{d}-1})| =\left|\prod_{i=1}^{h_{d}-1}P_{i}(\xi_{i})\right|=\prod_{i=1}^{h_ {d}-1}\max_{z\in K}|P_{i}(z)|\] \[\geq\prod_{i=1}^{h_{d}-1}\left[\frac{1}{M^{|\alpha(i)|}|\alpha(i) |!}\middle\|D^{\alpha(i)}P_{i}\right\|_{K}\right]=\prod_{i=1}^{h_{d}-1}\left[ \frac{\alpha(i)!}{M^{|\alpha(i)|}|\alpha(i)|!}\right]\] \[\geq\prod_{i=1}^{h_{d}-1}\left[\frac{1}{(nM)^{|\alpha(i)|}} \right]=\frac{1}{(nM)^{l_{d}}}.\] The last inequality is due to the facts that \(D^{\alpha(i)}P_{i}=\alpha(i)!\), that \(\alpha(i)!\geq\frac{1}{n^{|\alpha(i)|}}|\alpha(i)|!\) by the multinomial theorem, and that \(l_{d}=\sum_{i=1}^{h_{d}-1}|\alpha(i)|\). It follows that \[\delta(K)=\lim_{d\to+\infty}|\textsc{vdm}(\xi_{0},\dots,\xi_{h_{d}-1})|^{1/l_ {d}}\geq\frac{1}{nM},\] which is the desired estimate. Proof of Theorem 1.2.: The proof is the same as that of Theorem 1.1, but with Lemma 3.2 in place of Lemma 3.1. ## Acknowledgments I wish to thank Professor Leokadia Bialas-Ciez for her valuable comments on this work. Funding: This work was partially supported by the National Science Center, Poland, grant Preludium Bis 1 N\({}^{\text{o}}\) 2019/35/O/ST1/02245
2309.10680
On $3$-generated axial algebras of Jordan type $\frac{1}{2}$
Axial algebras of Jordan type $\eta$ are a special type of commutative non-associative algebras. They are generated by idempotents whose adjoint operators have the minimal polynomial dividing $(x-1)x(x-\eta)$, where $\eta$ is a fixed value that is not equal to $0$ or $1$. These algebras have restrictive multiplication rules that generalize the Peirce decomposition for idempotents in Jordan algebras. A universal $3$-generated algebra of Jordan type $\frac{1}{2}$ as an algebra with $4$ parameters was constructed by I. Gorshkov and A. Staroletov. Depending on the value of the parameter, the universal algebra may contain a non-trivial form radical. In this paper, we describe all semisimple $3$-generated algebras of Jordan type $\frac{1}{2}$ over a quadratically closed field.
Ravil Bildanov, Ilya Gorshkov
2023-09-19T15:04:55Z
http://arxiv.org/abs/2309.10680v5
# On \(3\)-generated axial algebras of Jordan type \(1/2\) Ravil Bildanov, Ilya Gorshkov _Abstract: Axial algebras of Jordan type \(\eta\) are a special type of commutative non-associative algebras. They are generated by idempotents whose adjoint operators have the minimal polynomial dividing \((x-1)x(x-\eta)\), where \(\eta\) is a fixed value that is not equal to \(0\) or \(1\). These algebras have restrictive multiplication rules that generalize the Peirce decomposition for idempotents in Jordan algebras._ _A universal \(3\)-generated algebra of Jordan type \(\frac{1}{2}\) as an algebra with \(4\) parameters was constructed by I. Gorshkov and A. Staroletov. Depending on the value of the parameter, the universal algebra may contain a non-trivial form radical. In this paper, we describe all semisimple \(3\)-generated algebras of Jordan type \(\frac{1}{2}\) over a quadratically closed field._ _MSC code: 20D60_ _Keywords: axial algebras, Jordan type algebras._ ## Introduction Axial algebras of Jordan type \(\eta\) were introduced by Hall, Rehren, and Shpectorov [2] within the framework of the general theory of axial algebras. These algebras are commutative non-associative algebras over a field \(\mathbb{F}\), generated by special idempotents known as primitive axes. While Jordan algebras generated by primitive idempotents are an example of Jordan type \(\frac{1}{2}\) algebras, not all algebras of this type are Jordan algebras. The Matsuo algebras, constructed from groups of \(3\)-transpositions, are an example of such algebras. It was proved in [2] (with a correction in [3]) that for \(\eta\neq 1/2\), algebras of Jordan type \(\eta\) are Matsuo algebras or their quotient algebras. Therefore, the case \(\eta=1/2\) is special for algebras of Jordan type, and for this \(\eta\), they are called Jordan type half algebras. The class of Matsuo algebras was introduced by Matsuo [6] and later generalized in [2]. Jordan type half algebras are not exhausted by Matsuo algebras and their quotient algebras. Moreover, the quotient algebras of Matsuo algebras do not contain all Jordan algebras generated by primitive idempotents. For example, the \(27\)-dimensional Albert algebra is generated by \(4\) primitive idempotents and hence is a Jordan type half algebra but not a Matsuo algebra [5]. A universal \(3\)-generated algebra \(A(\alpha,\beta,\gamma,\psi)\) of Jordan type half was constructed in [1], where \(\alpha,\beta,\gamma,\psi\) are parameters. It is proved there that if \((\alpha+\beta+\gamma-2\psi-1)(\alpha\beta\gamma-\psi^{2})\neq 0\) and \(\psi^{2}-\alpha\beta\gamma\) is a square in \(\mathbb{F}\), then \(A(\alpha,\beta,\gamma,\psi)\) is isomorphic to the matrix algebra \(M_{3}^{+}(\mathbb{F})\) of \(3\times 3\) matrices with Jordan multiplication. Otherwise, the algebra \(A(\alpha,\beta,\gamma,\psi)\) is not simple. A Frobenius form \((\cdot,\cdot)\) on \(A\) is a nonzero symmetric bilinear form that associates with the multiplication in \(A\), i.e., for all \(a,b,c\in A\) we have \((ab,c)=(ac,b)\). Hall, Rehren, and Shpectorov [2] showed that for Jordan type algebras there exists a unique Frobenius form with the property \((a,a)=1\) for any primitive axis \(a\). Let \(A\) be an algebra with a Frobenius form \((\cdot,\cdot)\). The radical of the form \((\cdot,\cdot)\) is the ideal \(R(A)\) consisting of all elements \(x\) such that \((x,a)=(a,x)=0\) for any element \(a\in A\). 
The purpose of this article is to describe all \(3\)-generated algebras of Jordan type half with trivial radical over a quadratically closed field. ## 1 Preliminary results We consider commutative non-associative algebras over a ground field \(\mathbb{F}\) of characteristic not two. For definitions, we almost always follow [2] and [4]. Denote by \(L\langle X\rangle\) the linear span of the set \(X\) over \(\mathbb{F}\), and by \(\langle\langle X\rangle\rangle\) the algebra generated by the set \(X\). **Definition 1**.: _Given \(a\in A\) and \(\lambda\in\mathbb{F}\), consider the subspace \(A_{\lambda}(a)=\{u\in A\ |\ au=\lambda u\}\)._ Obviously, \(A_{\lambda}(a)\) is an eigenspace of the operator \(ad_{a}:x\to ax\), associated with \(\lambda\in\mathbb{F}\). **Definition 2**.: _An idempotent \(a\in A\) is said to be primitive if \(\dim(A_{1}(a))=1\)._ **Definition 3**.: _An algebra \(A\) is an algebra of Jordan type half if \(A\) is generated by a set of primitive idempotents \(X\) such that for every \(x\in X\), we have a decomposition \(A=A_{0}(x)\oplus A_{1}(x)\oplus A_{1/2}(x)\) with the following fusion (multiplication) rules:_ \[A_{0}(x)A_{1/2}(x)\subseteq A_{1/2}(x),A_{1}(x)A_{1/2}(x)\subseteq A_{1/2}(x),A_{0}(x)A_{1}(x)\subseteq\{0\}\] \[A_{0}^{2}(x)\subseteq A_{0}(x),A_{1}^{2}(x)\subseteq A_{1}(x),A_{1/2}^{2}(x) \subseteq A_{0}(x)\oplus A_{1}(x).\] Such idempotents are called axes. By an \(n\)-generated algebra we mean an algebra generated by \(n\) primitive axes. Let us introduce some classes of simple Jordan algebras. **Definition 4**.: _Denote by \(M_{n}^{+}(\mathbb{F})\) the matrix algebra \(M_{n}(\mathbb{F})\) with Jordan product \(A\circ B=\frac{1}{2}(AB+BA)\)._ **Definition 5**.: _If \(j\) is an involution of \(M_{n}(\mathbb{F})\), then define the Hermitian Jordan algebra \(H(M_{n}(\mathbb{F}),j)\) as \(\{A\in M_{n}^{+}(\mathbb{F})\ |\ j(A)=A\}\)._ **Definition 6**.: _Define the Jordan form algebra \(JForm_{n}(\mathbb{F})\) on \(\mathbb{F}\oplus V\) over an arbitrary field \(\mathbb{F}\) and vector space \(V\) of dimension \(n\) over \(\mathbb{F}\) with bilinear form \(\phi\), with the product_ \[(a\oplus\mathbf{v})\bullet(b\oplus\mathbf{w})=(ab+\phi(\mathbf{v},\mathbf{w} ))\oplus(a\mathbf{w}+b\mathbf{v}),\text{ where }a,b\in\mathbb{F}\text{ and }\mathbf{v},\mathbf{w}\in V.\] It is well known that \(M_{n}^{+}(\mathbb{F})\), \(H_{n}^{+}(\mathbb{F})\), \(JForm_{n}(\mathbb{F})\) for \(n\geq 2\) are simple Jordan algebras generated by primitive idempotents, so they are algebras of Jordan type half. **Lemma 1**.: _[_3_, Theorem 4.1.]_ _Every algebra of Jordan type \(\eta\) admits a unique Frobenius form which satisfies the property \((a,a)=1\) for all axes \(a\in X\)._ **Lemma 2**.: _[_2_, Proposition 2.7.]_ _The radical of the Frobenius form \(R(A)\) coincides with the largest ideal of \(A\) containing no axes from \(A\)._ **Lemma 3**.: _Let \(A\) be an algebra of Jordan type \(\eta\). Then for all \(a,b\in A\) and their images \(\overline{a},\overline{b}\in A/R(A)\), \((a,b)=(\overline{a},\overline{b})\)._ Proof.: Let \(a=\overline{a}+r_{a},b=\overline{b}+r_{b}\), where \(\overline{a},\overline{b}\in A/R(A),r_{a},r_{b}\in R(A)\). Then \((a,b)=(\overline{a}+r_{a},\overline{b}+r_{b})=(\overline{a},\overline{b})+( \overline{a},r_{b})+(\overline{b},r_{a})+(r_{a},r_{b})=(\overline{a},\overline {b})\). **Lemma 4**.: _[_4_, Lemma 1.]_ _Let \(A\) be a finitely generated algebra of Jordan type half, let \(a,b\) be axes, and let \(\alpha=(a,b)\). Then we have the following equalities:_ 1. 
\(a_{0}^{2}(b)=(1-\alpha)a_{0}(b)\)_;_ 2. \(a_{1/2}^{2}(b)=\alpha a_{0}(b)+(\alpha-\alpha^{2})a\)_;_ 3. \(a_{0}(b)a_{1/2}(b)=\frac{1}{2}(1-\alpha)a_{1/2}(b)\)_._ **Lemma 5**.: _Let \(A=\langle\langle a,b\rangle\rangle\) be a \(2\)-generated algebra of Jordan type half. Then one of the following holds:_ 1. \(\dim(A)=1\)_,_ \((a,b)=1\)_,_ \(a=b\)_;_ 2. \(\dim(A)=2\)_,_ \((a,b)=0\)_,_ \(A\cong\mathbb{F}\oplus\mathbb{F}\)_;_ 3. \(\dim(A)=2\)_,_ \((a,b)=1\)_,_ \(\dim(R(A))=1\)_;_ 4. \(\dim(A)=3\)_,_ \((a,b)=0\)_,_ \(\dim(R(A))=1\)_,_ \(A/R(A)\cong\mathbb{F}\oplus\mathbb{F}\)_;_ 5. \(\dim(A)=3\)_,_ \((a,b)=1\)_,_ \(\dim(R(A))=2\)_;_ 6. \(\dim(A)=3\)_,_ \((a,b)\neq 0,1\)_, and_ \(A\) _is a Matsuo algebra. In particular, it is a simple Jordan algebra isomorphic to_ \(JForm_{2}(\mathbb{F})\)_._ Proof.: The assertion of the lemma is a simple consequence of [3, Proposition 1]. **Lemma 6**.: _[_4_, Corollary 1]_ _Let \(A\) be a \(2\)-generated algebra of Jordan type half with generating axes \(a\) and \(b\). Denote \(\alpha=(a,b)\). Then we have_ 1. \(a(ab)=\frac{1}{2}(\alpha a+ab)\)_;_ 2. \((ab)b=\frac{1}{2}(\alpha b+ab)\)_;_ 3. \((ab)(ab)=\frac{\alpha}{4}(a+b+2ab)\)_._ **Lemma 7**.: _[_1_, Main theorem]_ _There exists a \(3\)-generated \(9\)-dimensional algebra \(A(\alpha,\beta,\gamma,\psi)\) such that each \(3\)-generated algebra of Jordan type half is a quotient algebra of this algebra for suitable values of the parameters._ Let \(A=\langle\langle a,b,c\rangle\rangle\), \(\dim(A)=9\), \(\alpha=(a,b)\), \(\beta=(b,c)\), \(\gamma=(a,c)\), \(\psi=(ab,c)\). In Table 1 below (which is similar to [1, Table 6] up to renumbering of rows), we present all possible relations on \(\alpha,\beta,\gamma,\psi\) for \(A(\alpha,\beta,\gamma,\psi)\) not to be simple. ## 2 Quotient algebras In this section we describe \(3\)-generated algebras of Jordan type half over a quadratically closed field \(\mathbb{F}\) with trivial radical and prove the following theorem. **Theorem 1**.: _Let \(A\) be a \(3\)-generated algebra of Jordan type half with trivial radical over a quadratically closed ground field \(\mathbb{F}\) of characteristic not equal to two or three. Then \(A\) is isomorphic to one of the following algebras:_ 1. \(\mathbb{F}^{n},n\in\{1,2,3\}\)_;_ 2. 
\(JForm_{2}(\mathbb{F})\)_;_ \begin{table} \begin{tabular}{|c|c|c|c|} \hline Number & Relations & \(\dim(A/R(A))\) & Basis of the radical \\ \hline \(1\) & \(\psi=\alpha=\beta=\gamma=1\) & \(1\) & \(b-a,c-a,ab-a,bc-a,ac-a,\) \(a(bc)-a,b(ac)-a,c(ab)-a\) \\ \hline \(2\) & \(\psi=\alpha=\beta=0,\gamma=1\) & \(2\) & \(c-a,ab,bc,ac-a,\) \(a(bc),b(ac),c(ab)\) \\ \hline \(3\) & \(\psi=\alpha=\beta=\gamma=0\) & \(3\) & \(ab,bc,ac,a(bc),b(ac),c(ab)\) \\ \hline \(4\) & \(\psi=\alpha=0\), \(\beta,\gamma\neq 0\), \(\beta+\gamma=1\) & \(3\) & \(ab,\) \(\frac{1}{2}\gamma a-\frac{1}{2}\beta b-\frac{1}{2}c+bc,\) \(-\frac{1}{2}\gamma a+\frac{1}{2}\beta b-\frac{1}{2}c+ac,\) \(\frac{1}{4}\gamma a+\frac{1}{4}\beta b-\frac{1}{4}c+a(bc),\) \(\frac{1}{4}\gamma a+\frac{1}{4}\beta b-\frac{1}{4}c+b(ac),\) \(c(ab)\) \\ \hline \(5\) & \(\alpha\beta\gamma=\psi^{2},\psi\neq 0,\alpha\neq 1\), \(\alpha+\beta+\gamma=2\psi+1\) & \(3\) & \(\alpha(\beta-1)a+\alpha(\gamma-1)b+\cdots+(2\alpha-2\psi)ab,\) \((\alpha\beta-\psi)b+(\psi-\alpha)\cdots+(\alpha^{2}-\alpha)ab,\) \((\alpha\gamma-\psi)a+(\psi-\alpha)\cdots+(\alpha^{2}-\alpha)ac,\) \((\alpha\psi-\alpha^{2}\beta)a+(\alpha+\psi-\alpha^{2}-\alpha)ab+2\alpha(\alpha-1)a(bc),\) \(\alpha(\psi-\alpha)\gamma b+(\alpha+\psi-\alpha^{2}-\alpha)ab+2\alpha(\alpha-1)b(ac),\) \((\psi-\alpha)a+(\psi-\alpha)\gamma b+(1-\alpha)ab+2(\alpha-1)c(ab)\) \\ \hline \(6\) & \(\psi=\alpha=\beta=0,\gamma\neq 0,1\) & \(4\) & \(ab,bc,a(bc),b(ac),c(ab)\) \\ \hline \(7\) & \(\psi^{2}\neq\alpha\beta\gamma\), \(\alpha+\beta+\gamma=2\psi+1\) & \(4\) & \\ \hline \end{tabular} \caption{Relations on \(\alpha,\beta,\gamma,\psi\) for which \(A(\alpha,\beta,\gamma,\psi)\) is not simple.} \end{table} 3. \(\mathbb{F}\oplus JForm_{2}(\mathbb{F})\)_;_ 4. \(M_{2}^{+}(\mathbb{F})\)_;_ 5. \(H(M_{3}(\mathbb{F}),j)\) _with_ \(j(X)=X^{T}\)_;_ 6. \(M_{3}^{+}(\mathbb{F})\)_._ It follows from Lemma 7 that we need to describe the quotient algebras of the algebra \(A(\alpha,\beta,\gamma,\psi)\) by its radical. We use the description of the algebra \(A(\alpha,\beta,\gamma,\psi)\) from [1, Theorem 2]. Following [1], denote \(\alpha=(a,b),\beta=(b,c),\gamma=(a,c),\psi=(ab,c)\). In [1, Table 6], one can find the dimensions and bases of the radicals of the algebra \(A(\alpha,\beta,\gamma,\psi)\). Denote by \(A_{i}\) the universal \(9\)-dimensional algebra \(A(\alpha_{i},\beta_{i},\gamma_{i},\psi_{i})\) with parameters and numeration from Table 1, by \(R_{i}\) the radical of this algebra, and by \(S_{i}\) the quotient algebra \(A_{i}/R_{i}\). We begin with two trivial propositions for \(1\)-dimensional and \(2\)-dimensional algebras, which are not generated by three linearly independent axes. **Proposition 1**.: _If \(A\) is a \(1\)-dimensional algebra of Jordan type half with trivial radical, then \(A\cong S_{1}\)._ Proof.: It is easy to see that \(S_{1}\cong\mathbb{F}\). We have that \(A\) is \(1\)-dimensional, so \(\dim L\langle a,b,c\rangle=1\) and \(a=b=c\). Hence \(A\cong\mathbb{F}\cong S_{1}\). **Proposition 2**.: _If \(A\) is a \(2\)-dimensional \(3\)-generated algebra of Jordan type half with trivial radical, then \(A\cong\mathbb{F}\oplus\mathbb{F}\cong S_{2}\)._ Proof.: By Lemma 5, there is only one \(2\)-dimensional algebra of Jordan type half with trivial radical, so \(A\cong\mathbb{F}\oplus\mathbb{F}\cong S_{2}\). **Proposition 3**.: _If \(A\) is a \(3\)-dimensional \(3\)-generated algebra of Jordan type half with trivial radical, then \(A\) is isomorphic to either \(S_{3}\) or \(S_{5}\)._ Proof.: Assume that \(A\) is generated by axes \(a\) and \(b\). 
From Lemma 5, it follows that there is only one \(3\)-dimensional \(2\)-generated algebra of Jordan type half with trivial radical. In this case, we can choose any other axis of the algebra \(A\) as the axis \(c\). Put \(c=a^{\tau_{b}}=a-4ab+4\alpha b\). We have \(\beta=\alpha\), \(\gamma=(1-2\alpha)^{2}\) and \(\psi=\alpha(2\alpha-1)\). Therefore \(\alpha\beta\gamma=\psi^{2}\), \(\psi\neq 0\), \(\alpha\neq 1\), \(\alpha+\beta+\gamma=2\psi+1\). So in this case \(A\simeq S_{5}\).

Assume now that \(A\) is not generated by \(2\) axes. Then, since \(\dim(A)=3\), the algebra \(A\) is the linear span of the axes \(a\), \(b\), and \(c\). Assume that \(ab\notin L\langle a,b\rangle\). Then \(\dim\langle\langle a,b\rangle\rangle=3\), so \(A=L\langle a,b,ab\rangle=\langle\langle a,b\rangle\rangle\) is generated by two axes, which is a contradiction. Therefore \(ab\in L\langle a,b\rangle\), and similarly \(ac\in L\langle a,c\rangle\) and \(bc\in L\langle b,c\rangle\). In particular, we have \(\dim(\langle\langle a,b\rangle\rangle)=\dim(\langle\langle a,c\rangle\rangle)=\dim(\langle\langle c,b\rangle\rangle)=2\). From Lemma 5 it follows that \(\{(a,b),(a,c),(b,c)\}\subseteq\{0,1\}\). Moreover, if \((a,b)=0\), then \(\langle\langle a,b\rangle\rangle\simeq\mathbb{F}\oplus\mathbb{F}\). Therefore, if \((a,b)=(a,c)=(b,c)=0\), then \(A\simeq\mathbb{F}\oplus\mathbb{F}\oplus\mathbb{F}\) and \(\psi=0\). In this case the Gram matrix of the algebra \(A\) is the identity matrix, and hence the radical of \(A\) is trivial. We conclude that in this case \(A\simeq S_{3}\).

Assume that \((a,c)\neq 0\), so that \((a,c)=1\). In this case, \(R(\langle\langle a,c\rangle\rangle)\) is not trivial and contains the element \(a-c\). Assume that \((a,b)=(b,c)=0\). Then \((a-c,b)=0\), and consequently \(a-c\in R(A)\), which is a contradiction. Therefore, without loss of generality, we can assume that \((b,c)=1\). If \((a,b)=1\), then \((a-c,b)=0\) and consequently \(a-c\in R(A)\), which is a contradiction. Therefore \((a,b)=0\). From the description of \(2\)-generated algebras of Jordan type half we have \(ab=0\), \(a=c+a_{h}\), \(b=c+b_{h}\), where \(a_{h},b_{h}\in A_{1/2}(c)\). Therefore \(0=ab=(c+a_{h})(c+b_{h})=c+1/2(a_{h}+b_{h})+a_{h}b_{h}\), where \(c+a_{h}b_{h}\in A_{0+1}(c)\) and \(a_{h}+b_{h}\in A_{1/2}(c)\). Therefore \(a_{h}+b_{h}=0\). In particular, \(b=a^{\tau_{c}}\) and \(\dim(A)=2\), which contradicts \(\dim(A)=3\).

**Proposition 4**.: _Algebras \(S_{4}\) and \(S_{5}\) are isomorphic._

Proof.: We first show that \(S_{4}=\langle\langle a,c\rangle\rangle\). Put \(S=\langle\langle a,c\rangle\rangle\). We have \(S=S_{0}(a)+S_{1}(a)+S_{1/2}(a)\) and \(c=c_{0}(a)+\gamma a+c_{1/2}(a)\), where \(c_{0}(a)\in S_{0}(a)\) and \(c_{1/2}(a)\in S_{1/2}(a)\). If \(c_{1/2}=0\), then \(c\) is not a primitive idempotent, so \(c_{1/2}\neq 0\). Assume that \(c_{0}=0\). Then \(\gamma a+c_{1/2}=c=c^{2}=\gamma^{2}a+\gamma c_{1/2}+c_{1/2}^{2}\). Hence \(\gamma=1\), and from the definition of \(S_{4}\) it follows that \(\beta=0\). In this case \((a-c,b)=(a,b)-(c,b)=0\), so \(a-c\in R(S_{4})\), which is a contradiction. Therefore \(c_{0}\neq 0\), the elements \(a\), \(c_{0}\) and \(c_{1/2}\) are linearly independent, and \(\dim(S)=3\). Thus \(S_{4}=S\). Hence \(S_{4}\) is generated by two axes and is isomorphic to \(S_{5}\). It is known that such an algebra is isomorphic to \(JForm_{2}(\mathbb{F})\).

**Proposition 5**.: _If \(A\) is a \(4\)-dimensional \(3\)-generated algebra of Jordan type half with trivial radical, then one of the following assertions holds:_ 1. \(A\simeq S_{6}\simeq\mathbb{F}\oplus JForm_{2}(\mathbb{F})\)_;_ 2. 
\(A\simeq S_{7}\simeq M_{2}^{+}(\mathbb{F})\)_._

Proof.: The algebra \(M_{2}^{+}(\mathbb{F})\) is a simple Jordan algebra, while the algebra \(\mathbb{F}\oplus JForm_{2}(\mathbb{F})\) contains non-trivial ideals. Therefore \(M_{2}^{+}(\mathbb{F})\not\simeq\mathbb{F}\oplus JForm_{2}(\mathbb{F})\). Hence, to prove this proposition, it suffices to show that \(S_{6}\simeq\mathbb{F}\oplus JForm_{2}(\mathbb{F})\) and \(S_{7}\simeq M_{2}^{+}(\mathbb{F})\).

**Lemma 8**.: \(S_{6}\) _is isomorphic to \(\mathbb{F}\oplus JForm_{2}(\mathbb{F})\)._

Proof.: Let \(\langle\langle a,b,c\rangle\rangle\simeq S_{6}\). We have \((a,c)\notin\{0,1\}\), therefore \(\langle\langle a,c\rangle\rangle\) is isomorphic to \(JForm_{2}(\mathbb{F})\). From Table 1, it follows that the radical of \(A(0,0,\gamma,0)\) contains \(ab\) and \(bc\). Therefore \(ab=bc=0\) and \(S_{6}\simeq\langle\langle a,c\rangle\rangle\oplus\langle\langle b\rangle\rangle\simeq\mathbb{F}\oplus JForm_{2}(\mathbb{F})\).

**Lemma 9**.: \(S_{7}\) _is isomorphic to \(M_{2}^{+}(\mathbb{F})\)._

Proof.: Let \[A=\left(\begin{array}{cc}1&\lambda_{a}\\ 0&0\end{array}\right),\quad B=\left(\begin{array}{cc}1&0\\ \lambda_{b}&0\end{array}\right),\quad C=\left(\begin{array}{cc}1-\lambda_{c}&1\\ \lambda_{c}(1-\lambda_{c})&\lambda_{c}\end{array}\right),\] where \(\lambda_{a},\lambda_{b},\lambda_{c}\in\mathbb{F}\setminus\{0\}\). Consider the map \(f:S_{7}\to M_{2}^{+}(\mathbb{F})\), \(f(a)=A\), \(f(b)=B\), \(f(c)=C\). It is easy to see that \(\dim L\langle A,B,C,A\circ B\rangle=4\), so \(\langle\langle A,B,C\rangle\rangle=M_{2}^{+}(\mathbb{F})\). The map \((\cdot,\cdot):M_{2}^{+}(\mathbb{F})^{2}\to\mathbb{F}\) defined by \((X,Y)=tr(XY)=tr(X\circ Y)\), where \(X,Y\in M_{2}^{+}(\mathbb{F})\), is a symmetric bilinear form on \(M_{2}^{+}(\mathbb{F})\), and it is associative with respect to the product \(\circ\). Clearly, we have \(tr(A\circ A)=tr(B\circ B)=tr(C\circ C)=1\). Furthermore, we see that \(tr(A\circ B)=1+\lambda_{a}\lambda_{b}=\alpha\), \(tr(B\circ C)=1-\lambda_{c}+\lambda_{b}\lambda_{c}-\lambda_{b}\lambda_{c}^{2}=\beta\), \(tr(A\circ C)=1+\lambda_{a}-\lambda_{c}=\gamma\) and \(tr(A\circ(B\circ C))=tr(B\circ(A\circ C))=tr(C\circ(A\circ B))=\psi=\frac{1}{2}(1-\alpha-\beta-\gamma)\). So we can take \[\lambda_{a}=\frac{1}{\alpha}(\psi+\alpha\gamma\pm\sqrt{\psi^{2}-\alpha\beta\gamma}),\quad\lambda_{b}=\frac{1}{\gamma(\gamma-1)}(\psi+\alpha\gamma\mp\sqrt{\psi^{2}-\alpha\beta\gamma}),\] \[\lambda_{c}=\pm\frac{1}{\alpha}(\psi+\alpha+\sqrt{\psi^{2}-\alpha\beta\gamma}),\] as a solution of these equations. Using computer calculations, we check that the multiplication table for \(f(\langle\langle a,b,c\rangle\rangle)\)1 coincides with the multiplication table for \(S_{7}\). Hence, \(f\) is an isomorphism.

Footnote 1: Computer calculations for the multiplication table in \(S_{7}\) can be found in [https://github.com/RaviBildanov/3gen-axial-algebras/blob/main/S7multiplicationtable.nb](https://github.com/RaviBildanov/3gen-axial-algebras/blob/main/S7multiplicationtable.nb), see paragraph Tables.

We also use computer calculations to check that \(R(f(\langle\langle a,b,c\rangle\rangle))=\{0\}\) and that the relations between \(\alpha,\beta,\gamma,\psi\) hold.2

Footnote 2: One can find our computer calculations here: [https://github.com/RaviBildanov/3gen-axial-algebras/blob/main/S7multiplicationtable.nb](https://github.com/RaviBildanov/3gen-axial-algebras/blob/main/S7multiplicationtable.nb), see paragraph Tables.
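The trace computations in the proof of Lemma 9 are routine but fiddly by hand. The following small SymPy sketch (our illustration, independent of the notebooks linked in the footnotes) shows how such checks can be reproduced: it takes the matrices displayed above, verifies that they are idempotents, and evaluates the pairing values \((X,Y)=tr(X\circ Y)\).

```python
import sympy as sp

la, lb, lc = sp.symbols('lambda_a lambda_b lambda_c')

# Matrices as displayed in the proof of Lemma 9.
A = sp.Matrix([[1, la], [0, 0]])
B = sp.Matrix([[1, 0], [lb, 0]])
C = sp.Matrix([[1 - lc, 1], [lc*(1 - lc), lc]])

def jordan(X, Y):
    # Product of M_2^+(F): X o Y = (XY + YX)/2.
    return (X*Y + Y*X) / 2

def pair(X, Y):
    # Associative trace form (X, Y) = tr(X o Y).
    return sp.expand(jordan(X, Y).trace())

# Each of A, B, C is an idempotent, i.e. an axis.
for X in (A, B, C):
    assert sp.expand(X*X - X) == sp.zeros(2, 2)

alpha, beta, gamma = pair(A, B), pair(B, C), pair(A, C)
psi = pair(A, jordan(B, C))
print(alpha, beta, gamma, psi, sep='\n')
```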
**Proposition 6**.: _If \(A\) is a \(6\)-dimensional \(3\)-generated algebra of Jordan type half with trivial radical, then \(A\simeq S_{8}\simeq S_{9}\simeq H(M_{3}(\mathbb{F}),j)\), where \(j(X)=X^{T}\)._

**Lemma 10**.: \(S_{8}\) _is isomorphic to \(H(M_{3}(\mathbb{F}),j)\)._

Proof.: Consider the following matrices in \(H(M_{3}(\mathbb{F}),j)\) and the map \(f:S_{8}\to H(M_{3}(\mathbb{F}),j)\), \(f(a)=A\), \(f(b)=B\), \(f(c)=C\), where \(\lambda_{b},\lambda_{c}\in\mathbb{F}\setminus\{0\}\) are parameters determined below by the conditions on \(\alpha,\beta,\gamma\) and \(\psi\): \[A=\left(\begin{array}{ccc}1&0&0\\ 0&0&0\\ 0&0&0\end{array}\right),\quad B=\left(\begin{array}{ccc}0&0&0\\ 0&\frac{1+\sqrt{1-4\lambda_{b}^{2}}}{2}&\lambda_{b}\\ 0&\lambda_{b}&\frac{1-\sqrt{1-4\lambda_{b}^{2}}}{2}\end{array}\right),\quad C=\left(\begin{array}{ccc}\frac{1+\sqrt{1-4\lambda_{c}^{2}}}{2}&0&\lambda_{c}\\ 0&0&0\\ \lambda_{c}&0&\frac{1-\sqrt{1-4\lambda_{c}^{2}}}{2}\end{array}\right).\] Below we show that the mapping \(f\) is an isomorphism between the algebras \(S_{8}\) and \(H(M_{3}(\mathbb{F}),j)\). It is easy to see that \(A^{2}=A\), \(B^{2}=B\), \(C^{2}=C\). We check that \(f(\langle\langle a,b,c\rangle\rangle)=L\langle A,B,C,A\circ C,B\circ C,A\circ(B\circ C)\rangle\) and that \(\dim L\langle A,B,C,A\circ C,B\circ C,A\circ(B\circ C)\rangle=6\). Hence \(\langle\langle A,B,C\rangle\rangle\) and \(H(M_{3}(\mathbb{F}),j)\) are isomorphic as vector spaces. The map \((\cdot,\cdot):H(M_{3}(\mathbb{F}),j)^{2}\to\mathbb{F}\) defined by \((X,Y)=tr(XY)=tr(X\circ Y)\), where \(X,Y\in H(M_{3}(\mathbb{F}),j)\), is a symmetric bilinear form on \(H(M_{3}(\mathbb{F}),j)\), and it is associative with respect to the product \(\circ\). Clearly, we have \(tr(A\circ A)=tr(B\circ B)=tr(C\circ C)=1\). Furthermore, we see that \(tr(A\circ B)=0\), \(tr(B\circ C)=\frac{1}{4}(1-\sqrt{1-4\lambda_{b}^{2}})(1-\sqrt{1-4\lambda_{c}^{2}})=\beta\), \(tr(A\circ C)=\frac{1+\sqrt{1-4\lambda_{c}^{2}}}{2}=\gamma\) and \(tr(A\circ(B\circ C))=tr(B\circ(A\circ C))=tr(C\circ(A\circ B))=0\). These equalities determine \(\lambda_{b}\) and \(\lambda_{c}\). Take the basis \(a,b,c,bc,ac,a(bc)\) for \(S_{8}\). The multiplication table for \(f(\langle\langle a,b,c\rangle\rangle)\)3 coincides with the multiplication table for \(S_{8}\).

Footnote 3: Computer calculations for the multiplication table in \(S_{8}\) can be found in [https://github.com/RaviBildanov/3gen-axial-algebras/blob/main/S8multiplicationtable.nb](https://github.com/RaviBildanov/3gen-axial-algebras/blob/main/S8multiplicationtable.nb), see paragraph Tables.

We also use computer calculations to check that \(R(f(\langle\langle a,b,c\rangle\rangle))=\{0\}\) and that the relations between \(\alpha,\beta,\gamma,\psi\) hold.4
Footnote 4: Computer calculations for this proof can be found in [https://github.com/RaviBildanov/3gen-axial-algebras](https://github.com/RaviBildanov/3gen-axial-algebras).

We can also check that the multiplication table for \(a,c,d,ac,cd,a(cd)\) coincides with the multiplication table for the standard basis \(a,b,c,bc,ac,a(bc)\) of \(S_{8}\). This means that \(S_{9}\) contains a \(6\)-dimensional subalgebra isomorphic to \(S_{8}\).5

Footnote 5: One can see our computer calculations here: [https://github.com/RavilBildanov/3gen-axial-algebras](https://github.com/RavilBildanov/3gen-axial-algebras), see section ”Isomorphism between \(S_{8}\) and \(S_{9}\)”.

\begin{table}
\begin{tabular}{|c||c|c|c|c|}
\hline \(*\) & \(a\) & \(b\) & \(c\) & \(ab\) \\
\hline \(a\) & \(a\) & \(*\) & \(*\) & \(*\) \\
\hline \(b\) & \(ab\) & \(b\) & \(*\) & \(*\) \\
\hline \(c\) & \(\frac{1}{2(\alpha-1)}((\gamma-\alpha)a+(\gamma-1)b+(1-\alpha)c+2(1-\gamma)ab)\) & \(\frac{1}{2(\alpha-1)}((\beta-1)a+(\beta-\alpha)b+(1-\alpha)c+2(1-\beta)ab)\) & \(c\) & \(*\) \\
\hline \(ab\) & \(\frac{1}{2}(\alpha a+ab)\) & \(\frac{1}{2}(\alpha b+ab)\) & \(\frac{1}{2(\alpha-1)}((\psi-\alpha)a+(\psi-\alpha)b+(\alpha-\alpha^{2})c+(2-\beta-\gamma)ab)\) & \(\frac{\alpha}{4}(a+b+2ab)\) \\
\hline
\end{tabular}
\end{table} Table 2: Multiplication table for \(S_{7}\)

\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|}
\hline \(*\) & \(a\) & \(b\) & \(c\) & \(bc\) & \(ac\) & \(a(bc)\) \\
\hline \(a\) & \(a\) & \(*\) & \(*\) & \(*\) & \(*\) & \(*\) \\
\hline \(b\) & \(0\) & \(b\) & \(*\) & \(*\) & \(*\) & \(*\) \\
\hline \(c\) & \(ac\) & \(bc\) & \(c\) & \(*\) & \(*\) & \(*\) \\
\hline \(bc\) & \(a(bc)\) & \(\frac{1}{2}(\beta b+bc)\) & \(\frac{1}{2}(\beta c+bc)\) & \(\frac{\beta}{4}(b+c+2bc)\) & \(*\) & \(*\) \\
\hline \(ac\) & \(\frac{1}{2}(\gamma a+ac)\) & \(a(bc)\) & \(\frac{1}{2}(\gamma c+ac)\) & \(\frac{1}{4}bc+\frac{\beta}{4}ac+\frac{1}{2}a(bc)\) & \(\frac{\gamma}{4}(a+c+2ac)\) & \(*\) \\
\hline \(a(bc)\) & \(0\) & \(\frac{1}{4}(\beta ac+2a(bc))\) & \(\frac{1}{4}(\gamma bc+\beta ac)\) & \(\frac{\beta\gamma}{8}b+\frac{\beta}{8}ac+\frac{\beta}{4}a(bc)\) & \(\frac{\beta\gamma}{8}a+\frac{1}{8}bc+\frac{\gamma}{4}a(bc)\) & \(\frac{\beta\gamma}{16}a+\frac{1}{16}b\) \\
\hline
\end{tabular}
\end{table} Table 3: Multiplication table for \(S_{8}\)
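The stated properties of the matrices in the proof of Lemma 10 can likewise be checked mechanically. Here is a small SymPy sketch (our illustration, not the authors' notebook) verifying that \(A\), \(B\), \(C\) are symmetric idempotents and reproducing the four trace values quoted in that proof.

```python
import sympy as sp

lb, lc = sp.symbols('lambda_b lambda_c')
sb = sp.sqrt(1 - 4*lb**2)
sc = sp.sqrt(1 - 4*lc**2)

# Matrices from the proof of Lemma 10.
A = sp.diag(1, 0, 0)
B = sp.Matrix([[0, 0, 0],
               [0, (1 + sb)/2, lb],
               [0, lb, (1 - sb)/2]])
C = sp.Matrix([[(1 + sc)/2, 0, lc],
               [0, 0, 0],
               [lc, 0, (1 - sc)/2]])

def jordan(X, Y):
    # Jordan product of H(M_3(F), j): X o Y = (XY + YX)/2.
    return (X*Y + Y*X) / 2

# A, B, C are symmetric idempotents, hence axes in H(M_3(F), j).
for X in (A, B, C):
    assert X.T == X
    assert sp.expand(X*X - X) == sp.zeros(3, 3)

# The four pairing values quoted in the proof.
print(sp.expand(jordan(A, B).trace()))             # 0
print(sp.simplify(jordan(B, C).trace()))           # (1-sb)*(1-sc)/4 = beta
print(sp.expand(jordan(A, C).trace()))             # (1+sc)/2 = gamma
print(sp.expand(jordan(A, jordan(B, C)).trace()))  # 0
```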
2301.13434
DESI and DECaLS (D&D): galaxy-galaxy lensing measurements with 1% survey and its forecast
The shear measurement from DECaLS (Dark Energy Camera Legacy Survey) provides an excellent opportunity for galaxy-galaxy lensing study with DESI (Dark Energy Spectroscopic Instrument) galaxies, given the large ($\sim 9000$ deg$^2$) sky overlap. We explore this potential by combining the DESI 1\% survey and DECaLS DR8. With $\sim 106$ deg$^2$ sky overlap, we achieve significant detection of galaxy-galaxy lensing for BGS and LRG as lenses. Scaled to the full BGS sample, we expect the statistical errors to improve from $18(12)\%$ to a promising level of $2(1.3)\%$ at $\theta>8^{'}(<8^{'})$. This brings stronger requirements for future systematics control. To fully realize such potential, we need to control the residual multiplicative shear bias $|m|<0.01$ and the bias in the mean redshift $|\Delta z|<0.015$. We also expect significant detection of galaxy-galaxy lensing with DESI LRG/ELG full samples as lenses, and cosmic magnification of ELG through cross-correlation with low-redshift DECaLS shear. If such systematic error control can be achieved, we find the advantages of DECaLS, compared with KiDS (Kilo Degree Survey) and HSC (Hyper-Suprime Cam), are at low redshift, on large scales, and in measuring the shear-ratio (to $\sigma_R\sim 0.04$) and cosmic magnification.
Ji Yao, Huanyuan Shan, Pengjie Zhang, Eric Jullo, Jean-Paul Kneib, Yu Yu, Ying Zu, David Brooks, Axel de la Macorra, Peter Doel, Andreu Font-Ribera, Satya Gontcho A Gontcho, Theodore Kisner, Martin Landriau, Aaron Meisner, Ramon Miquel, Jundan Nie, Claire Poppett, Francisco Prada, Michael Schubnell, Mariana Vargas Magana, Zhimin Zhou
2023-01-31T06:10:46Z
http://arxiv.org/abs/2301.13434v1
# DESI and DECaLS (D&D): galaxy-galaxy lensing measurements with 1% survey and its forecast

###### Abstract

The shear measurement from DECaLS (Dark Energy Camera Legacy Survey) provides an excellent opportunity for galaxy-galaxy lensing study with DESI (Dark Energy Spectroscopic Instrument) galaxies, given the large (\(\sim 9000\) deg\({}^{2}\)) sky overlap. We explore this potential by combining the DESI 1% survey and DECaLS DR8. With \(\sim 106\) deg\({}^{2}\) sky overlap, we achieve significant detection of galaxy-galaxy lensing for BGS and LRG as lenses. Scaled to the full BGS sample, we expect the statistical errors to improve from 18(12)% to a promising level of 2(1.3)% at \(\theta>8\arcmin(<8\arcmin)\). This brings stronger requirements for future systematics control. To fully realize such potential, we need to control the residual multiplicative shear bias \(|m|<0.01\) and the bias in the mean redshift \(|\Delta z|<0.015\). We also expect significant detection of galaxy-galaxy lensing with DESI LRG/ELG full samples as lenses, and cosmic magnification of ELG through cross-correlation with low-redshift DECaLS shear. If such systematic error control can be achieved, we find the advantages of DECaLS, compared with KiDS (Kilo Degree Survey) and HSC (Hyper-Suprime Cam), are at low redshift, on large scales, and in measuring the shear-ratio (to \(\sigma_{R}\sim 0.04\)) and cosmic magnification.

keywords: weak lensing - cosmology - galaxy-galaxy lensing

## 1 Introduction

Weak gravitational lensing is one of the most promising cosmological probes in studying the nature of dark matter, dark energy, and gravity (Refregier, 2003; Mandelbaum, 2018). The combination of different probes can be even more powerful, owing to the increased constraining power and the breaking of degeneracies between parameters (Planck Collaboration et al., 2020; DES Collaboration et al., 2021). However, possibly due to residual systematics or new physics beyond the standard \(\Lambda\)CDM model, the tension between the CMB (cosmic microwave background) at redshift \(z\sim 1100\) and the late-time galaxy surveys at \(z\lesssim 1\) complicates the use of their synergy (Hildebrandt et al., 2017;
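The jump from 18(12)% to 2(1.3)% quoted in the abstract is essentially area scaling. A minimal sketch of the arithmetic, assuming the statistical errors scale as the inverse square root of the overlapping sky area (shot-noise dominated):

```python
import math

# Overlap areas: DESI 1% survey x DECaLS vs. the full expected overlap.
area_now, area_full = 106.0, 9000.0      # deg^2
scale = math.sqrt(area_now / area_full)  # ~0.109

for err_now in (18.0, 12.0):             # % errors at theta > 8' and < 8'
    print(f"{err_now:.0f}% -> {err_now * scale:.1f}%")
# prints: 18% -> 2.0%  and  12% -> 1.3%, matching the forecast above.
```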
2309.14579
Bounded orbits for 3 bodies in $\mathbb{R}^4$
We consider the Newtonian 3-body problem in dimension 4, and fix a value of the angular momentum which is compatible with this dimension. We show that the energy function cannot tend to its infimum on an unbounded sequence of states. Consequently the infimum of the energy is its minimum. This completes our previous work \cite{AD19} on the existence of Lyapunov stable relative periodic orbits in the 3-body problem in $\mathbb{R}^4$.
Alain Albouy, Holger R. Dullin
2023-09-25T23:52:55Z
http://arxiv.org/abs/2309.14579v2
# Critical points at infinity of the 3-body problem in \(\mathbb{R}^{4}\)

###### Abstract.

We show that critical points at infinity in the 3-body problem in \(\mathbb{R}^{4}\) do not realize the infimum of the energy. This completes our previous work [1] on the existence of Lyapunov stable relative periodic orbits in the 3-body problem in \(\mathbb{R}^{4}\).

## 1. Introduction

This work aims to continue our previous work [1] on the 3-body problem in \(\mathbb{R}^{4}\). There we started from a rather complete description of the configurations of the relative equilibria, which form three curves in the space of triangular shapes. We described their embedding in the phase space, and in particular, the three curves they draw in the energy versus angular momentum diagram. We noticed a cusp on two of the curves, and an interesting connection of the third curve with a fourth curve corresponding to the equilateral relative equilibria. We gave some rigorous results and proposed some difficult conjectures. The relative equilibria are responsible for changes of topology of the integral manifold. Smale [13] insisted that changes of topology may also occur at infinity. In this sense, our four curves in the energy versus angular momentum diagram are indicating changes of topology. They must be completed by other curves which indicate the changes of topology which occur at infinity. Different arguments allow us to infer that there are four new curves, and to propose their equations. We complete our diagrams by drawing these conjectural curves. We notice the asymptotic coincidence of three of the new curves and the first three curves of relative equilibria when the energy \(H\to-\infty\). This coincidence is easily deduced from the expansions in [12]. The fourth curve is \(H=0\). In a future work we will present some more statements and proofs about these critical points at infinity. As Richard Montgomery warned us, the proof of our most surprising claim in [1], namely _For any value of the angular momentum and the masses, there exists in the reduced problem a Lyapunov stable relative equilibrium,_ is incomplete. We proved that the energy is bounded below, and that the minimum provides such a stable equilibrium. But we did not prove that the minimum is realized. Here we fill this gap with a quite general argument which proves that a "critical point at infinity" cannot realize the infimum of the energy, since it can be approached by lower values. This general argument first proposes models for the infimum at infinity, and then proves that these models do not realize the infimum. We conjecture that the critical values at infinity of the energy are six values deduced from these models. In particular, when the energy is increased, the compact level sets surrounding the Lyapunov stable relative equilibria should lose their compactness. This should happen at one of the six values. The values are generically distinct. They form, when the angular momentum is varied, the three curves with \(H<0\) which we add to the energy-momentum diagram.

## 2. The infimum is not at infinity in the 4D 3-body problem

Consider the 3-body problem. Let \(q_{1}\), \(q_{2}\), \(q_{3}\) be the positions and \(m_{1}\), \(m_{2}\), \(m_{3}\) be the masses. The bodies move in a Euclidean vector space \(E\). Their center of mass is the origin. Here \(E=\mathbb{R}^{4}\). The vector space \(\bigwedge^{2}E\) of bivectors or of antisymmetric matrices is also Euclidean.
For a simple bivector \(\pi=a\wedge b\in\bigwedge^{2}E\), the length squared is \(|\pi|^{2}=|a|^{2}|b|^{2}-\langle a,b\rangle^{2}\). The angular momentum, expressed in terms of the two Jacobi vectors \(q=q_{2}-q_{1}\) and \(Q=q_{3}-(m_{1}q_{1}+m_{2}q_{2})/(m_{1}+m_{2})\) and two positive coefficients \[\mu=\frac{m_{1}m_{2}}{m_{1}+m_{2}},\quad\nu=\frac{m_{3}(m_{1}+m_{2})}{m_{1}+m_{2}+m_{3}},\] is \[L=q\wedge p+Q\wedge P,\quad\text{where}\quad p=\mu\dot{q},\quad P=\nu\dot{Q}.\] Recall that \(L\) is of rank 4 if and only if \(q,p,Q,P\) generate a 4-dimensional space. This means that the motion is not restricted to a 3-dimensional subspace of \(E\). The eigenvalues of \(L\) are \(\pm il_{1},\pm il_{2}\), where the two positive numbers \(l_{1}\) and \(l_{2}\) are such that in an orthonormal base \((e_{1},e_{2},e_{3},e_{4})\) we have \(L=l_{1}e_{1}\wedge e_{2}+l_{2}e_{3}\wedge e_{4}\). Let \(d_{ij}=|q_{i}-q_{j}|\). The energy is \[H=T+V,\quad\text{with}\quad T=\frac{|p|^{2}}{2\mu}+\frac{|P|^{2}}{2\nu},\quad V=-\frac{m_{1}m_{2}}{d_{12}}-\frac{m_{2}m_{3}}{d_{23}}-\frac{m_{1}m_{3}}{d_{13}}.\] In [1] we gave a short proof of:

**Proposition 1**.: _Given three positive masses \(m_{1},m_{2},m_{3}\) and a \(4\times 4\) antisymmetric matrix \(L\) of rank 4, consider the 3-body problem in \(\mathbb{R}^{4}\) with these masses, and consider in the phase space the submanifold of states with angular momentum \(L\). On this submanifold the energy \(H\) is bounded below._

We will now prove a statement which constrains the realization of the infimum of \(H\).

**Proposition 2**.: _Consider an unbounded sequence of states of given angular momentum \(L\) of rank 4. If the energy converges to its infimum \(H_{\inf}\) for this \(L\), then there is an \((i,j)\), \(1\leq i<j\leq 3\) and an \((a,b,A,B)\in E^{4}\) with \(L=a\wedge b+A\wedge B\) such that_ \[H_{\inf}=-\frac{(m_{i}m_{j})^{3}}{2|a\wedge b|^{2}(m_{i}+m_{j})}.\]

Proof.: We first prove that the velocities are bounded along the unbounded sequence. Suppose the contrary, i.e., \(T\to+\infty\) along a subsequence. Then \(T\sim H_{\inf}-V\sim-V\). Apply the map \((q,p,Q,P)\mapsto(2q,p/2,2Q,P/2)\), which preserves \(L\). Then, on the image of the subsequence, the energy \(T/4+V/2\sim V/4\) tends to \(-\infty\), which contradicts the finite infimum \(H_{\inf}\). So, the velocities are bounded and the configuration is unbounded. As \(H_{\inf}<0\), according to an easy general lemma ([1], Lemma 1), we can extract a subsequence such that two positions \(q_{1}\) and \(q_{2}\) (up to renumbering) tend to a limit with respect to their center of mass, and their velocities relative to this center of mass tend to a limit. The distance \(|Q|\) of the third body \(q_{3}\) tends to infinity. Consider any state of the subsequence. There are two easy ways to produce another state with lower energy and the same angular momentum. 1. The momentum \(P\) contributes to the angular momentum. Since the contribution is \(Q\wedge P\), the component of \(P\) along \(Q\) does not contribute to the angular momentum. We decrease this component. The energy decreases. We do this for each state of the subsequence. As the angular momentum is finite and \(Q\) is infinite, we get a new sequence of states where the velocity \(P\) tends to zero. Then the third body does not contribute to the energy in the limit. 2. 
Standard considerations about the 2-body problem show that without changing the contribution \(q\wedge p\) to the angular momentum, we can decrease the energy by choosing a \((p,q)\) generating a circular motion of the binary. Then the energy is \[H=-\frac{(m_{1}m_{2})^{3}}{2|q\wedge p|^{2}(m_{1}+m_{2})}.\] Taking into account the renumbering, this is the announced \(H_{\inf}\) with \(a=q\) and \(b=p\). We will get a simple lower bound for the term \(|a\wedge b|\) in the denominator of the above formula for \(H_{\inf}\). **Lemma 3**.: _Let \(\mathcal{C}\subset\bigwedge^{2}E\) be the 5-dimensional cone whose points are the bivectors of rank 2. The Gauss map sends \(\mathcal{C}\) into itself._ Proof.: Note that \(0\) is excluded from \(\mathcal{C}\), being of rank \(0\). The points of \(\mathcal{C}\) are non-singular. Let the bivector \(\eta\) be the image of \(\pi\in\mathcal{C}\) by the Gauss map: \(\eta\) is orthogonal to the tangent plane of \(\mathcal{C}\) at \(\pi\). There is an orthogonal frame \((f_{1},f_{2},f_{3},f_{4})\) of \(E\) such that \(\pi=f_{1}\wedge f_{2}\). By varying \(f_{1}\) and \(f_{2}\) arbitrarily, we see that the tangent plane is generated by \(f_{1}\wedge f_{2}\), \(f_{1}\wedge f_{3}\), \(f_{1}\wedge f_{4}\), \(f_{2}\wedge f_{3}\), \(f_{2}\wedge f_{4}\). Then \(\eta\) is proportional to \(f_{3}\wedge f_{4}\), since \(\langle f_{3}\wedge f_{4},f_{i}\wedge f_{j}\rangle=\langle f_{3},f_{i}\rangle \langle f_{4},f_{j}\rangle-\langle f_{4},f_{i}\rangle\langle f_{3},f_{j} \rangle=0\) if \((i,j)\neq(3,4)\). Consequently \(\eta\) is of rank 2. The next Lemma is a standard particular case of a statement by Weyl ([20]) about Hermitian matrices, presented and extended in [15]. The particular case being simple, it deserves a short proof. We deduce it from Lemma 3, which improves an argument sketched in [1], page 331. **Lemma 4**.: _If a bivector \(L=a\wedge b+A\wedge B\) of rank 4 has eigenvalues \(\pm il_{1}\) and \(\pm il_{2}\) with \(0<l_{1}\leq l_{2}\), then \(l_{1}\leq|a\wedge b|\)._ Proof.: Let \(d_{L}\) be the distance from \(L=l_{1}e_{1}\wedge e_{2}+l_{2}e_{3}\wedge e_{4}\) to the cone \(\mathcal{C}\) of bivectors of rank 2. Let \(\pi\) be the point of \(\mathcal{C}\) such that \(|L-\pi|=d_{L}\). The possibility \(\pi=0\) is excluded since we would have \(d_{L}=|L|=\sqrt{l_{1}^{2}+l_{2}^{2}}\), while \(\pi=l_{2}e_{3}\wedge e_{4}\) is at the lower distance \(l_{1}\). As \(\pi\neq 0\), \(L-\pi=\eta\) is orthogonal to the tangent plane of \(\mathcal{C}\) at \(\pi\). According to Lemma 3, \(\eta\) is of rank 2. The decompositions \(L=\pi+\eta\) and \(L=l_{1}e_{1}\wedge e_{2}+l_{2}e_{3}\wedge e_{4}\) coincide if \(l_{1}\neq l_{2}\), by uniqueness of the eigenplanes. We deduce, even if \(l_{1}=l_{2}\), that \(d_{L}=l_{1}\). If now \(L=a\wedge b+A\wedge B\), as \(A\wedge B\in\mathcal{C}\), we have \(|a\wedge b|\geq d_{L}\). **Proposition 5**.: _For any \((i,j)\), \(1\leq i<j\leq 3\), for \(k=1\) or \(2\), there is an unbounded sequence \(s_{n}\) of states of given angular momentum \(L=l_{1}e_{1}\wedge e_{2}+l_{2}e_{3}\wedge e_{4}\), \(l_{1}l_{2}\neq 0\) such that the energy \(H(s_{n})\) of \(s_{n}\) satisfies_ \[\lim_{n\to\infty}H(s_{n})=H_{ijk}\quad\text{and}\quad H(s_{n})<H_{ijk},\quad \text{where }H_{ijk}=-\frac{(m_{i}m_{j})^{3}}{2l_{k}^{2}(m_{i}+m_{j})}.\] Proof.: We take for example \((i,j,k)=(1,2,1)\). 
We choose the following triangle in the plane \(\mathrm{O}e_{1}e_{3}\), with center of mass at \(\mathrm{O}=(0,0)\), and whose inertia tensor has axes \(\mathrm{O}e_{1}\) and \(\mathrm{O}e_{3}\): \[q_{1}=(\alpha m_{2},-\beta_{n}m_{3}),\quad q_{2}=(-\alpha m_{1},-\beta_{n}m_{3}),\quad q_{3}=\big{(}0,\beta_{n}(m_{1}+m_{2})\big{)}.\] We fix \(\alpha>0\) such that the binary \((q_{1},q_{2})\) has the size of the circular motion of angular momentum \(l_{1}\) and let \(\beta_{n}\to+\infty\). The velocities \((\dot{q}_{1},\dot{q}_{2},\dot{q}_{3})\) are in the plane \(\mathrm{O}e_{2}e_{4}\) orthogonal to the plane of the triangle. We define them as the sum of two components: one component gives an angular momentum \(l_{2}\) to the "large binary" consisting of \(q_{3}\) and the center of mass of \(q_{1}\) and \(q_{2}\). The other is fixed and gives to the binary \(q_{1}\), \(q_{2}\) the unique circular motion of angular momentum \(l_{1}\). The total angular momentum is \(L=l_{1}e_{1}\wedge e_{2}+l_{2}e_{3}\wedge e_{4}\). There is no other component since the rotations are around the axes of inertia. As \(n\to+\infty\), the limit of the energy is \(H_{ijk}\). While approaching the limit, we see two contributions tending to zero: a negative contribution \(-m_{1}m_{3}/d_{13}-m_{2}m_{3}/d_{23}\) to the potential energy, and a positive contribution to the kinetic energy, proportional to \(|\dot{q}_{3}|^{2}\). The negative contribution is dominant, since the positive contribution is of order \(1/d_{13}^{2}\), the angular momentum being finite. So the limit is approached from below.

**Proposition 6**.: _If along a sequence on the submanifold defined in Proposition 1 the energy function converges to its infimum \(H_{\mathrm{inf}}\), then the sequence is bounded._

Proof.: Consider the infimum \(H_{\mathrm{inf}}\) of the energy obtained in Proposition 2 by assuming that the sequence is unbounded. The formula for \(H_{\mathrm{inf}}\) and Lemma 4 prove that \(H_{ij1}\leq H_{\mathrm{inf}}\), where \(H_{ij1}\) is defined in Proposition 5. But Proposition 5 proves that there are states \(s_{n}\) such that \(H(s_{n})<H_{ij1}\). So, \(H_{\mathrm{inf}}\) is not the infimum. Contradiction.

In other words, the value \(H_{\mathrm{inf}}\) is the value of the energy at a minimum, and is not a limiting value at infinity. We may now confirm the following result, which we stated in [1] as the first part of Theorem 9.

**Theorem 7**.: _There is an \(\epsilon>0\) such that on the submanifold defined in Proposition 1 the level sets of the energy function \(H\) with value \(H\in[H_{\mathrm{inf}},H_{\mathrm{inf}}+\epsilon]\), where \(H_{\mathrm{inf}}\) is the infimum of \(H\), are nonempty and compact._

The second part of Theorem 9 in [1] states that the minima of \(H\) on the submanifold are, after reduction of the symmetry, isolated relative equilibria, which are consequently Lyapunov stable. There we used a lemma about the balanced configurations to exclude the possibility of a continuum of relative equilibria.

## 3. Critical values

In [1] we found the relation between energy and angular momentum for relative equilibria of the 3-body problem in 4D. It is natural to add to the energy-momentum diagram from [1] the curves for critical points at infinity.
Let \(H_{ij}=-m_{i}^{3}m_{j}^{3}/(2(m_{i}+m_{j}))=H_{ijk}l_{k}^{2}\) and consider the energy multiplied by the squared angular momentum to make it scaling invariant as in [1], hence \[h=H_{ij}l_{1}^{-2}(l_{1}+l_{2})^{2}.\] Let \(h\) be a function of the dimensionless momentum \(k=l_{1}l_{2}/(l_{1}+l_{2})^{2}\). Introducing the parameter \(\chi=(1+l_{2}/l_{1})^{-1}\in(0,1)\) we obtain the scaling invariant energy-momentum curves of the critical points at infinity as \[(h,k)=(H_{ij}\chi^{-2},\chi(1-\chi)).\] If we insist that \(l_{1}\leq l_{2}\) then \(\chi\in(0,1/2]\). Instead of normalising the other way we can consider \(\chi\in(0,1)\). For \(\chi\to 1\) the 3D case is recovered, in which the three critical values at infinity are all larger than the critical values of the collinear Euler and the equilateral Lagrange solutions. Considering Fig. 1 this ordering is not preserved for non-zero values of the angular momentum. At \(\chi=1/2\) the curve of critical values is tangent to the maximum values \(k=1/4\). For \(\chi\leq 1/2\) the curves of critical values of the infinite critical points may intersect those of the finite critical points and the simple ordering of the 3D case is not preserved. We now show that in the limit \(\chi\to 0\) the critical values of the three infinite critical points are asymptotic to the critical values of the three families of finite critical points, which are balanced configurations.

Figure 1. Critical values of the scaled momentum versus scaled energy (as in [1]) for finite relative equilibria (solid red, green, blue, black) and for critical points at infinity (dashed). Masses \(m_{1}=1/2,m_{2}=1/3,m_{3}=1/6\).

In [10] a relation between the limit of these balanced configurations and critical values at infinity of the 3D case has already been noticed, and with the results of the present paper this is made precise. Eliminate \(\chi\) using \(k=\chi(1-\chi)\) and choose the branch where \(\chi\to 0\). Expanding \(h=h(k)\) in the limit of small \(k\) gives \[h=-\frac{4H_{ij}}{(\sqrt{1-4k}-1)^{2}}=-\frac{H_{ij}}{k^{2}}(1-2k-k^{2}+O(k^{3})).\] In [10] the asymptotics of the three families of balanced configurations has been computed, see page 393. The series found there agrees in the first two terms so that the limiting value of \(hk^{2}\) and its first derivative agree in the limit \(k\to 0\). The sign of the difference in the 2nd order term depends on the masses. In Figure 1 solid and dashed lines with the same asymptotics have the same colour.
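A short NumPy sketch (ours, for illustration) tabulates the curves \((h,k)=(H_{ij}\chi^{-2},\chi(1-\chi))\) for the masses of Figure 1 and numerically checks the small-\(k\) behaviour of \(\chi^{-2}\) on the \(\chi\to 0\) branch, which drives the expansion displayed above:

```python
import numpy as np

# Masses of Figure 1 and the scale-invariant parametrization of the
# critical points at infinity: (h, k) = (H_ij * chi**-2, chi*(1 - chi)).
m = {1: 1/2, 2: 1/3, 3: 1/6}
H = {(i, j): -(m[i]*m[j])**3 / (2*(m[i] + m[j]))
     for (i, j) in ((1, 2), (2, 3), (1, 3))}

chi = np.linspace(1e-3, 0.999, 1000)
k = chi*(1 - chi)
curves = {pair: Hij*chi**-2 for pair, Hij in H.items()}  # h along each curve

# On the chi -> 0 branch, chi = (1 - sqrt(1 - 4k))/2, and
# chi**-2 = (1 - 2k - k**2 + O(k**3)) / k**2:
kk = 1e-3
chi0 = (1 - np.sqrt(1 - 4*kk))/2
print(chi0**-2, (1 - 2*kk - kk**2)/kk**2)  # the two values agree to O(k)
```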
2309.13216
MISFIT-V: Misaligned Image Synthesis and Fusion using Information from Thermal and Visual
Detecting humans from airborne visual and thermal imagery is a fundamental challenge for Wilderness Search-and-Rescue (WiSAR) teams, who must perform this function accurately in the face of immense pressure. The ability to fuse these two sensor modalities can potentially reduce the cognitive load on human operators and/or improve the effectiveness of computer vision object detection models. However, the fusion task is particularly challenging in the context of WiSAR due to hardware limitations and extreme environmental factors. This work presents Misaligned Image Synthesis and Fusion using Information from Thermal and Visual (MISFIT-V), a novel two-pronged unsupervised deep learning approach that utilizes a Generative Adversarial Network (GAN) and a cross-attention mechanism to capture the most relevant features from each modality. Experimental results show MISFIT-V offers enhanced robustness against misalignment and poor lighting/thermal environmental conditions compared to existing visual-thermal image fusion methods.
Aadhar Chauhan, Isaac Remy, Danny Broyles, Karen Leung
2023-09-22T23:41:24Z
http://arxiv.org/abs/2309.13216v1
# MISFIT-V: Misaligned Image Synthesis and Fusion using Information from Thermal and Visual

###### Abstract

Detecting humans from airborne visual and thermal imagery is a fundamental challenge for Wilderness Search-and-Rescue (WiSAR) teams, who must perform this function accurately in the face of immense pressure. The ability to fuse these two sensor modalities can potentially reduce the cognitive load on human operators and/or improve the effectiveness of computer vision object detection models. However, the fusion task is particularly challenging in the context of WiSAR due to hardware limitations and extreme environmental factors. This work presents Misaligned Image Synthesis and Fusion using Information from Thermal and Visual (MISFIT-V), a novel two-pronged unsupervised deep learning approach that utilizes a Generative Adversarial Network (GAN) and a cross-attention mechanism to capture the most relevant features from each modality. Experimental results show MISFIT-V offers enhanced robustness against misalignment and poor lighting/thermal environmental conditions compared to existing visual-thermal image fusion methods. The code is available at GitHub.1

Footnote 1: [https://github.com/Aadharc/Visual_Thermal_Image_Fusion.git](https://github.com/Aadharc/Visual_Thermal_Image_Fusion.git)

## 1 Introduction

Search and rescue teams worldwide have increasingly relied on uncrewed aerial vehicles (UAVs) equipped with visual and thermal imaging sensors to enhance the rescuer's ability to detect and locate lost or injured persons. Often, these operations take place in wilderness environments (i.e., wilderness search and rescue [WiSAR]) featuring hazardous and difficult-to-access terrain, and in various weather and lighting conditions which affect the quality of information obtained from visual and thermal modalities. Ultimately, these conditions affect the ability to detect the presence or absence of missing persons in the imagery. The detection task in a search and rescue mission, whether performed by human imagery analysts, UAV operators, or computer vision algorithms, is a fundamentally crucial function which can make the difference between mission success or failure. Thus, developing algorithms that can detect humans more reliably and in adverse lighting and weather conditions is of utmost importance, and multimodal image fusion is an active area of research that offers many advantages for this application.

Multi-modal image fusion is the act of combining information from multiple image modalities into a useful representation that can be more easily used for human or robotic perception-related tasks. This remains an important area of research with applications in medical diagnosis (fusing CT and MRI scans [1]), remote sensing (detecting environmental anomalies from satellite imagery [19, 21]), and the application featured in this paper: visual-thermal image fusion for enhanced human detection in UAV-aided search and rescue missions. However, achieving accurate and effective fusion between visual and thermal image modalities is particularly challenging in practice due to a variety of factors, including differences in sensor resolution, noise characteristics, and spatial and temporal misalignment, which affect the image registration process.

Figure 1: **System Conceptual Overview. Images from visual (RGB) and thermal (IR) cameras capturing the same scene are misaligned due to physical properties. MISFIT-V resolves this misalignment while emphasizing useful features from both modalities in the fused output.**

In this paper, we address the problem of fusing misaligned visual and thermal image pairs using unsupervised deep learning methods and provide qualitative and quantitative analysis of the resulting fused image quality. The proposed method, Misaligned Image Synthesis and Fusion using Information from Thermal and Visual (MISFIT-V), accounts for practical considerations of WiSAR operations and presents the following advantages:

* Image registration is not required
* Ground truth fused images are not needed during model training
* The fused output is human-interpretable
* The fused output balances visual/thermal features

### Problem Statement

The goal is to fuse a visual-thermal image pair \((I_{\mathrm{RGB}},I_{\mathrm{IR}})\in\mathbb{R}^{H\times W\times 3}\times\mathbb{R}^{H\times W\times 1}\), not necessarily aligned, into a single image \(I_{\mathrm{fus}}\in\mathbb{R}^{H\times W\times 3}\) in which features are aligned and salient features from each image are combined. Here, \(H\) and \(W\) represent the height and width of the corresponding images, and 3 and 1 are the numbers of channels in the respective images. Motivated by the practical considerations of WiSAR operations, we further make the following assumptions in our problem setup:

**The image pairs are not aligned**: Visual and thermal sensors function with different optical characteristics (e.g., field of view, resolution, and lens distortions), resulting in sets of misaligned visual and thermal images and largely uncorrelated feature sets. This makes traditional image registration/fusion techniques inadequate.

**Ground truth fused images do not exist**: Indeed, obtaining fused images is the challenge, and manual image fusion is not a viable solution to obtain ground truth images.

**Fused image must be human-interpretable**: While the end goal is to use the fused image to aid in human detection, the fused image must remain human-interpretable given the human-on-the-loop nature of WiSAR missions, i.e., human operators monitor the video feed while a human detection algorithm runs concurrently.

### Organization

The rest of this paper is organized as follows. Section 2 discusses the relevant background and related work in image fusion. Section 3 describes the architecture of MISFIT-V, and Sections 4 and 5 report the experimental results and conclusions, respectively.

## 2 Related Work

A vast number of approaches to the image fusion problem have been explored in the literature (see [12] for a review). We present a brief review of deep-learning-based fusion algorithms, which have demonstrated state-of-the-art performance in this area.

### Visual & Thermal Image Fusion

Many standard image fusion methods for thermal and visual images, such as those proposed in [2, 7, 11, 15], rely on input images that are aligned at the pixel level. This step of visual-thermal image alignment often requires precise hardware-level calibration and image processing, and misaligned features are the primary source of error for these fusion algorithms. In many practical scenarios where visual and thermal cameras are used, the inherent differences between the two sensors (resolution, field of view, lens effects, and noise characteristics) make these images misaligned.
Moreover, visual and thermal imaging sensors use distinctly different operating principles (visible spectrum light versus infrared radiation), often resulting in very little correspondence between each modality's features [14].

### Multimodal Image Fusion with Generative Adversarial Networks (GANs)

More recently, several methods ([9, 20, 22]) propose using image-to-image translation as a way to address the lack of common features between visual and thermal images. Image-to-image translation is the act of taking images from one domain and transforming them so they have the style (or characteristics) of images from another domain. These methods [18] use the Generative Adversarial Network (GAN) architecture as the backbone to translate the image from one modality to the other. However, the dataset [16] used for training the GAN architecture in these works is pre-registered and aligned at the pixel level, which can cause problems with respect to scalability. Since these methods require pixel-aligned multi-modal images, they cannot be utilized in many real-world applications where the alignment between visual and thermal sensor pairs is unknown a priori. For instance, a recent dataset, called WiSARD [4], features visual and thermal image pairs taken from a UAV's perspective with annotations for human detection in a wilderness environment; however, the images are not perfectly aligned with each other at the pixel level.

### Cross-Attention Mechanism

Natural language processing (NLP) research has made substantial advancements in recognizing the relationships between various input sequence segments with the introduction of the Transformer model and the attention mechanism; in particular, the cross-attention mechanism is commonly used in deep learning models for tasks involving multiple modalities [17]. Numerous studies have used a transformer model to successfully perform multi-modal data processing tasks, using a cross-attention module that enables the model to learn what information from one modality is relevant when analyzing features from another modality. For instance, [6] combined multi-scale image features using cross-attention, while [3] combined visual and LiDAR image features using self-attention and cross-attention.

## 3 Methodology

In this section, we describe the proposed neural architecture of MISFIT-V and motivate the loss function used to train the model.

### Proposed Architecture

Figure 2 depicts our proposed model, which is inspired by the two-discriminator designs presented in [10, 13] and leverages a Generative Adversarial Network (GAN) architecture as its backbone. The inclusion of two discriminators enables the preservation of information from both input modalities, ensuring that the distinctive features and characteristics of each modality are effectively retained. Next, we describe each component of the GAN architecture.

#### 3.1.1 Generator with Cross Attention

The generator, shown in Figure 3, is designed to produce a fused image given a visual-thermal image pair. The generator network comprises two separate Convolutional Neural Networks (CNNs) for downsampling and feature extraction from the input images. Additionally, a Cross-Attention Network [5, 17] is incorporated to capture meaningful and unique features from each modality, i.e., the "best of both worlds", considering the diverse aspects that thermal and visible images focus on in the same scene (see Appendix A).
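To make the cross-attention step concrete, the sketch below shows one minimal way such a block can be realized on flattened CNN feature maps. This is an illustrative PyTorch-style implementation under our own naming, not the authors' exact code: queries are drawn from one modality and keys/values from the other.

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Minimal cross-attention block: queries from one modality,
    keys/values from the other (names and dimensions are illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) flattened CNN feature maps
        q = self.q(feat_a)                     # queries from modality A
        k, v = self.k(feat_b), self.v(feat_b)  # keys/values from modality B
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                        # A-tokens re-weighted by B-content

# e.g. thermal features attending to visual features (and vice versa):
# fused_ir = CrossAttention(256)(ir_tokens, rgb_tokens)
```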
The utilization of cross-attention eliminates the need for explicit alignment of the images, thereby addressing the fusion of misaligned input images. The outputs of the cross-attention network are a tuple of cross-attention maps with respect to both modalities, which are multiplied with the downsampled features and fed into an upsampling CNN. Finally, the outputs from the upsampling CNNs are concatenated and fed into the U-Net, which integrates the information from both modalities to generate a more comprehensive and meaningful fused image (see Figure 3).

#### 3.1.2 Dual Discriminators

Given that no ground truth data exists, the proposed discriminator module comprises _two_ discriminator networks, one for visual and another for thermal; see Figure 2. Each discriminator takes a concatenated image comprised of the original (visual or thermal) and fused images and classifies them as either real or fake. In this way, the generator is discouraged from passing only the features from one modality through, so that it ultimately achieves a balanced set of features from each modality in the fused output.

### Loss Function

Here, we describe the loss function used to train MISFIT-V. We employ an adversarial loss function to train the discriminators and generator in order to generate high-quality fused images. The adversarial loss for the discriminator, which corresponds to a specific modality X (either thermal/IR or visual/RGB), is defined as follows:

\[\mathcal{L}_{\text{adv,X}}=-\log D_{\text{X}}(I_{\text{X}})-\log(1-D_{\text{X}}(I_{\text{fus}})), \tag{1}\]

where \(D_{\text{X}}(I_{\text{X}})\) represents the probability that \(I_{\text{X}}\) is classified as modality X, and \(D_{\text{X}}(I_{\text{fus}})\) represents the probability that \(I_{\text{fus}}\) is classified as modality X by the discriminator. The generator loss is defined as the sum of the adversarial losses from both discriminators, weighted by hyperparameters \(\lambda_{\text{IR}}\) and \(\lambda_{\text{RGB}}\), respectively:

\[\mathcal{L}_{\text{gen}}=\lambda_{\text{IR}}\cdot\mathcal{L}_{\text{adv,IR}}+\lambda_{\text{RGB}}\cdot\mathcal{L}_{\text{adv,RGB}}, \tag{2}\]

where \(\lambda_{\text{IR}}\) and \(\lambda_{\text{RGB}}\) control the relative importance of the respective losses in the overall generator loss. Here, the adversarial losses encourage the generator to create fused images that are indistinguishable from thermal images and visual images by training against two discriminators that try to classify them.

Figure 2: MISFIT-V Training Pipeline. Thermal (IR) and visual (RGB) images are fed into a generator (orange block) consisting of a cross-attention module and CNN to produce a fused image. The fused image is fed into both discriminator networks, encouraging a balanced set of features from both images.

Figure 3: MISFIT-V Generator Architecture. Input images are fed into a downsampling CNN (‘Down’) separately to retrieve their features. These features are then fed into the cross-attention network to calculate the cross-attention map between the modalities, which are multiplied with the downsampled features and concatenated to form the input to the U-Net CNN, which generates a fused image.

In addition to the adversarial losses, a Kullback-Leibler (KL) Divergence loss is used to compare the fused image generated by the generator with the original visual and thermal images in terms of their distribution.
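Before defining the KL term, the adversarial terms of eqs. (1)–(2) can be written down directly. The following is a minimal PyTorch-style sketch with our own illustrative naming and a small `eps` added for numerical stability; it is not the authors' implementation.

```python
import torch

def adv_loss(d_real_prob, d_fused_prob, eps=1e-8):
    """Eq. (1): L_adv,X = -log D_X(I_X) - log(1 - D_X(I_fus)),
    where the inputs are discriminator output probabilities in (0, 1)."""
    return -(torch.log(d_real_prob + eps)
             + torch.log(1.0 - d_fused_prob + eps)).mean()

def gen_loss(l_adv_ir, l_adv_rgb, lam_ir=1.0, lam_rgb=1.0):
    """Eq. (2): weighted sum over both modalities
    (the paper uses lambda_IR = lambda_RGB = 1)."""
    return lam_ir * l_adv_ir + lam_rgb * l_adv_rgb
```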
The KL Divergence loss is defined as:

\[\mathcal{L}_{\text{KL}}=\text{KL}(I_{\text{fus}}||I_{\text{IR}})+\text{KL}(I_{\text{fus}}||I_{\text{RGB}}), \tag{3}\]

where \(\text{KL}(I_{\text{fus}}||I_{\text{IR}})\) and \(\text{KL}(I_{\text{fus}}||I_{\text{RGB}})\) represent the KL Divergence between the fused image \(I_{\text{fus}}\) and the thermal image \(I_{\text{IR}}\), and between the fused image \(I_{\text{fus}}\) and the visual image \(I_{\text{RGB}}\), respectively. Furthermore, an L1 loss is utilized to calculate the pixel-wise differences between the fused image and each of the original images. This loss can be expressed as:

\[\mathcal{L}_{\text{L1}}=\|I_{\text{fus}}-I_{\text{IR}}\|_{1}+\|I_{\text{fus}}-I_{\text{RGB}}\|_{1}, \tag{4}\]

where \(\|\cdot\|_{1}\) denotes the L1 norm. The overall loss,

\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{gen}}+\lambda_{\text{KL}}\mathcal{L}_{\text{KL}}+\lambda_{\text{L1}}\mathcal{L}_{\text{L1}}, \tag{5}\]

is the sum of the generator loss, the KL divergence loss, and the L1 loss, weighted by hyperparameters \(\lambda_{\text{KL}}\) and \(\lambda_{\text{L1}}\) that control the relative importance of the KL divergence loss and L1 loss.

## 4 Experimental Results

### Dataset and Training Details

The model was trained using 2752 pairs of thermal and visual images from the WiSARD dataset [4], with an 80:20 split for training and validation, and a separate test dataset of 200 pairs of images was employed to evaluate the performance of the trained model. The network is trained for 20 epochs, using a learning rate of \(1\times 10^{-4}\). The hyperparameters for the training process are set as follows: \(\lambda_{\text{KL}}=10\), \(\lambda_{\text{L1}}=100\), \(\lambda_{\text{IR}}=1\), and \(\lambda_{\text{RGB}}=1\).

### Qualitative Analysis

We demonstrate that the fused images provide a clearer representation of the environment than both modalities alone. From Figure 4, we see that our method produces well-fused images that retain the terrain features but extract the bright human silhouettes from the thermal image. However, our method still has limitations; for example, when objects of interest are small in size, the attention mechanism may encounter challenges in accurately determining the essential features from both modalities, leading to the emergence of ghost artifacts in the fused image (see the fourth row in Figure 4).

### Quantitative Comparison

To quantify the fusion results, we evaluate the extent to which the information from each modality is preserved. This analysis involves calculating particular metrics individually for thermal and visual images against the fused image, to measure the level of information retention in the fusion process. We compare MISFIT-V against SeAFusion [15], another method that exhibits state-of-the-art performance for visual-thermal image fusion. Given the formulation of SeAFusion, we had to evaluate both methods on an autonomous driving dataset [8], which contains more structure than WiSAR settings and has ground truth labels. We compare on five metrics: mean-squared error (MSE), universal quality index (UQI), multi-scale structural similarity (MSSSIM), normalized mutual information (NMI), and peak signal-to-noise ratio (PSNR). The y-axis of the comparison (see Figure 5) represents the numerical values corresponding to each metric. For brevity, we present normalized results for three of these metrics in this section. The plots for the remaining metrics can be found in Appendix B.
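For reference, the two simplest of these metrics can be stated inline. The sketch below computes MSE and PSNR between a source image and the fused output with plain numpy; this is a hedged illustration of the metric definitions, and libraries such as scikit-image provide equivalent, better-tested routines.

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two images scaled to [0, 1]."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

# Information retention is scored per modality, e.g.:
# psnr(ir, fused), psnr(rgb, fused), mse(ir, fused), mse(rgb, fused)
```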
One interesting trend in our results is that MISFIT-V prioritizes information from the thermal modality to a greater extent and performs only marginally worse in retaining visual information when compared to SeAFusion. While MISFIT-V outperforms SeAFusion in some aspects and is only slightly worse in others, it offers a significant advantage over SeAFusion by eliminating the need for semantic labeling and ground truth data, thus enhancing its scalability across diverse datasets. This characteristic enhances the applicability and adaptability of MISFIT-V in a wider range of scenarios.

Figure 4: **MISFIT-V Results. Each row, from left to right, shows a scene’s thermal representation, its visual representation, and finally the resulting fused image via MISFIT-V, in a wilderness environment. The yellow bounding boxes highlight the locations of humans.**

### Ablation Study

In the pursuit of refining and optimizing our proposed methodology, we conducted an ablation study to analyze the effects of various modifications on the performance of our model. Through a series of controlled experiments, we aimed to dissect the contribution of specific components and choices within the architecture and total loss function. Here, we present the findings of our ablation study, comparing the original method with two distinct variations: one adjusting the weightage of certain loss terms (Figure 6), and one excluding the cross-attention mechanism between thermal and visual data.

#### 4.4.1 Impact of Loss Function Variations on Fused Image Quality

An essential component of our proposed method revolves around the integration of L1 loss within the comprehensive loss function, aimed at optimizing the fusion procedure. To assess the significance of this specific loss term, we modified the weightage attributed to the L1 loss. In particular, we reduced the weightage from its original value of 100 to a much lower value of 1. The rationale behind this manipulation was to ascertain whether diminishing the emphasis on the L1 loss would result in discernible alterations in the quality of the generated fused images. See Figures 6 and 12.

The outcomes of this experiment revealed an intriguing insight. By reducing the L1 loss weightage, we observed a distinct deterioration in the comprehensibility of the resultant fused images. In other words, when the weightage was lowered from 100 to 1, the fused images exhibited a decrease in their interpretability and coherence. This phenomenon suggests that the L1 loss component plays a pivotal role in shaping the clarity and visual coherence of the fused images. As such, its significance as a contributing factor to the overall loss function is highlighted, reinforcing the critical importance of its weightage within the fusion process.

In another variation, we examined the consequences of omitting the Kullback-Leibler (KL) loss term while maintaining the weightage of the L1 loss at the original value of 100. This omission aimed to explore the repercussions of excluding the KL loss term on the final quality of the fused images. The subsequent analysis, as evidenced by the metric plots presented below, offers valuable insights into the outcomes of these variations and their implications for the fused image quality. See Figure 6. For additional information and graphical representations for other metrics, please refer to Appendix C. The experimental results shed light on an intriguing phenomenon.
When the KL loss was removed from the loss function, we observed a discernible reduction in the quality of the fused images. This reduction was evident across the various metrics that assess image quality, underscoring the importance of the KL loss in enhancing the fusion process. By omitting the KL loss, which serves as a vital bridge between the latent space and the generated image, the model's ability to capture and reproduce intricate visual features was compromised. Consequently, the fused images exhibited a lower level of fidelity and coherence.

Figure 5: **Method Comparison. This plot shows the results of three performance metrics comparing thermal and visual images against fused images generated by MISFIT-V and SeAFusion.**

Figure 6: Comparison of image fusion performance using the MSSSIM metric. The first column, labeled ‘Original’, presents the scores achieved by the original method. The second column, labeled ‘\(\lambda_{\mathrm{L1}}=1\)’, shows the fusion performance when the weightage of the L1 loss is adjusted to 1. The third column displays the results obtained when the KL loss term is omitted. Notably, the quality of the fused image is observed to decrease when the KL loss is omitted, and this degradation is further exacerbated when \(\lambda_{\mathrm{L1}}\) is set to 1. This visually emphasizes the significance of the KL loss term and the weightage of the L1 loss in maintaining the quality of the generated fused images.

#### 4.4.2 Impact of Attention Mechanism on Fusion Quality

The cross-attention mechanism serves as a pivotal bridge between thermal and visual data, enabling the model to capture distinct yet complementary information from both modalities. In this ablation variation, we removed the cross-attention mechanism entirely from the architecture to evaluate its influence on the fusion process. The comparison is presented in Figure 7, where the right image displays the fused images generated without the attention mechanism and the left image showcases the fused images produced with the attention mechanism. Notably, the fused image generated without attention exhibited ghost artifacts and the inclusion of extraneous visual features, which can be attributed to a lack of emphasis on the essential characteristics of both modalities during the fusion process. In contrast, the fused images generated with the attention module demonstrated a distinct improvement in terms of quality and coherence. The attention mechanism effectively identifies and prioritizes significant features while minimizing the impact of less relevant visual features. This results in a more balanced and comprehensive fused image that accurately represents the salient information present in both thermal and visual modalities. The visual comparison is further supported by quantitative assessments, where metrics such as Mean-Squared Error (MSE), Multi-Scale Structural Similarity Index (MSSSIM), Normalised Mutual Information (NMI), Universal Quality Index (UQI), and Peak Signal-to-Noise Ratio (PSNR) are employed to quantify the improvement in image quality achieved through the attention mechanism. Please refer to Figure 8 for a detailed comparison of metrics including UQI, MSSSIM, and MSE. For additional information and graphical representations for other metrics, please consult Appendix D.
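Since eq. (3) treats whole images as probability distributions, one concrete way to realize it (under our assumption that pixel intensities are flattened and normalised to sum to one, a step the paper does not spell out) is:

```python
import torch

def image_kl(p_img, q_img, eps=1e-8):
    """KL(p || q) between two images, treating normalised pixel
    intensities as a distribution (an assumed reading of eq. (3))."""
    p = p_img.flatten().float() + eps
    q = q_img.flatten().float() + eps
    p, q = p / p.sum(), q / q.sum()
    return torch.sum(p * (p.log() - q.log()))

# Eq. (3): L_KL = KL(I_fus || I_IR) + KL(I_fus || I_RGB)
# kl_total = image_kl(fused, ir) + image_kl(fused, rgb)
```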
## 5 Conclusion

We have presented MISFIT-V, a novel approach for visual-thermal image fusion that leverages a GAN architecture with two discriminators and a cross-attention mechanism to blend crucial information from both modalities, thereby eliminating the need to align the images altogether. The experimental results demonstrate the robustness and superior performance of MISFIT-V, which outperforms a state-of-the-art baseline method while effectively handling misaligned images. By enabling a more complete representation of the environment, MISFIT-V has the potential to enhance WiSAR mission effectiveness and alleviate the cognitive load on human operators.
2309.05474
How robust are gravitational wave predictions from cosmological phase transitions?
Gravitational wave (GW) predictions of cosmological phase transitions are almost invariably evaluated at either the nucleation or percolation temperature. We investigate the effect of the transition temperature choice on GW predictions, for phase transitions with weak, intermediate and strong supercooling. We find that the peak amplitude of the GW signal varies by a factor of a few for weakly supercooled phase transitions, and by an order of magnitude for strongly supercooled phase transitions. The variation in amplitude for even weakly supercooled phase transitions can be several orders of magnitude if one uses the mean bubble separation, while the variation is milder if one uses the mean bubble radius instead. We also investigate the impact of various approximations used in GW predictions. Many of these approximations introduce at least a 10% error in the GW signal, with others introducing an error of over an order of magnitude.
Peter Athron, Lachlan Morris, Zhongxiu Xu
2023-09-11T14:15:08Z
http://arxiv.org/abs/2309.05474v2
# How robust are gravitational wave predictions from cosmological phase transitions? ###### Abstract Gravitational wave (GW) predictions of cosmological phase transitions are almost invariably evaluated at either the nucleation or percolation temperature. We investigate the effect of the transition temperature choice on GW predictions, for phase transitions with weak, intermediate and strong supercooling. We find that the peak amplitude of the GW signal varies by a factor of a few for weakly supercooled phase transitions, and by an order of magnitude for strongly supercooled phase transitions. The variation in amplitude for even weakly supercooled phase transitions can be several orders of magnitude if one uses the mean bubble separation, while the variation is milder if one uses the mean bubble radius instead. We also investigate the impact of various approximations used in GW predictions. Many of these approximations introduce at least a 10% error in the GW signal, with others introducing an error of over an order of magnitude.

## I Introduction

We are now in an era where existing gravitational wave (GW) data can have an impact on our understanding of physics beyond the Standard Model (BSM) of particle physics. Very recently pulsar timing array experiments have detected a stochastic GW background (SGWB) [1, 2, 3, 4] and find that new physics explanations have a slight preference over less exotic sources [5]. Existing data on GWs from the LIGO/VIRGO network [6] is also constraining well-motivated Pati-Salam models that can lead to gauge coupling unification [7] as well as models of the dark sector [8]. However, with this exciting progress also come significant challenges. It is now essential that we have reliable calculations of the GW spectra for BSM models where we understand the uncertainties involved and the effects of various approximations and assumptions that are commonly used.

There are many challenging calculations involved in going from a particular BSM scenario to a predicted GW spectrum; see Ref. [9] for a review. Quantities derived from the effective potential can strongly depend on the method used [10], and uncertainties in the GW spectra from effective potential computations have been investigated in Ref. [11]. Here we show that even if the effective potential calculation was under full control, there are many other challenges for reliable predictions of GW spectra.

Since the first direct detection of GWs [12] in 2015, there has been substantial progress in understanding how to characterise phase transitions and extract GW predictions. Here we mention a few important points. Sound waves are expected to be the largest source of GWs following Ref. [13], which showed that the sound wave source lasts long after the bubbles have merged. However, more recently it has been shown that in many cases the lifetime is nonetheless significantly shorter than the Hubble time [14, 15], and suppression factors were introduced [16, 17] to account for the finite lifetime of the source. These suppression factors were subsequently refined to address issues stemming from the derivation of the Hubble time as the maximum lifetime of the source [18]. Furthermore, the modelling of GWs from sound waves has improved considerably from simulations [19, 20] and the construction of the sound shell model [21] and its further development [22, 23, 24]. Significant improvements have also been made in determining the kinetic energy fraction that is available to source GWs.
New parameterisations have been developed that go beyond simplified models such as the bag model, first for the case where bubbles expand as supersonic detonations [25] and later generalised to cover subsonic deflagrations and hybrids [26]. These advances have both improved predictions and raised questions about our previous and current understanding of how sensitive GW experiments can be to first-order phase transitions. In particular, strongly supercooled phase transitions present significant challenges for calculations and may lead to erroneous explanations of GW signals [27]. We therefore treat the extent of supercooling as an important parameter when considering the uncertainties and compare scenarios with weak, intermediate, and strong supercooling.

Previously, we have shown that in the presence of supercooling various possible choices of transition temperature decouple [28] and it has been argued that the percolation temperature should be used [28, 29, 30, 17]. Here we show explicitly that the peak amplitude and frequency of the GW spectrum -- and thus the resulting signal-to-noise ratio (SNR) at a detector -- are sensitive to the choice of transition temperature. This is especially true for strongly supercooled phase transitions as one might expect, but is also true for weakly supercooled phase transitions. We show that if one chooses the nucleation temperature as the transition temperature (as is very common practice), then the peak amplitude, peak frequency, and SNR can change by orders of magnitude compared to when using the percolation temperature. This has a huge impact on the prospects for detection. However, such a drastic change only arises when using the mean bubble separation as the characteristic length scale. If one is more careful about the choice of length scale, the discrepancy can potentially be reduced to a factor of a few.

Additionally, we investigate how the predictions can be affected by different estimates of the thermal parameters which determine the GW spectrum. We compare various parameterisations of the kinetic energy fraction, which determines the energy available for sourcing GWs. Another important factor that determines the peak GW amplitude and frequency is the timescale during which the source is active, which is usually replaced by a characteristic length scale. The mean bubble separation is used as this length scale in lattice simulations. We compare the impact that different estimates of this length scale have on GW signals, and we qualitatively explore the consequences of using the mean bubble radius instead. Finally, because the turbulence contribution to the overall GW signal is not well modelled, but could be significant, we also compare several different choices for the energy available for sourcing GWs from turbulence and show the impact that this can have on the SNR.

In section II we describe first-order phase transitions and supercooling in more detail, and we define important milestone temperatures. In section III we describe how properties of the phase transition and the thermal parameters are computed in particle physics models. We also discuss various estimates for these thermal parameters that are made in the literature. We briefly describe how we use these thermal parameters to predict GW spectra in section IV. We then introduce the model we use to obtain a first-order phase transition in section V. Finally, we present our results in section VI and provide concluding remarks in section VII.
## II First-order phase transitions and supercooling

As the Universe cools down the shape of the effective potential changes such that minima (or phases) can appear and disappear and cosmological phase transitions take place. These cosmological phase transitions play an important role in particle physics, such as breaking the electroweak symmetry and thereby generating masses for the fundamental particles via the Higgs mechanism. Further, if a phase transition is of first order (i.e. there is a potential barrier separating the phases), GWs are produced in the process.

A potential barrier between the phases prevents an instantaneous transition from the local metastable minimum to the deeper minimum on the other side of the barrier. Instead, the phase transition must proceed via either tunnelling through the barrier or fluctuating over it. This first becomes possible when the Universe cools below the critical temperature, \(T_{c}\), where the free energy densities of the two minima are degenerate. Below \(T_{c}\) the transition begins through a stochastic process where the tunnelling or fluctuations occur at localised points in space-time, and when this happens bubbles of the new phase can form and grow in a process known as bubble nucleation. The phase transition completes if the bubbles of the new phase fill the whole universe. More precisely, because it is a stochastic process, we define the completion temperature, \(T_{f}\), to be the temperature when the fraction of the universe left in the false vacuum (i.e. the old phase) is less than \(1\%\), \(P_{f}(T_{f})<0.01\).

When this process takes a long time to complete, \(T_{f}\) may be much smaller than the critical temperature \(T_{c}\) at which the new minimum first becomes energetically favoured. This is known as supercooling, in analogy with the phenomenon where liquids are supercooled well below their freezing point. All first-order cosmological phase transitions exhibit some degree of supercooling because they do not happen instantly. However, the temperature change can vary from \(T_{f}\) being within \(1\%\) of \(T_{c}\) to being orders of magnitude smaller. The degree of supercooling can have a significant impact on a phase transition and is an important characteristic when comparing phase transitions. Increasing supercooling may boost the energy released in the phase transition and the amplitude of resultant GWs, but too much supercooling can lead to the transition failing to complete.

Strongly supercooled phase transitions admit qualitatively different behaviour compared to weakly supercooled phase transitions. Because the nucleation rate is lower, the smaller number of bubbles that are nucleated grow to much larger sizes. This means that the number of bubbles per Hubble volume, \(N\), can be less than one during the period where most of the bubbles are colliding or even by the time the phase transition has completed [28]. This can be expressed more precisely as follows. The nucleation temperature \(T_{n}\) is defined by the condition \(N(T_{n})=1\). Usually \(T_{n}\) is higher than the percolation temperature \(T_{p}\), defined by the moment when the false vacuum fraction, \(P_{f}\), is roughly \(71\%\): \(P_{f}(T_{p})=0.71\). Roughly speaking, \(T_{p}\) is where the bubbles should be in contact with each other (see section IV.7.2 of Ref. [9] for more details). In strongly supercooled scenarios the nucleation temperature can be reached some time after most of the bubble collisions have taken place.
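As a toy illustration of how these milestone temperatures are extracted in practice, suppose the false vacuum fraction \(P_{f}(T)\) has been sampled on a temperature grid and is monotonic over that range (an assumption of this sketch); the defining conditions can then be inverted numerically:

```python
import numpy as np
from scipy.interpolate import interp1d

def milestone_temperatures(T_grid, Pf_grid):
    """Invert P_f(T) to find T_p, T_e and T_f from their defining
    conditions; assumes Pf_grid varies monotonically on T_grid."""
    inv = interp1d(Pf_grid, T_grid)
    return {"T_p": float(inv(0.71)),        # P_f(T_p) = 0.71
            "T_e": float(inv(np.exp(-1))),  # P_f(T_e) = 1/e
            "T_f": float(inv(0.01))}        # P_f(T_f) = 0.01
```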
In more extreme cases the phase transition may complete, reaching \(P_{f}(T_{f})<0.01\), before \(N(T)=1\). In such cases there is no nucleation temperature. However, strongly supercooled scenarios can also have enough bubble nucleation such that \(N(T)=1\) is reached relatively early in the phase transition but the transition is still slow, leading to a substantial gap between \(T_{n}\) and \(T_{p}\) or \(T_{f}\). Thus, the nucleation temperature is not coupled with the actual progress of the phase transition and the production of GWs.

## III Determining properties of the phase transition

### Temperatures and length scales

The rate of a phase transition depends strongly on the size and persistence of the potential barrier. In fast transitions the barrier disappears fairly quickly. The nucleation rate is initially zero at \(T_{c}\) and then increases rapidly as the barrier dissolves, giving an exponential nucleation rate of the form

\[\Gamma(t)=\Gamma(t_{*})\exp(\beta(t-t_{*})), \tag{1}\]

where \(t_{*}\) is some relevant time in the transition (often taken to correspond to \(T_{n}\)). In contrast, if the barrier persists at low temperatures or even at \(T=0\), the nucleation rate can instead reach a maximum at some temperature \(T_{\Gamma}\) because lower temperature reduces the likelihood of thermal fluctuations over the barrier. The nucleation rate is given by [31]

\[\Gamma(T)=T^{4}\left(\frac{S(T)}{2\pi}\right)^{\frac{3}{2}}\exp(-S(T)), \tag{2}\]

where \(S(T)\) is the bounce action, which we obtain from a modified version of CosmoTransitions [32].1

Footnote 1: See appendix F of Ref. [28] for details of the modifications.

If one expresses \(S\) as a function of time and Taylor expands about \(t_{*}\),

\[S(t)\approx S(t_{*})+\left.\frac{\mathrm{d}S}{\mathrm{d}t}\right|_{t=t_{*}}\!(t-t_{*}) \tag{4}\]
\[+\left.\frac{1}{2}\frac{\mathrm{d}^{2}S}{\mathrm{d}t^{2}}\right|_{t=t_{*}}\!(t-t_{*})^{2}+\cdots, \tag{5}\]

then truncating at first order gives the exponential nucleation rate given in eq. (1), and we can identify

\[\beta=-\left.\frac{\mathrm{d}S}{\mathrm{d}t}\right|_{t=t_{*}}. \tag{6}\]

This can be useful because \(\beta\) is related to the mean separation of bubbles, \(R_{\mathrm{sep}}\), through [33]

\[R_{\mathrm{sep}}=(8\pi)^{\frac{1}{3}}\frac{v_{\mathrm{w}}}{\beta}. \tag{7}\]

The mean bubble separation is an important quantity for GW predictions. Equation (7) should hold when evaluated at the temperature where \(P_{f}\) has decreased to \(1/e\), denoted by \(T_{e}\). Computing \(\beta\) directly from the bounce action and using eq. (7) to estimate \(R_{\mathrm{sep}}\) can simplify calculations significantly. However, while an exponential nucleation rate is a common assumption and eq. (7) is widely used, these approximations can be problematic in strongly supercooled scenarios. We will demonstrate the potential consequences of this in section VI.

Note that if the transition temperature \(T_{*}\) used to evaluate \(\beta\) is close to the temperature where the nucleation rate is maximised, \(T_{\Gamma}\), then \(\beta\approx 0\). Further, \(\beta\) is negative when \(T_{*}<T_{\Gamma}\). Therefore, the use of \(\beta\) entirely breaks down in these cases. However, because \(\beta\) vanishes one can truncate eq. (5) at second order and obtain a Gaussian nucleation rate,

\[\Gamma(t)=\Gamma(t_{*})\exp\!\left(-\frac{\beta_{V}^{2}}{2}(t-t_{*})^{2}\right)\!, \tag{8}\]

where

\[\beta_{V}=\sqrt{\left.\frac{\mathrm{d}^{2}S}{\mathrm{d}t^{2}}\right|_{t=t_{\Gamma}}}. \tag{9}\]
We can relate \(\beta_{V}\) to \(R_{\mathrm{sep}}\) through [14]

\[R_{\mathrm{sep}}=\left(\sqrt{2\pi}\frac{\Gamma(T_{\Gamma})}{\beta_{V}}\right)^{-\frac{1}{3}}. \tag{10}\]

It is unclear how well the approximations eq. (7) and eq. (10) perform, so we include this investigation in our study. We note that we use temperature rather than time in our analysis, so we employ the usual time-temperature relation [28]

\[\frac{\mathrm{d}t}{\mathrm{d}T}=\frac{-1}{TH(T)}. \tag{11}\]

Thus, \(\beta\) and \(\beta_{V}\) are in fact calculated from \(\mathrm{d}S/\mathrm{d}T\). The Hubble rate is given by

\[H(T)=\sqrt{\frac{8\pi G}{3}\rho_{\mathrm{tot}}(T)},\]

where \(\rho_{\mathrm{tot}}\) is the total energy density. We use energy conservation such that \(\rho_{\mathrm{tot}}=\rho_{f}-\rho_{\mathrm{gs}}\), where \(\rho_{f}\) is the false vacuum energy density and \(\rho_{\mathrm{gs}}\) is the ground state energy density. We renormalise the free energy density such that \(\rho_{\mathrm{gs}}=0\), leaving \(\rho_{\mathrm{tot}}=\rho_{f}\).

Returning to the full treatment, the nucleation rate in eq. (2) can be used directly to compute the false vacuum fraction \(P_{f}\) as a function of temperature, given by

\[P_{f}(T)=\exp\!\left[-\frac{4\pi}{3}\!\int_{T}^{T_{c}}\!\frac{dT^{\prime}}{T^{\prime 4}}\frac{\Gamma(T^{\prime})}{H(T^{\prime})}\!\left(\int_{T}^{T^{\prime}}\!\!dT^{\prime\prime}\frac{v_{w}(T^{\prime\prime})}{H(T^{\prime\prime})}\right)^{3}\right]. \tag{12}\]

Here we have assumed that the Universe is expanding adiabatically and we neglect the initial radius of the bubble at formation. See Ref. [9] for more details on the derivations and assumptions. The last undetermined quantity in eq. (12) is the bubble wall velocity, \(v_{w}\). We discuss our treatment of \(v_{w}\) in section III.2.

The number of bubbles nucleated at any given temperature can also be computed from eq. (2). In the literature it is standard to calculate the nucleation temperature from an approximation for the number of bubbles per Hubble volume,

\[N(T)=\int_{T}^{T_{c}}\!\!dT^{\prime}\,\frac{\Gamma(T^{\prime})}{T^{\prime}H^{4}(T^{\prime})}. \tag{13}\]

This implicitly assumes a fast transition so that one can assume \(P_{f}=1\) before \(T_{n}\), and thus omit \(P_{f}\) from the integrand [28].2 In this study we only use \(T_{n}\) to show the impact of approximations made in the literature, so we use the expression in eq. (13) to calculate \(T_{n}\) for consistency.

Footnote 2: A factor of \(4\pi/3\) from the spherical Hubble volume is also neglected in this treatment.

In contrast, to compute the mean bubble separation we determine the bubble number density with \(P_{f}(T)\) included, to account for the fact that true vacuum bubbles can only nucleate in regions that are still in the false vacuum. The mean bubble separation is given by

\[R_{\mathrm{sep}}(T)=(n_{B}(T))^{-\frac{1}{3}}, \tag{14}\]

where

\[n_{B}(T)=T^{3}\!\!\int_{T}^{T_{c}}\!\!dT^{\prime}\frac{\Gamma(T^{\prime})P_{f}(T^{\prime})}{T^{\prime 4}H(T^{\prime})} \tag{15}\]

is the bubble number density. Finally, there are possibly other choices for the characteristic length scale in GW predictions [34; 35; 9; 14]. However, fits for GW predictions are determined in terms of \(R_{\mathrm{sep}}\), and one cannot directly replace \(R_{\mathrm{sep}}\) with alternative length scales in those fits.
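As a schematic illustration of how these length scales are computed and compared, the sketch below integrates eqs. (14)–(15) numerically and contrasts the result with the \(\beta\)-based estimate of eq. (7). Here `Gamma`, `Pf` and `H` are assumed to be user-supplied callables in consistent units; this mimics, but is not, the TransitionSolver implementation.

```python
import numpy as np

def n_B(T, T_c, Gamma, Pf, H, n=2000):
    """Bubble number density, eq. (15), by direct quadrature."""
    Tp = np.linspace(T, T_c, n)
    integrand = Gamma(Tp) * Pf(Tp) / (Tp**4 * H(Tp))
    return T**3 * np.trapz(integrand, Tp)

def R_sep_exact(T, T_c, Gamma, Pf, H):
    """Mean bubble separation, eq. (14)."""
    return n_B(T, T_c, Gamma, Pf, H) ** (-1.0 / 3.0)

def R_sep_beta(vw, beta):
    """Approximation of eq. (7); expected to hold near T_e."""
    return (8.0 * np.pi) ** (1.0 / 3.0) * vw / beta
```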
Still, we seek to investigate (among other things) the impact of the choice of \(T_{\star}\) on the GW predictions (see section VI), so it is important to understand the impact of \(T_{\star}\) on various length scales. Thus, we also consider the mean bubble radius, \[\bar{R}(T)=\frac{T^{2}}{n_{B}(T)}\int_{T}^{T_{c}}\!\!dT^{\prime}\frac{\Gamma( T^{\prime})P_{f}(T^{\prime})}{T^{\prime 4}H(T^{\prime})}\!\int_{T}^{T^{\prime}}\!\! dT^{\prime\prime}\frac{v_{w}(T^{\prime\prime})}{H(T^{\prime\prime})}. \tag{16}\] For more details see section 5.5 of Ref. [9] and references therein. We can compute the milestone temperatures \(T_{n}\), \(T_{p}\), \(T_{e}\) and \(T_{f}\) using eqs. (12) and (13), and we can similarly use eqs. (14) and (16) to compute \(R_{\rm sep}\) and \(\bar{R}\) at these milestone temperatures or at arbitrary temperatures. We use PhaseTracer[36] to map the phase structure of the potential and TransitionSolver[37] to analyse the phase history, including all relevant phase transitions,3 as well as determine the milestone temperatures and relevant parameters for GW predictions. The GW fits are parameterised in terms of thermal parameters, which -- in addition to the transition temperature and the characteristic length scale -- also include hydrodynamic parameters such as the kinetic energy fraction and the bubble wall velocity. Footnote 3: There is another high-temperature phase transition with \(T_{c}\sim 180\) GeV in the intermediate and strong supercooling scenarios considered in section V. The phase transition is very fast and is not relevant to our analysis. ### Hydrodynamic parameters Here we discuss the hydrodynamic parameters used in GW fits. First we discuss our best treatment for these parameters, then we introduce several common variations to this treatment used in the literature. We will investigate the impact of these variations on the GW signature in section VI.2. All of these parameters -- and all of the quantities that they depend on -- should be evaluated at the transition temperature, \(T_{\star}\). In our best treatment, we take \(T_{\star}=T_{p}\), and we determine the kinetic energy fraction using the pseudotrace difference between the phases, corresponding to M2 in Ref. [25]: \[K=\frac{\bar{\theta}_{f}(T_{\star})-\bar{\theta}_{t}(T_{\star})}{\rho_{\rm tot} (T_{\star})}\kappa_{\bar{\theta}}(\alpha_{\bar{\theta}}(T_{\star}),c_{s,f}(T_{ \star})). \tag{17}\] Here, \(c_{s,f}\) is the speed of sound in the false vacuum and \(\alpha_{\bar{\theta}}\) is the transition strength parameter. The speed of sound in phase \(i\) is given by \[c_{s,i}^{2}(T)=\left.\frac{\partial_{T}V}{T\partial_{T}^{2}V}\right|_{\mathbf{ \phi}_{i}(T)}, \tag{18}\] where \(V\) is the effective potential, or free energy density, and \(\mathbf{\phi}_{i}\) is the field configuration for phase \(i\). The transition strength parameter is defined as \[\alpha_{x}(T)=\frac{4(x_{f}(T)-x_{t}(T))}{3w_{f}(T)}, \tag{19}\] where \(x\) is a hydrodynamic quantity for which various choices exist in the literature, and \(w_{f}\) is the enthalpy density in the false vacuum. We use the pseudotrace for \(x\) in our best treatment, given by [25] \[\bar{\theta}_{i}(T)=\frac{1}{4}\bigg{(}\rho_{i}(T)-\frac{p_{i}(T)}{c_{s,t}^{2} (T)}\bigg{)} \tag{20}\] in phase \(i\), where \(\rho\) and \(p\) are the energy density and pressure, respectively. The pseudotrace generalises the trace anomaly to models where the speed of sound deviates from \(1/\sqrt{3}\). We use the code snippet provided in the appendix of Ref. 
[25] to determine the efficiency coefficient \(\kappa_{\bar{\theta}}\).

Turbulence from cosmological phase transitions is not well understood because current hydrodynamic simulations cannot probe the turbulent regime. Hence, it is difficult to estimate the efficiency coefficient for turbulence, \(\kappa_{\rm turb}\), which is needed for turbulence contributions to the production of GWs. However, it is expected that stronger phase transitions (with larger \(\alpha\)) could result in more turbulence developing sooner and could reduce the lifetime of the sound wave source. Lacking sufficient modelling of the turbulence source, we consider the efficiency coefficient as a fraction of \(\kappa_{\bar{\theta}}\),

\[\kappa_{\rm turb}=\epsilon\kappa_{\bar{\theta}}, \tag{21}\]

and we take \(\epsilon=0.05\) as our default treatment.

Finally, for our treatment of the bubble wall velocity, we assume bubbles grow as supersonic detonations regardless of the extent of supercooling for simplicity. General friction estimates are beyond the scope of this study, and neither the ultra-relativistic nor the non-relativistic limits of friction are applicable for all benchmark points in our study. We assume the bubbles expand at the Chapman-Jouguet velocity,

\[v_{w}=v_{\rm CJ}=\frac{1+\sqrt{3\alpha_{\bar{\theta}}(1+c_{s,f}^{2}(3\alpha_{\bar{\theta}}-1))}}{c_{s,f}^{-1}+3\alpha_{\bar{\theta}}c_{s,f}}, \tag{22}\]

where temperature dependence has been suppressed. The Chapman-Jouguet velocity is by no means the most likely supersonic detonation solution; however, it does capture the dependence on the transition temperature and ensures a supersonic detonation regardless of the extent of supercooling. The same cannot be said for any fixed choice of \(v_{w}\).

We now turn to the variations on our best treatment. First, we consider the effect of setting \(T_{*}\) to the other milestone temperatures: \(T_{n}\), \(T_{e}\) and \(T_{f}\). This involves using our best treatment (e.g. calculating \(K\) using eq. (17)) but evaluating all quantities at, for example, \(T_{n}\) instead of \(T_{p}\). As a reminder, \(T_{n}\) can be obtained by the condition \(N(T_{n})=1\) (see eq. (13)), while \(T_{p}\), \(T_{e}\) and \(T_{f}\) all come from conditions on the false vacuum fraction (see eq. (12)); specifically, \(P_{f}(T_{p})=0.71\), \(P_{f}(T_{e})=1/e\) and \(P_{f}(T_{f})=0.01\).

The approach we use for estimating \(K\) was developed only recently in Refs. [25; 26], so it is not yet widely adopted. More approximate treatments are widespread, which we enumerate here. It is very common to determine \(K\) through

\[K_{x}=\frac{\kappa_{x}\alpha_{x}}{1+\alpha_{x}}, \tag{23}\]

with various choices of \(x\) being made. This parameterisation alone introduces error in the determination of \(K\), regardless of the choice of \(x\) (see appendix A for details). The trace anomaly,

\[\theta(T)=\frac{1}{4}(\rho(T)-3p(T)), \tag{24}\]

is the closest alternative to \(\bar{\theta}\), in fact exactly matching \(\bar{\theta}\) when \(c_{s,t}=1/\sqrt{3}\), as in the bag model. The other common choices for \(x\) are the pressure \(p\) and the energy density \(\rho\). The efficiency coefficient used for these choices of \(x\) was derived in the bag model, and is given by [38]

\[\kappa=\frac{\sqrt{\alpha_{x}}}{0.135+\sqrt{0.98+\alpha_{x}}} \tag{25}\]

for \(v_{w}=v_{\rm CJ}\), which is implicitly dependent on temperature. In these more approximate treatments of \(K\), the enthalpy density in the denominator of eq.
(19) is usually replaced with \(w_{f}=\frac{4}{3}\rho_{R}\), where \(\rho_{R}=\frac{\pi^{2}}{30}g_{\rm eff}T^{4}\) is the radiation energy density and \(g_{\rm eff}\) is the effective number of relativistic degrees of freedom. We find the replacement of the enthalpy density in this way (which comes from the bag model) to be a very good approximation. This replacement leads to less than 1% error in the GW predictions. Therefore our \(\alpha_{\rho}\) effectively corresponds to the latent heat definition frequently found in the literature, see eq. 5.35 of Ref. [9]. Similarly, \(\alpha_{\theta}\) also effectively corresponds to eq. 5.36 of Ref. [9], which also frequently appears in the literature, though here one also needs to substitute \(\theta=\frac{1}{4}(\rho-3p)\). One could also replace \(\bar{\theta}\) with \(\theta\) in eq. (17) and use eq. (25) for \(\kappa\), corresponding to M3 in Ref. [25]. However, we find this introduces at most 1% difference in the GW predictions compared to using eq. (23) with \(x=\theta\), so we do not consider this variation in our results.

As described in section III.1, one can approximate the mean bubble separation \(R_{\rm sep}\) through the often-used thermal parameter \(\beta\), or through \(\beta_{V}\). We investigate the error in these approximations for \(R_{\rm sep}\) and the corresponding effect on GW predictions. We also demonstrate the impact of using \(\bar{R}\) instead of \(R_{\rm sep}\), but we do not treat this as a variation of the treatment because mapping \(\bar{R}\) onto existing GW fits is currently ambiguous.

We also consider alternative treatments of the turbulence efficiency coefficient. The most obvious variation is to simply choose another arbitrary, fixed value. We choose \(\epsilon_{2}=0.1\), where the subscript ‘2’ denotes the index of this variation for \(\epsilon\). We also consider \(\epsilon_{3}=(1-\min(H(T_{*})\tau_{\rm sw},1))^{2/3}\), which comes from assuming that a reduction in the lifetime of the sound wave source \(\tau_{\rm sw}\) could boost the turbulence contribution to GW production [16; 39]. However, the effective lifetime of the sound wave source is more accurately suppressed by the factor \(\Upsilon=1-1/\sqrt{1+2H(T_{*})\tau_{\rm sw}}\) derived in Ref. [18]. This motivates a slightly modified choice: \(\epsilon_{4}=(1-\Upsilon)^{2/3}\).

There are of course many other variations to the treatment that could be considered, but we restrict our study to the variations mentioned thus far. Changes to the bubble wall velocity could significantly impact the GW predictions and even the phase transition properties, particularly if the expansion mode of the bubbles changes from a supersonic detonation. TransitionSolver currently does not use a full hydrodynamic treatment of bubble profiles and therefore only provides accurate predictions for supersonic detonations.4 Thus, we currently cannot explore the effect of \(v_{w}\) on GW predictions. We explored the impact of approximations made for the reheating temperature and GW redshifting factors in Ref. [27], and found that their effects were small. We do not reconsider these approximations here due to their accuracy. Also, we explored the accuracy of various approximations for \(T_{n}\) as a function of supercooling in Ref. [28]. Here we only calculate \(T_{n}\) using eq. (13), but we note that rougher approximations for \(T_{n}\) are unreliable in strongly supercooled scenarios, and would thus lead to significant errors in GW predictions.
Footnote 4: Reheating in the false vacuum for other bubble expansion modes affects both bubble nucleation and growth [40; 41; 42]. ## IV Gravitational waves We consider only the sound wave and turbulence sources of GWs in this study. The collision source is expected to contribute negligibly due to friction with the plasma. Even though some of the benchmark points listed in section V admit strong supercooling, the bubbles nucleate at temperatures where the plasma still imposes significant friction on the expanding bubble walls. Thus, we do not expect runaway bubble walls and consequently neglect the collision source altogether. The general scaling of the GW equations is predominantly governed by two key parameters: the kinetic energy fraction \(K\) and the characteristic length scale \(L_{*}\). We set \(L_{*}=R_{\rm sep}(T_{p})\) in our best treatment. The scaling of the peak amplitude \(\Omega_{\rm peak}\) and the peak frequency \(f_{\rm peak}\) is roughly \[\Omega_{\rm peak} \propto K^{n}L_{*}, \tag{26}\] \[f_{\rm peak} \propto L_{*}^{-1}, \tag{27}\] where \(n=2\) for sound waves and \(n=3/2\) for turbulence. The details of the GW equations we use can be found in appendix A.5 of Ref. [27]. In addition to the turbulence fit [43] and the sound shell model [21; 22] used for the sound wave source, we also consider another fit for the sound wave source provided in Ref. [19]. We will refer to this fit as the 'lattice fit' for the sound wave source, for lack of a better name. In this fit, the redshifted peak amplitude is \[h^{2}\Omega_{\rm sw}^{\rm lat}(f)=5.3\!\times\!10^{-2}\,\mathcal{R}_{\Omega}K ^{2}\!\left(\!\frac{\!HL_{*}}{c_{s,f}}\!\right)\Upsilon(\tau_{\rm sw})S_{\rm sw }(f), \tag{28}\] the redshifted peak frequency is \[f_{\rm sw}^{\rm lat}=1.58\,\mathcal{R}_{f}\!\left(\frac{1}{L_{*}}\right)\! \left(\frac{z_{p}}{10}\right), \tag{29}\] matching one of the key frequencies in the sound shell model, and the spectral shape is \[S_{\rm sw}(f)=\left(\frac{f}{f_{\rm sw}^{\rm lat}}\right)^{\!\!3}\!\left( \frac{7}{4+3(f/f_{\rm sw}^{\rm lat})^{2}}\right)^{\!\!\frac{7}{2}}. \tag{30}\] See Ref. [9] and the appendices of Ref. [27] for details of the redshifting factors \(\mathcal{R}_{f}\) and \(\mathcal{R}_{\Omega}\), the lifetime suppression factor \(\Upsilon\), and the simulation-derived factor \(z_{p}\) (which is taken to be \(z_{p}=10\)). All quantities in the fit are evaluated at \(T_{*}\), except for the redshifting factors. These are instead evaluated at the reheating temperature, which itself depends on \(T_{*}\). Just as in Ref. [27], we do not include a suppression factor coming from bubbles not reaching their asymptotic hydrodynamic profiles in the simulations from which the GW fits are obtained. This suppression factor would likely depend on \(T_{*}\) and the extent of supercooling, however further modelling is required. We also compute the SNR for the planned space-based GW detector LISA [44]. LISA has a peak sensitivity at the frequency scale \(f_{\rm LISA}\sim 10^{-3}\) Hz, which is the expected scale of GW signals from a first-order electroweak phase transition [45]. We use the projected sensitivity curve \(\Omega_{\rm LISA}\) from Refs. [46; 47], plotted in fig. 5. 
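As a numerical orientation, the lattice fit of eqs. (28)–(30) amounts to only a few lines. In this sketch \(K\), \(L_{*}\), \(H\), \(c_{s,f}\), the redshifting factors and \(\Upsilon\) are taken as precomputed inputs (our own illustrative function, not the TransitionSolver code):

```python
import numpy as np

def omega_sw_lattice(f, K, H, L, cs_f, R_Omega, R_f, Upsilon, z_p=10.0):
    """Sound-wave 'lattice fit', eqs. (28)-(30), for a frequency array f."""
    f_peak = 1.58 * R_f * (1.0 / L) * (z_p / 10.0)                        # eq. (29)
    shape = (f / f_peak)**3 * (7.0 / (4.0 + 3.0 * (f / f_peak)**2))**3.5  # eq. (30)
    return 5.3e-2 * R_Omega * K**2 * (H * L / cs_f) * Upsilon * shape     # eq. (28)
```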
We calculate the SNR as [47] \[{\rm SNR}=\sqrt{\mathcal{T}\!\int_{0}^{\infty}\!\!df\!\left(\frac{\Omega_{ \rm GW}(f)}{\Omega_{\rm LISA}(f)}\right)^{\!\!2}}, \tag{31}\] where \(\Omega_{\rm GW}\) is the total GW signal from the sound wave and turbulence sources, and assume an effective observation time \(\mathcal{T}\) of three years, coming from a mission duration of four years and 75% data taking uptime. ## V Model We use the real scalar singlet model -- which is a simple yet realistic extension to the Standard Model -- to realise a first-order electroweak phase transition. Details of this model and our treatment of one-loop corrections are available in section 4.2 of Ref. [28]. We improve the treatment by adding extra fermions (including all quarks and the muon and tau), and adding Boltzmann suppression factors to the Debye corrections. We also appropriately adjust the radiation degrees of freedom to \(g_{*}^{\prime}=22.25\). A similar treatment in a simpler single-field model was used in Ref. [27]. We consider four benchmark points (BPs) in this study, each with a different extent of supercooling. All BPs come from a narrow slice of the total parameter space of the model. We start with M2-BP2 of Ref. [28] and vary only the mixing angle \(\theta_{m}\) to vary the extent of supercooling. The other input parameters are fixed as \(\kappa_{hhs}=-1259.83\) GeV, \(\kappa_{sss}=-272.907\) GeV, \(v_{s}=663.745\) GeV and \(m_{s}=351.183\) GeV. The mixing angles and the milestone temperatures for the BPs are listed in table 1. The supercooling increases with the BP index. BP1 represents a typical weakly supercooled phase transition with only 1 GeV difference between the onset of bubble nucleation and percolation, and \(\alpha_{\bar{\theta}}\approx 0.01\). BP2 represents a moderately supercooled phase transition with \(\alpha_{\bar{\theta}}\approx 0.05\). Both of these BPs have an exponential nucleation rate, thus we do not calculate \(T_{\Gamma}\) for them. BP3 represents a very strongly supercooled phase transition, where the physical volume of the false vacuum only begins to decrease just below \(T_{p}\). While BP3 has a completion temperature, percolation is questionable [14; 28; 48]. The transition strength parameter is \(\alpha_{\bar{\theta}}\approx 1.7\), beyond the reach of current hydrodynamic simulations of GWs [20]. Thus, one must be cautious when interpreting GW predictions from BP3, and indeed BP4. BP4 has even stronger supercooling, so much so that the phase transition does not complete. The transition strength parameter in BP4 is \(\alpha_{\bar{\theta}}\approx 177\). ## VI Results ### Dependence on the transition temperature In this section we discuss the impact on GW predictions when varying the transition temperature, \(T_{*}\). The SNR as a function of \(T_{*}\) is shown in fig. 1 for each BP. The SNR varies by orders of magnitude over the duration of the phase transition. However, GWs are not produced until the phase transition is well underway, so we restrict our attention to the temperature range \(T\in[T_{f},\max(T_{n},T_{\Gamma})]\). There are two sets of curves -- solid and dashed -- which have starkly different forms in the temperature domain. The solid curves use \(L_{*}=R_{\rm sep}\) while the dashed curves use \(L_{*}=\bar{R}\), with everything else in the treatment being the same between the two sets of curves. 
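For reference, the SNR of eq. (31) reduces to a single quadrature once the predicted spectrum and the LISA sensitivity are tabulated on a common frequency grid; a minimal sketch, with the three-year effective observation time expressed in seconds:

```python
import numpy as np

def snr(f_grid, omega_gw, omega_lisa, T_obs=3 * 3.156e7):
    """Eq. (31): signal-to-noise ratio for a tabulated spectrum."""
    integrand = (omega_gw / omega_lisa) ** 2
    return np.sqrt(T_obs * np.trapz(integrand, f_grid))
```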
The most immediate difference between the two sets is that the SNR increases with \(T_{*}\) when using \(R_{\rm sep}\), and decreases with \(T_{*}\) when using \(\bar{R}\). In fig. 2(a,b) we see that the peak amplitude of GWs follows a similar behaviour: the amplitude increases (decreases) with \(T_{*}\) when using \(R_{\rm sep}\) (\(\bar{R}\)). Inversely, in fig. 3(a,b) we see that the peak frequency of GWs decreases with \(T_{*}\) when using \(R_{\rm sep}\), and increases considerably more slowly with \(T_{*}\) when using \(\bar{R}\). These observations can be easily explained by investigating the behaviour of \(R_{\rm sep}\) and \(\bar{R}\) as functions of \(T_{*}\) (see fig. 4). In fact, we find that the dominant thermal parameter when varying \(T_{*}\) is \(L_{*}\), not \(K\).

In fig. 4(a) we plot choices of the length scale as a function of \(T_{*}\) for BP2 (intermediate supercooling). The mean bubble separation is large near the start of the phase transition (at higher \(T_{*}\)) because there are few bubbles so their separation is large. The separation decreases over the course of the phase transition (with decreasing \(T_{*}\)) because new bubbles are nucleated. The mean bubble radius, on the other hand, begins very small because the first bubbles to nucleate have not had time to grow significantly. As the phase transition continues, pre-existing bubbles grow, but more small bubbles are nucleated, suppressing an increase in the mean radius. Thus, the mean bubble radius increases over time (i.e. with decreasing \(T_{*}\)) but varies less than the mean bubble separation. We also see that the mean bubble separation estimated using \(\beta\) actually emulates the mean bubble radius. This is unsurprising, because \(R_{\rm sep}\) is supposedly inversely proportional to \(\beta\), and \(\beta\) is much higher at the start of a phase transition, with the bounce action diverging at \(T_{c}\). Thus, \(R_{\rm sep}\) estimated using \(\beta\) is small at high \(T_{*}\), in line with \(\bar{R}\), whereas the true \(R_{\rm sep}\) is large at high \(T_{*}\).

The behaviour of \(R_{\rm sep}\) in BP3 (see fig. 4(b)) is more complicated due to strong supercooling. The expansion of space dilutes the bubble number density and increases the separation between bubbles. Additionally, bubble nucleation is negligible well below \(T_{\Gamma}\) so new bubbles are not nucleated to reduce the mean separation. With even stronger supercooling in BP4 (not shown), \(R_{\rm sep}\) begins to increase rapidly as \(T_{*}\) drops below \(T_{p}\). We also see that \(\beta\) cannot be used to estimate \(R_{\rm sep}\) in BP3 (at least below \(T_{\Gamma}\)). However, one can instead use \(\beta_{V}\) under the Gaussian nucleation rate approximation, which is seen to reproduce both \(R_{\rm sep}\) and \(\bar{R}\) quite well at \(T_{p}\) in this example.

Now that the temperature scaling of the length scales is clear, we can return to effects on the GW signal. First, the peak frequency for all sources is inversely proportional to \(L_{*}\) and is largely unaffected by any other thermal parameters. Only the frequency corresponding to the sound shell thickness scale (in the sound shell model) is directly affected by the hydrodynamic parameters \(K\), \(v_{w}\) and \(c_{s}\). The two key frequencies in the sound shell model are less separated with increased supercooling due to thickening of the sound shell. Otherwise, the behaviour of the peak frequencies in fig. 3 can be explained purely by the behaviour of the length scales in fig. 4.
If one uses \(\bar{R}\), the change in frequency with \(T_{*}\) is milder than when using \(R_{\rm sep}\). In general, stronger supercooling lowers the peak frequency at \(T_{p}\).

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \(\theta_{m}\) & \(T_{c}\) & \(T_{n}\) & \(T_{p}\) & \(T_{e}\) & \(T_{f}\) & \(T_{\Gamma}\) & \(\log_{10}(\alpha_{\bar{\theta}})\) \\ \hline BP1 & 0.24 & 117.0 & 106.0 & 104.8 & 104.7 & 104.6 & N/A & \(-1.938\) \\ \hline BP2 & 0.258 & 108.3 & 78.10 & 74.17 & 73.80 & 73.24 & N/A & \(-1.264\) \\ \hline BP3 & 0.262 & 106.2 & N/A & 32.46 & 25.65 & 12.69 & 59.47 & 0.2178 \\ \hline BP4 & 0.2623 & 106.1 & N/A & 10.09 & N/A & N/A & 59.57 & 2.248 \\ \hline \end{tabular} \end{table} Table 1: Benchmark points and their corresponding milestone temperatures. The mixing angle \(\theta_{m}\) is expressed in radians, and the temperatures have units of GeV. The transition strength parameter \(\alpha_{\bar{\theta}}\) is evaluated at \(T_{p}\).

Next, the peak amplitude for all sources is proportional to \(L_{*}\). However, the amplitude also depends on \(K\) and \(c_{s}\), as well as \(v_{w}\) indirectly through \(K\). Nevertheless, \(L_{*}\) typically has a dominant effect on the amplitude. In the absence of strong supercooling, \(R_{\rm sep}\) changes considerably with \(T_{*}\) while \(\bar{R}\) does not. Yet, \(K\) and the other hydrodynamic parameters change very little, so \(L_{*}\) still has a dominant effect even when using \(\bar{R}\). With strong supercooling, \(K\) and the other hydrodynamic parameters can vary considerably between \(T_{\Gamma}\) and \(T_{p}\). So too can \(\bar{R}\), while \(R_{\rm sep}\) remains approximately constant, and is in fact minimised near \(T_{\Gamma}\). The peak amplitude increases rapidly below \(T_{p}\) in BP4 not because of \(K\) (which is roughly unity even at \(T_{p}\)), but because of the expansion of space causing a rapid increase in \(L_{*}\).5 These are all generic features of strongly supercooled phase transitions, so the results and analysis presented here should apply to other BPs and other models.

Footnote 5: This also causes a rapid decrease in the peak frequency at low temperature, consistent with the findings in Ref. [27].

Combining the peak amplitudes and frequencies of the GW sources, one can then compare the GW signal to the sensitivity of a GW detector to obtain the SNR. We consider LISA in this study, but in principle the SNR at any detector could be calculated. Although we now have a clear picture of the behaviour of the peak amplitude and frequency, the behaviour of the SNR is complicated by the sensitivity window of LISA. The SNR is enhanced when the peak frequency matches the frequency range where LISA is most sensitive; that is, near \(f_{\rm LISA}\sim 10^{-3}\) Hz. If by varying \(T_{*}\) one would obtain a higher peak amplitude but shift the peak frequency further from LISA's optimal frequency range, the SNR could decrease. Thus, investigating the peak amplitude or peak frequency in isolation will not give a clear indication of detectability. In fig. 5 we plot the peak of the GW signal in the amplitude-frequency plane as a function of \(T_{*}\) for BP3 to provide further insight into these competing effects. When using \(R_{\rm sep}\) for a strongly supercooled phase transition, the peak frequency (amplitude) increases (decreases) with decreasing \(T_{*}\), until a reversal at \(T_{\Gamma}\).
However, between \(T_{\Gamma}\) and \(T_{p}\) the amplitude increases faster than the frequency decreases, increasing the SNR at LISA. Meanwhile, if one uses \(\bar{R}\) for a strongly supercooled phase transition, the peak frequency (amplitude) decreases (increases) with decreasing \(T_{*}\). In the example of BP3, the peak of the GW signal slides across the boundary of LISA's sensitivity curve, leading to an almost constant SNR between \(T_{\Gamma}\) and \(T_{f}\). One can imagine that a slightly different BP could alter the scaling of the GW peak, leading to a substantially different scaling of SNR with \(T_{*}\). Naturally, the curves for \(R_{\rm sep}\) and \(\bar{R}\) meet near \(T_{f}\) because the two length scales are very similar near the end of the phase transition (as was also demonstrated in Ref. [34]). The GW signal is formed from the sound wave and turbulence contributions, noting again that we have neglected the collision contribution. We consider one GW fit for the turbulence throughout, but we present results for two GW fits for sound waves: the sound shell model and lattice fits. First we compare the two fits for the sound wave source. Based on the SNR alone (see fig. 1) we find a significant discrepancy between the two fits at \(T_{p}\) in BP1 and BP2. The fits agree quite well for BP3 and BP4 when using \(R_{\rm sep}\), but this is a coincidence due to LISA's sensitivity window. Looking at the peak amplitudes and frequencies separately for BP3 and BP4 (see fig. 2(c,d) and fig. 3(c,d)), we see that the predicted GW signals are still different. When using \(\bar{R}\) instead, the SNR of the sound shell model is consistently smaller in BP1 and BP2 for all \(T_{*}\) because the peak frequency is always above LISA's optimal frequency, \(f_{\rm LISA}\). The situation is more complicated in BP3 and BP4 because the peak frequency crosses \(f_{\rm LISA}\) as \(T_{*}\) is varied. The ratio of peak amplitudes in the two sound wave fits is \(\Omega_{\rm sw}^{\rm ss}/\Omega_{\rm sw}^{\rm lat}\approx 0.20\) for \(v_{w}\sim 1\) and \(c_{s,f}\sim 1/\sqrt{3}\), where the superscripts 'ss' and 'lat' denote the sound shell and lattice fits, respectively. This ratio is approximately independent of \(T_{*}\) and is similar for all BPs. The ratio of peak frequencies is \(f_{\rm sw}^{\rm ss}/f_{\rm sw}^{\rm lat}\approx 2.4\) for \(v_{w}\sim 1\) and \(c_{s,f}\sim 1/\sqrt{3}\) as in BP3, but increases to roughly 8.1 in BP1 where \(v_{\rm CJ}\approx 0.65\). The ratio of peak frequencies has a slight dependence on \(T_{*}\) due to our choice \(v_{w}=v_{\rm CJ}\), with \(v_{\rm CJ}\) implicitly depending on \(T_{*}\) through \(\alpha\). The large frequency ratio in BP1 and BP2 leads to a large difference in the SNR at LISA between the two sound wave fits. The choice \(v_{w}=v_{\rm CJ}\) results in a large separation in length scales -- \(L_{*}\) and \(L_{*}\Delta_{w}\) -- when \(v_{\rm CJ}\sim c_{s,f}\), which occurs when \(\alpha\ll 1\). Here, \(\Delta_{w}=(v_{w}-c_{s,f})/v_{w}\) is a multiplier for the sound shell thickness, and can be applied to either \(R_{\rm sep}\) or \(\bar{R}\). Next we compare the sound wave source to the turbulence source. In general, \(\Omega_{\rm turb}\) decreases faster than \(\Omega_{\rm sw}\) with decreasing \(T_{*}\) when using \(R_{\rm sep}\). This is because both amplitudes are proportional to the decreasing \(L_{*}\), but \(\Omega_{\rm sw}\) is proportional to the increasing \(K^{2}\) while \(\Omega_{\rm turb}\) is proportional to \(K^{3/2}\).
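A back-of-envelope check of the two scalings just quoted may be helpful; the sketch below is ours (not part of the original analysis), and assumes that the peak-frequency ratio of the two sound wave fits behaves like \(1/\Delta_{w}\) up to an \(O(1)\) factor, and that \(L_{*}\) is held fixed in the amplitude comparison:

```python
import numpy as np

c_s = 1 / np.sqrt(3)                       # c_{s,f} ~ 1/sqrt(3)

def Delta_w(v_w):                          # sound shell thickness multiplier
    return (v_w - c_s) / v_w

# Proxy for the peak-frequency ratio of the two sound wave fits: f_ss/f_lat ~ 1/Delta_w.
for v_w in (1.0, 0.65):                    # v_w ~ 1 (as in BP3) vs v_w = v_CJ ~ 0.65 (BP1)
    print(f"v_w = {v_w:4.2f}: 1/Delta_w = {1 / Delta_w(v_w):.1f}")
# -> about 2.4 and 9: the quoted 2.4, and the growth towards ~8 when v_CJ approaches c_s.

# At fixed L_*, Omega_sw ~ K^2 while Omega_turb ~ K^(3/2), so their ratio goes as K^(-1/2).
for K in (1e-4, 1e-2, 1.0):
    print(f"K = {K:.0e}: Omega_turb/Omega_sw scales as K^(-1/2) = {K ** -0.5:.0f} x const")
# As K grows with decreasing T_*, the relative weight of turbulence in the signal shrinks.
```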
Thus, the fractional contribution of turbulence to the total GW signal decreases with decreasing \(T_{*}\). However, when \(K\sim 1\), as in BP4 below \(T_{p}\), the scaling with \(K\) is equivalent between the two GW sources. The comparison of the two sources does not change when instead using \(\bar{R}\), although the amplitudes now monotonically increase with decreasing \(T_{*}\). There is little insight to gain when comparing the peak frequencies of the GW sources because they largely differ by a constant factor. The peak frequency for the turbulence contribution is between the peak frequencies of the two sound wave fits; it is larger than that of the lattice fit and smaller than that of the sound shell model fit. However, because the sound shell thickens with supercooling (at least when choosing \(v_{w}=v_{\rm CJ}\)), we find that the peak frequency of turbulence closely matches the peak frequency in the sound shell model in strongly supercooled scenarios. That said, the GW fits were obtained in weak and intermediate supercooling scenarios, so their use in scenarios with strong supercooling requires extrapolation and should be interpreted with care. Finally, one can compare the contribution to the SNR from the sound wave and turbulence sources. This information cannot be inferred from the results shown in fig. 1. Instead, we will discuss the turbulence contribution -- and the impact on the SNR when increasing it -- in the next section, where we consider variations of our best treatment.

### Variations of the treatment

We now discuss the impact of individual variations to our best treatment for GW prediction. These variations involve estimating \(R_{\rm sep}\) using \(\beta\) and \(\beta_{V}\), estimating \(K\) using other hydrodynamic quantities, and changing the efficiency coefficient for turbulence, as discussed in section III.2. The numerical results are stated in tables 2 to 4. We do not consider BP4 here because the phase transition does not complete; besides, the results should qualitatively match those of BP3. Note that studies typically do not vary from our best treatment by one small change. Usually many approximations are made for all thermal parameters used in GW predictions. Our investigation here does not encompass such treatments; instead we point the reader to Ref. [30], where low and high diligence treatments are compared. However, one cannot easily determine from their results the effects of individual variations to indicate whether an approximation is appropriate. First, we briefly discuss the impact of varying the transition temperature, which is otherwise treated in more detail in section VI.1. The two main parameters affecting the GW predictions are \(K\) and \(L_{*}\). We see that \(K\) changes by at most a factor of a few between \(T_{n}\) and \(T_{f}\) even in the strongly supercooled scenario, BP3.6 Yet the peak amplitudes and frequencies change by several orders of magnitude. This is because \(R_{\rm sep}\) changes by several orders of magnitude between \(T_{n}\) and \(T_{f}\). Whether the SNR is higher or lower for some choice of \(T_{*}\) depends on where the peak frequency lies with respect to LISA's peak sensitivity, \(f_{\rm LISA}\). Because of this, there is no consistent trend in the effect of \(T_{*}\) on the SNR across the BPs, even though there is a consistent trend in the peak amplitudes and frequencies.

Footnote 6: Evaluating the GW signal at \(T_{f}\) (defined by \(P_{f}(T_{f})=0.01\)) is not a standard treatment.
We show this variation to demonstrate the limiting behaviour of quantities near the end of the phase transition. Next, we find that using \(\beta(T_{p})\) to estimate \(R_{\rm sep}(T_{p})\) results in roughly a factor of two error in peak amplitudes and frequencies in BP1 and BP2. A similar error is present when using \(\beta_{V}\) to estimate \(R_{\rm sep}(T_{p})\) in BP3. However, it is common practice to evaluate \(\beta\) at \(T_{n}\) rather than at \(T_{p}\), which introduces a larger error, as seen in fig. 4(a).

Figure 1: SNR at LISA as a function of \(T_{*}\). From top to bottom: (a) BP1, (b) BP2, (c) BP3, (d) BP4. The vertical dashed lines correspond to key temperatures: \(T_{\Gamma}\) (magenta), \(T_{n}\) (red), \(T_{p}\) (green), \(T_{e}\) (blue) and \(T_{f}\) (black). Completion occurs at the left border of each plot, except for BP4 where there is no completion. The solid curves correspond to \(L_{*}=R_{\rm sep}\) and the dashed curves correspond to \(L_{*}=\bar{R}\).

Figure 2: Peak GW amplitude as a function of transition temperature. See the caption of fig. 1 for further details.

Figure 3: Peak GW frequency as a function of transition temperature. See the caption of fig. 1 for further details.

Yet using \(\beta(T_{n})\) is more appropriate than using \(R_{\rm sep}(T_{n})\) simply because the bubble number density changes faster than \(\beta\) between \(T_{n}\) and \(T_{p}\). We do not consider the variation \(L_{*}=\bar{R}\) here because GW fits are derived in terms of \(R_{\rm sep}\) rather than \(\bar{R}\). An appropriate mapping would need to be applied to use \(\bar{R}\) in the fits, such as multiplying \(L_{*}\) by an unknown constant factor. Varying the hydrodynamic quantity \(x\) in eq. (23) has a significant impact on the prediction of \(K\) in BP1 and BP2. The effect is considerably smaller in BP3. This can be understood as follows. The pressure difference \(\Delta p\) and energy density difference \(\Delta\rho\) are starkly different at high temperature, with \(\Delta p=0\) and \(\Delta\rho\neq 0\) at \(T_{c}\). We always have \(\alpha_{p}<\alpha_{\theta}<\alpha_{\rho}\) [25]. Using the pressure difference underestimates \(K\), while using the energy density difference overestimates \(K\). Our results match the findings of Refs. [25; 26]. With increased supercooling (i.e. at lower temperature), the energy density difference approaches the pressure difference such that \(\alpha_{p}\approx\alpha_{\rho}\), and \(c_{s,t}^{2}\approx 1/3\) such that \(\bar{\theta}\approx\theta\). Thus, for strong supercooling we find that all methods to estimate \(K\) lead to similar results, while significant discrepancies arise for weak and intermediate supercooling. Lastly, we consider the impact of varying the turbulence efficiency coefficient, \(\kappa_{\rm turb}\), through variation of \(\epsilon\) (see eq. (21)). Increasing \(\kappa_{\rm turb}\) can have a large impact on the SNR, particularly if the peak frequency of turbulence better matches the detector's sensitivity window than the peak frequency of sound waves does. The variations \(\epsilon_{3}\) and \(\epsilon_{4}\) increase the amplitude of the turbulence source by two orders of magnitude because \(\epsilon\) approaches unity, and \((1/0.05)^{3/2}\approx 90\). However, \(\epsilon_{3}\) predicts zero turbulence in BP3 because \(H(T_{*})\tau_{\rm sw}>1\). Increasing the turbulence contribution increases the SNR significantly in BP1 when using the sound shell model but has little effect when using the lattice fit for sound waves.
The effect is small in BP2, with up to a 50% increase in SNR when using the sound shell model. The effect is significant in BP3 when using either sound wave fit.

## VII Discussion

In this study we have investigated several ambiguities and approximations made in predictions of GWs from cosmological phase transitions. We considered each approximation in isolation to provide a clear indication of their individual effects on the GW signal. We recommend our results be used in conjunction with the results of Ref. [30] to determine whether a particular set of approximations can lead to reliable GW predictions. Alternatively, one could use our best treatment described in section III.2 if feasible, and even improve on it with a proper treatment of the hydrodynamic profile around bubble walls and a method for estimating friction on the bubble wall. To our knowledge, our investigation is the first to explicitly determine the effect of varying the transition temperature, \(T_{*}\).

Figure 4: Characteristic length scale as a function of transition temperature. From top to bottom: (a) BP2, (b) BP3. The qualitative features of BP1 and BP4 are respectively very similar to those of BP2 and BP3, although \(R_{\rm sep}\) and \(\bar{R}\) increase rapidly near \(T_{f}\) in BP4. The vertical dashed lines correspond to key temperatures: \(T_{\Gamma}\) (magenta), \(T_{n}\) (red), \(T_{p}\) (green), \(T_{e}\) (blue) and \(T_{f}\) (black). Completion occurs at the left border of each plot.

Figure 5: The peak amplitude and frequency of the GW signal for BP3 as a function of temperature. Here we show only the sound shell model for the sound wave source. The noise curve for LISA is shown in blue.

We note that our investigation is fundamentally different from studies that vary thermal parameters (including \(T_{*}\)) separately, treating them as independent quantities. We account for the implicit interdependence of all thermal parameters. The correct choice of the transition temperature is still unknown because the hydrodynamic simulations from which GW fits are obtained hold the temperature fixed. In fact, evaluating GW predictions at a single temperature may fall out of favour once modelling of GW production is improved further. We have demonstrated that using the current set of thermal parameters (in particular \(R_{\rm sep}\)), the GW signal can change by several orders of magnitude between commonly chosen transition temperatures: \(T_{n}\) and \(T_{p}\). If a more appropriate choice of transition temperature lies somewhere between \(T_{n}\) and \(T_{p}\), then new GW predictions would significantly differ from those obtained using the current best treatments, which use \(T_{*}=T_{p}\). We argued in section VI.1 that evaluating the GW signal at temperatures above \(T_{n}\) is not meaningful because bubble collisions would not have occurred to source GWs at that stage in the phase transition. This same reasoning can also be used to discard \(T_{n}\) as a reasonable transition temperature. The only case where the nucleation temperature reflects a time when collisions are occurring is in some strongly supercooled phase transitions -- where in extreme cases \(T_{n}\sim T_{p}\), counter-intuitively [28]. However, using \(T_{n}\) in strongly supercooled phase transitions is not recommended. For one, it decouples from the progress of the phase transition, so it does not represent a consistent stage in the phase transition.
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Variation & \(h^{2}\Omega_{\rm sw}^{\rm lat}\) & \(h^{2}\Omega_{\rm sw}^{\rm ss}\) & \(f_{\rm sw}^{\rm lat}\) & \(f_{\rm sw}^{\rm ss}\) & \(h^{2}\Omega_{\rm turb}\) & \(f_{\rm turb}\) & \({\rm SNR}_{\rm lat}\) & \({\rm SNR}_{\rm ss}\) & \(\alpha\) & \(\kappa\) & \(K\) \\ & (\(\times 10^{-17}\)) & (\(\times 10^{-18}\)) & (\(\times 10^{-5}\)) & (\(\times 10^{-4}\)) & (\(\times 10^{-20}\)) & (\(\times 10^{-5}\)) & (\(\times 10^{-5}\)) & (\(\times 10^{-7}\)) & (\(\times 10^{-3}\)) & (\(\times 10^{-2}\)) & (\(\times 10^{-4}\)) \\ \hline None & 22.57 & 31.49 & 1422 & 1157 & 21.28 & 3150 & 156.2 & 39.60 & 11.52 & 9.900 & 11.20 \\ \hline \(T_{*}=T_{e}\) & 13.97 & 19.50 & 1833 & 1490 & 12.90 & 4061 & 56.44 & 11.24 & 11.57 & 9.921 & 11.27 \\ \hline \(T_{*}=T_{f}\) & 11.10 & 15.50 & 2080 & 1685 & 10.16 & 4607 & 33.82 & 6.105 & 11.66 & 9.955 & 11.39 \\ \hline \(T_{*}=T_{n}\) & 147000 & 204300 & 2.611 & 2.187 & 5448000 & 5.785 & 10230 & 5026000 & 10.74 & 9.565 & 10.09 \\ \hline \(R_{\rm sep}(\beta)\) & 11.04 & 15.41 & 2062 & 1678 & 10.12 & 4567 & 34.32 & 6.216 & & & \\ \hline \(K(\alpha(\theta))\) & 21.09 & 29.44 & & & 19.92 & & 146.0 & 37.03 & 11.46 & 9.466 & 10.72 \\ \hline \(K(\alpha(p))\) & 1.403 & 1.957 & & & 1.489 & & 9.711 & 2.509 & 3.590 & 5.317 & 1.902 \\ \hline \(K(\alpha(\rho))\) & 261.9 & 365.5 & & & 234.7 & & 1813 & 456.2 & 35.05 & 16.39 & 55.50 \\ \hline \(\epsilon_{2}\) & & & & & 60.18 & & 156.4 & 54.06 & & & \\ \hline \(\epsilon_{3}\) & & & & & 1776 & & 166.0 & 1035 & & & \\ \hline \(\epsilon_{4}\) & & & & & 1787 & & 166.0 & 1041 & & & \\ \hline \end{tabular} \end{table} Table 2: GW predictions and hydrodynamic parameters for BP1. Each row corresponds to a different variation of our best treatment. Blank cells match the result of our best treatment (i.e. the top row). Frequencies are stated in units of GeV, with all other quantities being dimensionless. The superscripts 'ss' and 'lat' respectively denote the sound shell model fit and the lattice fit for the sound wave source of GWs.

Further, the existence of a nucleation temperature does not indicate whether a phase transition occurs or completes, as discussed in Ref. [28].
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Variation & \(h^{2}\Omega_{\rm sw}^{\rm lat}\) & \(h^{2}\Omega_{\rm sw}^{\rm ss}\) & \(f_{\rm sw}^{\rm lat}\) & \(f_{\rm sw}^{\rm ss}\) & \(h^{2}\Omega_{\rm turb}\) & \(f_{\rm turb}\) & \({\rm SNR}_{\rm lat}\) & \({\rm SNR}_{\rm ss}\) & \(\alpha\) & \(\kappa\) & \(K\) \\ & (\(\times 10^{-13}\)) & (\(\times 10^{-18}\)) & (\(\times 10^{-5}\)) & (\(\times 10^{-4}\)) & (\(\times 10^{-20}\)) & (\(\times 10^{-5}\)) & (\(\times 10^{-5}\)) & & (\(\times 10^{-2}\)) & & (\(\times 10^{-3}\)) \\ \hline None & 3.590 & 5.673 & 129.6 & 60.20 & 3.898 & 287.0 & 10.08 & 2.031 & 5.450 & 0.2074 & 10.64 \\ \hline \(T_{*}=T_{e}\) & 2.552 & 4.042 & 159.9 & 73.75 & 2.662 & 354.2 & 8.763 & 1.204 & 5.575 & 0.2096 & 10.99 \\ \hline \(T_{*}=T_{f}\) & 2.146 & 3.410 & 181.7 & 82.91 & 2.187 & 402.5 & 8.110 & 0.8892 & 5.771 & 0.2129 & 11.54 \\ \hline \(T_{*}=T_{n}\) & 676.5 & 1046 & 2.189 & 1.098 & 8968 & 4.849 & 1.310 & 5.142 & 4.297 & 0.1857 & 7.597 \\ \hline \(R_{\rm sep}(\beta)\) & 2.019 & 3.191 & 177.5 & 82.45 & 2.078 & 393.1 & 7.510 & 0.8449 & & & \\ \hline \(K(\alpha(\theta))\) & 3.372 & 5.329 & & & 3.676 & & 9.469 & 1.908 & 5.362 & 0.2011 & 10.23 \\ \hline \(K(\alpha(p))\) & 1.428 & 2.256 & & & 1.649 & & 4.010 & 0.8081 & 3.698 & 0.1682 & 5.997 \\ \hline \(K(\alpha(\rho))\) & 14.45 & 22.84 & & & 14.61 & & 40.59 & 8.172 & 10.35 & 0.2736 & 25.68 \\ \hline \(\epsilon_{2}\) & & & & & 11.03 & & 10.11 & 2.064 & & & \\ \hline \(\epsilon_{3}\) & & & & & 290.2 & & 11.21 & 3.406 & & & \\ \hline \(\epsilon_{4}\) & & & & & 301.7 & & 11.26 & 3.462 & & & \\ \hline \end{tabular} \end{table} Table 3: The same as table 2 but for BP2.

Thus, one must be careful when using \(T_{n}\), and ensure that the phase transition is in fact weakly supercooled. It is commonly assumed that the GW signal should be similar at \(T_{n}\) and \(T_{p}\) for weakly supercooled phase transitions. This is not consistent with our findings. Calculating the mean bubble separation properly (from the bubble number density) would suggest an orders-of-magnitude difference in the GW signal between \(T_{n}\) and \(T_{p}\). Using the mean bubble radius or \(\beta\) instead still suggests a factor of a few difference in the GW signal between \(T_{n}\) and \(T_{p}\). The hydrodynamic parameters like the kinetic energy fraction, however, are similar at the two temperatures. The mean bubble radius varies much more slowly with temperature than the mean bubble separation. Thus, studies evaluating GWs at \(T_{n}\) should use the mean bubble radius or \(\beta\) instead of calculating the mean bubble separation directly from the bubble number density. However, we note that if one could calculate the bubble number density, then one could calculate \(T_{p}\) and use the recommended treatment outlined in section III.2. In general, we find that variations of the treatment of GW predictions can lead to sizeable deviations in the SNR and peak amplitudes and frequencies; potentially deviations of many orders of magnitude. In the context of GW predictions from cosmological phase transitions, even a mild deviation is of the order of 10%, suggesting that constraints on particle physics models from GW observations will be hard to apply reliably at this stage.
Nevertheless, the recent emergence of successful GW astronomy offers hope for constraining particle physics models at scales beyond the reach of particle physics experiments.

## VIII Acknowledgements

LM thanks Thomas Konstandin for assistance with numerical accuracy in calculating \(\kappa_{\bar{\theta}}\). LM was supported by an Australian Government Research Training Program (RTP) Scholarship and a Monash Graduate Excellence Scholarship (MGES). The work of PA is supported by the National Natural Science Foundation of China (NNSFC) under grant No. 12150610460 and by the supporting fund for foreign experts grant wgxx2022021L. ZX is also supported in part by NNSFC grant No. 12150610460.

## Appendix A Correction to the kinetic energy fraction parameterisation

The kinetic energy fraction is often parameterised as \[K=\frac{\kappa\alpha}{1+\alpha}. \tag{24}\] This parameterisation introduces an approximation to the fundamental definition [22; 25; 9] \[K=\frac{\rho_{\rm kin}(T_{*})}{\rho_{\rm tot}(T_{*})}, \tag{25}\] where \(\rho_{\rm kin}\) is the fluid kinetic energy density. In the following we assume \(\rho\) and \(p\) are renormalised such that the ground state energy density vanishes. In this case, \(\rho_{\rm tot}=\rho_{f}\). The inexact nature of eq. (24) was demonstrated in appendix B.2 of Ref. [22] and implied in Ref. [25] (seen by comparing methods M2 and M3). A correction \(\delta\) can be applied such that [22] \[K=\frac{\kappa\alpha}{1+\alpha+\delta}. \tag{26}\]

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline Variation & \(h^{2}\Omega_{\rm sw}^{\rm lat}\) & \(h^{2}\Omega_{\rm sw}^{\rm ss}\) & \(f_{\rm sw}^{\rm lat}\) & \(f_{\rm sw}^{\rm ss}\) & \(h^{2}\Omega_{\rm turb}\) & \(f_{\rm turb}\) & \({\rm SNR}_{\rm lat}\) & \({\rm SNR}_{\rm ss}\) & \(\alpha\) & \(\kappa\) & \(K\) \\ & (\(\times 10^{-7}\)) & (\(\times 10^{-8}\)) & (\(\times 10^{-6}\)) & (\(\times 10^{-6}\)) & (\(\times 10^{-10}\)) & (\(\times 10^{-6}\)) & & & & & \\ \hline None & 1.861 & 3.748 & 9.345 & 23.48 & 6.348 & 20.70 & 249.6 & 307.7 & 1.651 & 0.7175 & 0.4536 \\ \hline \(T_{*}=T_{e}\) & 4.318 & 8.872 & 7.908 & 19.12 & 14.74 & 17.52 & 443.7 & 498.2 & 4.257 & 0.8422 & 0.6950 \\ \hline \(T_{*}=T_{f}\) & 17.04 & 35.42 & 4.111 & 9.722 & 81.84 & 9.106 & 864.5 & 876.4 & 71.06 & 0.9831 & 0.9803 \\ \hline \(R_{\rm sep}(\beta_{V})\) & 1.193 & 2.402 & 12.80 & 32.17 & 3.394 & 28.36 & 222.6 & 356.9 & & & \\ \hline \(K(\alpha(\theta))\) & 1.819 & 3.663 & & & 6.227 & & 244.9 & 301.5 & 1.605 & 0.7269 & 0.4478 \\ \hline \(K(\alpha(p))\) & 1.768 & 3.560 & & & 6.083 & & 239.2 & 294.2 & 1.564 & 0.7269 & 0.4409 \\ \hline \(K(\alpha(\rho))\) & 1.967 & 3.962 & & & 6.646 & & 261.4 & 323.0 & 1.728 & 0.7383 & 0.4677 \\ \hline \(\epsilon_{2}\) & & & & & 17.95 & & 700.0 & 742.2 & & & \\ \hline \(\epsilon_{3}\) & & & & & 0 & & 18.36 & 130.9 & & & \\ \hline \(\epsilon_{4}\) & & & & & 288.4 & & 11210 & 11230 & & & \\ \hline \end{tabular} \end{table} Table 4: The same as table 2 but for BP3. There is no row for \(T_{*}=T_{n}\) because there is no nucleation temperature for BP3. This time there is a row for \(R_{\rm sep}(\beta_{V})\) instead of \(R_{\rm sep}(\beta)\) because the bubble nucleation rate is Gaussian rather than exponential. In fact, \(\beta\) is negative and leads to invalid predictions.

One can solve for \(\delta\) by equating eq. (25) and eq. (26). If \(\alpha\) is calculated using the trace anomaly \[\theta=\frac{1}{4}(\rho-3p) \tag{27}\] as in Ref.
[22], one finds \[\delta=\frac{\theta_{t}}{3w_{f}}. \tag{28}\] If \(\alpha\) is calculated using the pseudotrace [25] \[\bar{\theta}=\frac{1}{4}\bigg{(}\rho-\frac{p}{c_{s,t}^{2}}\bigg{)}\,, \tag{29}\] which reduces to the trace anomaly if \(c_{s,t}^{2}=1/3\) (e.g. as in the bag model), one instead finds \[\delta=\frac{4}{3w_{f}}\big{(}\rho_{\rm tot}-\Delta\bar{\theta}\big{)}-1. \tag{30}\] In our benchmark points we find \(\delta\ll 1+\alpha\), such that the difference between eq. (24) and eq. (26) is at most 1%. Thus, we do not include such variations in the treatment of \(K\) in our results.
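As a quick numerical illustration of this last point (the sketch below is ours, with illustrative values of \(\kappa\), \(\alpha\) and \(\delta\) rather than model outputs), note that the relative difference between eq. (24) and eq. (26) is exactly \(\delta/(1+\alpha)\):

```python
def K_param(kappa, alpha):                # eq. (24)
    return kappa * alpha / (1 + alpha)

def K_corrected(kappa, alpha, delta):     # eq. (26)
    return kappa * alpha / (1 + alpha + delta)

kappa, alpha = 0.1, 0.05                  # illustrative values only
for delta in (1e-4, 1e-3, 1e-2):          # regime delta << 1 + alpha
    rel = abs(K_param(kappa, alpha) - K_corrected(kappa, alpha, delta)) \
          / K_corrected(kappa, alpha, delta)
    print(f"delta = {delta:.0e}: relative difference = {rel:.3%} "
          f"(= delta/(1+alpha) = {delta / (1 + alpha):.3%})")
# The two parameterisations agree to better than 1% whenever delta << 1 + alpha.
```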
2309.15667
Uniform Poincaré inequalities for the Discrete de Rham complex on general domains
In this paper we prove Poincar\'e inequalities for the Discrete de Rham (DDR) sequence on a general connected polyhedral domain $\Omega$ of $\mathbb{R}^3$. We unify the ideas behind the inequalities for all three operators in the sequence, deriving new proofs for the Poincar\'e inequalities for the gradient and the divergence, and extending the available Poincar\'e inequality for the curl to domains with arbitrary second Betti numbers. A key preliminary step consists in deriving "mimetic" Poincar\'e inequalities giving the existence and stability of the solutions to topological balance problems useful in general discrete geometric settings. As an example of application, we study the stability of a novel DDR scheme for the magnetostatics problem on domains with general topology.
Daniele A. Di Pietro, Marien-Lorenzo Hanot
2023-09-27T14:05:11Z
http://arxiv.org/abs/2309.15667v1
# Uniform Poincare inequalities for the Discrete de Rham complex on general domains

###### Abstract

In this paper we prove Poincare inequalities for the Discrete de Rham (DDR) sequence on a general connected polyhedral domain \(\Omega\) of \(\mathbb{R}^{3}\). We unify the ideas behind the inequalities for all three operators in the sequence, deriving new proofs for the Poincare inequalities for the gradient and the divergence, and extending the available Poincare inequality for the curl to domains with arbitrary second Betti numbers. A key preliminary step consists in deriving "mimetic" Poincare inequalities giving the existence and stability of the solutions to topological balance problems useful in general discrete geometric settings. As an example of application, we study the stability of a novel DDR scheme for the magnetostatics problem on domains with general topology.

**Key words.** Discrete de Rham complex, polytopal methods, Poincare inequalities

**MSC2020.** 65N30, 65N99, 14F40

## 1 Introduction

Poincare inequalities are a key tool to prove the well-posedness of many common partial differential equation problems. Mimicking them at the discrete level is typically required for the stability of numerical approximations. Poincare inequalities for conforming Finite Element de Rham complexes can be derived through bounded cochain projections as described, e.g., in [2, Chapter 5]; see also [14] for a recent generalisation. In the context of Virtual Element de Rham complexes [4], similar results typically hinge on non-trivial norm comparison results, examples of which can be found in [5]. Discrete Poincare-type inequalities in the context of the (non-compatible) Hybrid High-Order methods have been derived, e.g., in [18] (gradient), [10] (symmetric gradient) and [13, 21] (curl). The focus of the present work is on the derivation of Poincare inequalities for the Discrete de Rham (DDR) sequence of [16] on domains with general topology. Unlike Finite and Virtual Elements, DDR formulations are fully discrete, with spaces spanned by vectors of polynomials and continuous vector calculus operators replaced by discrete counterparts. Discrete Poincare inequalities thus require bounding \(L^{2}\)-like norms of vectors of polynomials by \(L^{2}\)-like norms of suitable discrete operators applied to them. To establish such bounds, we take inspiration from [19], where it was noticed that the topological information is fully contained in the lowest-order DDR subsequence, and from [17], where a Poincare inequality for the curl on topologically trivial domains of \(\mathbb{R}^{3}\) was derived. The lowest-order DDR sequence is strongly linked to Mimetic Finite Differences and related methods [6, 8, 9, 11, 12, 15]. The first step to prove discrete Poincare inequalities in DDR spaces is thus precisely to establish the mimetic counterparts stated in Theorems 4, 6, and 7 below. Their proofs require working at the global level, with conditions accounting for the topology of the domain appearing for the curl. The discrete Poincare inequalities for arbitrary-order DDR spaces collected in Section 2.7 below are then obtained by combining the mimetic Poincare inequalities with local estimates of the higher-order components. We next briefly discuss the links between the present work and previous results for DDR methods. Fully general Poincare inequalities for the gradient and the divergence had already been obtained, respectively, in [16, Theorem 3] and [17] using different techniques.
The main novelty of the proofs provided here is that they are better suited to generalisations in the framework of the Polytopal Exterior Calculus recently introduced in [7]. A Poincare inequality for the curl on topologically trivial domains had been obtained in [17, Theorem 20]. The main novelty with respect to this result consists in the extension to domains encapsulating voids. An additional interest of the material in this paper is that it contains preliminary results to establish discrete Poincare inequalities for advanced complexes, such as the three-dimensional discrete div-div complex recently introduced in [20]. The rest of the paper is organized as follows. The definitions of the relevant DDR spaces and operators are briefly recalled in Section 2. Mimetic Poincare inequalities are derived in Section 3, and then used to prove discrete Poincare inequalities for the DDR complex in Section 4. The latter are used in Section 5 to carry out the stability analysis of a DDR scheme for the magnetostatics problem on domains with general topology. Some arguments in the proofs of mimetic Poincare inequalities rely on specific shape functions for Finite Element spaces on a submesh, whose definitions and properties are summarised in Appendix A.

## 2 Discrete de Rham construction

### Domain and mesh

Let \(\Omega\subset\mathbb{R}^{3}\) denote a connected polyhedral domain. We consider a polyhedral mesh \(\mathcal{M}_{h}\coloneqq\mathcal{T}_{h}\cup\mathcal{F}_{h}\cup\mathcal{E}_{h}\cup\mathcal{V}_{h}\), where \(\mathcal{T}_{h}\) gathers the elements, \(\mathcal{F}_{h}\) the faces, \(\mathcal{E}_{h}\) the edges, and \(\mathcal{V}_{h}\) the vertices. For all \(Y\in\mathcal{M}_{h}\), we denote by \(h_{Y}\) its diameter and set \(h\coloneqq\max_{T\in\mathcal{T}_{h}}h_{T}\). For each face \(F\in\mathcal{F}_{h}\), we fix a unit normal \(\boldsymbol{n}_{F}\) to \(F\) and, for each edge \(E\in\mathcal{E}_{h}\), a unit tangent \(\boldsymbol{t}_{E}\). For \(T\in\mathcal{T}_{h}\), \(\mathcal{F}_{T}\) gathers the faces on the boundary \(\partial T\) of \(T\) and \(\mathcal{E}_{T}\) the edges in \(\partial T\); if \(F\in\mathcal{F}_{h}\), \(\mathcal{E}_{F}\) is the set of edges contained in the boundary \(\partial F\) of \(F\). For \(F\in\mathcal{F}_{T}\), \(\omega_{TF}\in\{-1,+1\}\) is such that \(\omega_{TF}\boldsymbol{n}_{F}\) is the outer normal on \(F\) to \(T\). Each face \(F\in\mathcal{F}_{h}\) is oriented counter-clockwise with respect to \(\boldsymbol{n}_{F}\) and, for \(E\in\mathcal{E}_{F}\), we let \(\omega_{FE}\in\{-1,+1\}\) be such that \(\omega_{FE}=+1\) if \(\boldsymbol{t}_{E}\) points along the boundary \(\partial F\) of \(F\) in the clockwise sense, and \(\omega_{FE}=-1\) otherwise; we also denote by \(\boldsymbol{n}_{FE}\) the unit normal vector to \(E\), in the plane spanned by \(F\), such that \(\omega_{FE}\boldsymbol{n}_{FE}\) points outside \(F\). We denote by \(\boldsymbol{\mathrm{grad}}_{F}\) and \(\mathrm{div}_{F}\) the tangent gradient and divergence operators acting on smooth enough functions. Moreover, for any \(r:F\to\mathbb{R}\) and \(\boldsymbol{z}:F\to\mathbb{R}^{2}\) smooth enough, we let \(\boldsymbol{\mathrm{rot}}_{F}\,r:=(\boldsymbol{\mathrm{grad}}_{F}r)^{\perp}\) and \(\mathrm{rot}_{F}\,\boldsymbol{z}=\mathrm{div}_{F}(\boldsymbol{z}^{\perp})\), with \(\perp\) denoting the rotation of angle \(-\frac{\pi}{2}\) in the oriented tangent space to \(F\).
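Since these sign and rotation conventions are easy to get wrong, here is a minimal symbolic check (ours, not part of the paper) that they are mutually consistent, i.e. that the scalar rotor of a tangent gradient vanishes identically:

```python
import sympy as sp

x, y = sp.symbols("x y")
r = sp.Function("r")(x, y)                 # generic smooth scalar field on a face F

def perp(z):                               # rotation of angle -pi/2: (z1, z2) |-> (z2, -z1)
    return sp.Matrix([z[1], -z[0]])

grad_F = lambda s: sp.Matrix([sp.diff(s, x), sp.diff(s, y)])
div_F = lambda z: sp.diff(z[0], x) + sp.diff(z[1], y)
vrot_F = lambda s: perp(grad_F(s))         # vector rot of a scalar: rot_F r = (grad_F r)^perp
srot_F = lambda z: div_F(perp(z))          # scalar rot of a vector: rot_F z = div_F(z^perp)

# Complex property behind the constructions below: rot_F(grad_F r) = 0,
# equivalently div_F(rot_F r) = 0, for any smooth r.
assert sp.simplify(srot_F(grad_F(r))) == 0
assert sp.simplify(div_F(vrot_F(r))) == 0
print("2D rotation/divergence conventions are consistent")
```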
We further assume that \((\mathcal{T}_{h},\mathcal{F}_{h})\) belongs to a regular mesh sequence in the sense of [18, Definition 1.9], with mesh regularity parameter \(\varrho>0\). This implies that, for each \(Y\in\mathcal{T}_{h}\cup\mathcal{F}_{h}\cup\mathcal{E}_{h}\), there exists a point \(\boldsymbol{x}_{Y}\in Y\) such that the ball centered at \(\boldsymbol{x}_{Y}\) and of radius \(\varrho h_{Y}\) is contained in \(Y\). Throughout the paper, \(a\lesssim b\) (resp., \(a\gtrsim b\)) stands for \(a\leq Cb\) (resp., \(a\geq Cb\)) with \(C\) depending only on \(\Omega\), the mesh regularity parameter and, when polynomial functions are involved, the corresponding polynomial degree. We also write \(a\simeq b\) when both \(a\lesssim b\) and \(b\lesssim a\) hold.

### Polynomial spaces and \(L^{2}\)-orthogonal projectors

For any \(Y\in\mathcal{M}_{h}\) and any integer \(\ell\geq 0\), we denote by \(\mathcal{P}^{\ell}(Y)\) the space spanned by the restrictions to \(Y\) of polynomial functions of the space variables of total degree \(\leq\ell\). For \(Y\in\mathcal{T}_{h}\cup\mathcal{F}_{h}\), we let \(\boldsymbol{\mathcal{P}}^{\ell}(Y)\coloneqq\mathcal{P}^{\ell}(Y)^{n}\), with \(n\) denoting the dimension of \(Y\). We have the following direct decompositions: For all \(F\in\mathcal{F}_{h}\),

\[\boldsymbol{\mathcal{P}}^{\ell}(F)=\boldsymbol{\mathcal{R}}^{\ell}(F)\oplus\boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(F)\text{ with }\boldsymbol{\mathcal{R}}^{\ell}(F)\coloneqq\boldsymbol{\mathrm{rot}}_{F}\,\mathcal{P}^{\ell+1}(F)\text{ and }\boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(F)\coloneqq(\boldsymbol{x}-\boldsymbol{x}_{F})\mathcal{P}^{\ell-1}(F)\]

and, for all \(T\in\mathcal{T}_{h}\),

\[\begin{split}\boldsymbol{\mathcal{P}}^{\ell}(T)&=\boldsymbol{\mathcal{G}}^{\ell}(T)\oplus\boldsymbol{\mathcal{G}}^{\mathrm{c},\ell}(T)\ \text{ with }\boldsymbol{\mathcal{G}}^{\ell}(T)\coloneqq\boldsymbol{\mathrm{grad}}\,\mathcal{P}^{\ell+1}(T)\ \text{and}\ \boldsymbol{\mathcal{G}}^{\mathrm{c},\ell}(T)\coloneqq(\boldsymbol{x}-\boldsymbol{x}_{T})\times\boldsymbol{\mathcal{P}}^{\ell-1}(T)\\&=\boldsymbol{\mathcal{R}}^{\ell}(T)\oplus\boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(T)\ \text{ with }\boldsymbol{\mathcal{R}}^{\ell}(T)\coloneqq\boldsymbol{\mathrm{curl}}\,\boldsymbol{\mathcal{P}}^{\ell+1}(T)\ \text{and}\ \boldsymbol{\mathcal{R}}^{\mathrm{c},\ell}(T)\coloneqq(\boldsymbol{x}-\boldsymbol{x}_{T})\mathcal{P}^{\ell-1}(T).\end{split}\]

We extend the above notations to negative exponents \(\ell\) by setting all the spaces appearing in the decompositions equal to the trivial vector space \(\{\boldsymbol{0}\}\). Given a polynomial (sub)space \(\mathcal{X}^{\ell}(Y)\) on \(Y\in\mathcal{M}_{h}\), the corresponding \(L^{2}\)-orthogonal projector is denoted by \(\pi^{\ell}_{\mathcal{X},Y}\). Boldface font will be used when the elements of \(\mathcal{X}^{\ell}(Y)\) are vector-valued, and \(\pi^{\mathrm{c},\ell}_{\mathcal{X},Y}\) will denote the \(L^{2}\)-orthogonal projector on \(\mathcal{X}^{\mathrm{c},\ell}(Y)\).
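These direct decompositions can be verified by a dimension count. The short script below (our own sanity check, not from the paper) uses \(\dim\mathcal{P}^{\ell}=\binom{\ell+d}{d}\) in \(d\) variables, together with the facts that \(\boldsymbol{\mathrm{grad}}\) and \(\boldsymbol{\mathrm{rot}}_{F}\) have exactly the constants as kernel, that the kernel of \(\boldsymbol{\mathrm{curl}}\) on \(\boldsymbol{\mathcal{P}}^{\ell+1}(T)\) is \(\boldsymbol{\mathcal{G}}^{\ell+1}(T)\), that multiplication by \((\boldsymbol{x}-\boldsymbol{x}_{F})\) is injective, and that the kernel of \(\boldsymbol{v}\mapsto(\boldsymbol{x}-\boldsymbol{x}_{T})\times\boldsymbol{v}\) on \(\boldsymbol{\mathcal{P}}^{\ell-1}(T)\) is \((\boldsymbol{x}-\boldsymbol{x}_{T})\mathcal{P}^{\ell-2}(T)\):

```python
from math import comb

def dimP(l, d):                      # dim P^l in d variables; trivial space for l < 0
    return comb(l + d, d) if l >= 0 else 0

for l in range(0, 6):
    # Face (d = 2): R^l(F) = rot_F P^{l+1}(F) and R^{c,l}(F) = (x - x_F) P^{l-1}(F).
    dim_R_F = dimP(l + 1, 2) - 1     # rot_F kills exactly the constants
    dim_Rc_F = dimP(l - 1, 2)        # multiplication by (x - x_F) is injective
    assert dim_R_F + dim_Rc_F == 2 * dimP(l, 2)

    # Element (d = 3): G^l(T) = grad P^{l+1}(T) and G^{c,l}(T) = (x - x_T) x P^{l-1}(T).
    dim_G_T = dimP(l + 1, 3) - 1     # grad kills exactly the constants
    dim_Gc_T = 3 * dimP(l - 1, 3) - dimP(l - 2, 3)  # kernel of the cross product map
    assert dim_G_T + dim_Gc_T == 3 * dimP(l, 3)

    # Element: R^l(T) = curl P^{l+1}(T) and R^{c,l}(T) = (x - x_T) P^{l-1}(T).
    dim_R_T = 3 * dimP(l + 1, 3) - (dimP(l + 2, 3) - 1)  # ker curl = grad P^{l+2}
    dim_Rc_T = dimP(l - 1, 3)
    assert dim_R_T + dim_Rc_T == 3 * dimP(l, 3)

print("direct decompositions: dimensions add up for l = 0..5")
```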
### DDR spaces

The discrete counterparts of the spaces appearing in the continuous de Rham complex are defined as follows:

\[\underline{X}^{k}_{\mathbf{grad},h}\coloneqq\big\{\underline{q}_{h}=\big((q_{T})_{T\in\mathcal{T}_{h}},(q_{F})_{F\in\mathcal{F}_{h}},(q_{E})_{E\in\mathcal{E}_{h}},(q_{V})_{V\in\mathcal{V}_{h}}\big)\,:\,q_{T}\in\mathcal{P}^{k-1}(T)\text{ for all }T\in\mathcal{T}_{h},\ q_{F}\in\mathcal{P}^{k-1}(F)\text{ for all }F\in\mathcal{F}_{h},\ q_{E}\in\mathcal{P}^{k-1}(E)\text{ for all }E\in\mathcal{E}_{h},\text{ and }q_{V}\in\mathbb{R}\text{ for all }V\in\mathcal{V}_{h}\big\},\]

\[\underline{X}^{k}_{\mathbf{curl},h}\coloneqq\big\{\underline{\boldsymbol{v}}_{h}=\big((\boldsymbol{v}_{\mathcal{R},T},\boldsymbol{v}^{\mathrm{c}}_{\mathcal{R},T})_{T\in\mathcal{T}_{h}},(\boldsymbol{v}_{\mathcal{R},F},\boldsymbol{v}^{\mathrm{c}}_{\mathcal{R},F})_{F\in\mathcal{F}_{h}},(v_{E})_{E\in\mathcal{E}_{h}}\big)\,:\,\boldsymbol{v}_{\mathcal{R},T}\in\boldsymbol{\mathcal{R}}^{k-1}(T)\text{ and }\boldsymbol{v}^{\mathrm{c}}_{\mathcal{R},T}\in\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(T)\text{ for all }T\in\mathcal{T}_{h},\ \boldsymbol{v}_{\mathcal{R},F}\in\boldsymbol{\mathcal{R}}^{k-1}(F)\text{ and }\boldsymbol{v}^{\mathrm{c}}_{\mathcal{R},F}\in\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F)\text{ for all }F\in\mathcal{F}_{h},\text{ and }v_{E}\in\mathcal{P}^{k}(E)\text{ for all }E\in\mathcal{E}_{h}\big\},\]

\[\underline{X}^{k}_{\mathrm{div},h}\coloneqq\big\{\underline{\boldsymbol{w}}_{h}=\big((\boldsymbol{w}_{\mathcal{G},T},\boldsymbol{w}^{\mathrm{c}}_{\mathcal{G},T})_{T\in\mathcal{T}_{h}},(w_{F})_{F\in\mathcal{F}_{h}}\big)\,:\,\boldsymbol{w}_{\mathcal{G},T}\in\boldsymbol{\mathcal{G}}^{k-1}(T)\text{ and }\boldsymbol{w}^{\mathrm{c}}_{\mathcal{G},T}\in\boldsymbol{\mathcal{G}}^{\mathrm{c},k}(T)\text{ for all }T\in\mathcal{T}_{h},\text{ and }w_{F}\in\mathcal{P}^{k}(F)\text{ for all }F\in\mathcal{F}_{h}\big\},\]

and

\[\mathcal{P}^{k}(\mathcal{T}_{h})\coloneqq\big\{q_{h}\in L^{2}(\Omega)\,:\,(q_{h})_{|T}\in\mathcal{P}^{k}(T)\text{ for all }T\in\mathcal{T}_{h}\big\}.\]

### Local vector calculus operators and potentials

#### 2.4.1 Gradient

For any \(E\in\mathcal{E}_{h}\), the edge gradient \(G^{k}_{E}:\underline{X}^{k}_{\mathbf{grad},E}\to\mathcal{P}^{k}(E)\) is such that, for all \(\underline{q}_{E}\in\underline{X}^{k}_{\mathbf{grad},E}\),

\[\int_{E}G^{k}_{E}\underline{q}_{E}\,r=-\int_{E}q_{E}\,r^{\prime}+\llbracket q_{V}\,r\rrbracket_{E}\qquad\forall r\in\mathcal{P}^{k}(E), \tag{1}\]

with the derivative taken in the direction of \(\boldsymbol{t}_{E}\) and with \(\llbracket\cdot\rrbracket_{E}\) denoting the difference between vertex values on an edge: for any function \(\phi\in C^{0}(\overline{E})\) and any family \(\{w_{V_{1}},w_{V_{2}}\}\) of vertex values, with the vertices numbered so that \(\boldsymbol{t}_{E}\) points from \(V_{1}\) to \(V_{2}\),

\[\llbracket w_{V}\,\phi\rrbracket_{E}\coloneqq w_{V_{2}}\phi(\boldsymbol{x}_{V_{2}})-w_{V_{1}}\phi(\boldsymbol{x}_{V_{1}}).\]

For any \(F\in\mathcal{F}_{h}\), the face gradient \(\boldsymbol{G}_{F}^{k}:\underline{X}_{\mathbf{grad},F}^{k}\to\boldsymbol{\mathcal{P}}^{k}(F)\) and the scalar trace \(\gamma_{F}^{k+1}:\underline{X}_{\mathbf{grad},F}^{k}\to\mathcal{P}^{k+1}(F)\) are such that, for all \(\underline{q}_{F}\in\underline{X}_{\mathbf{grad},F}^{k}\),
\[\begin{split}\int_{F}\boldsymbol{G}_{F}^{k}\underline{q}_{F}\cdot\boldsymbol{v}&=-\int_{F}q_{F}\operatorname{div}_{F}\boldsymbol{v}+\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_{E}q_{E}\,(\boldsymbol{v}\cdot\boldsymbol{n}_{FE})\qquad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{P}}^{k}(F),\\ \int_{F}\gamma_{F}^{k+1}\underline{q}_{F}\operatorname{div}_{F}\boldsymbol{v}&=-\int_{F}\boldsymbol{G}_{F}^{k}\underline{q}_{F}\cdot\boldsymbol{v}+\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_{E}q_{E}\,(\boldsymbol{v}\cdot\boldsymbol{n}_{FE})\qquad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{R}}^{\mathrm{c},k+2}(F).\end{split} \tag{2}\]

Similarly, for all \(T\in\mathcal{T}_{h}\), the element gradient \(\boldsymbol{G}_{T}^{k}:\underline{X}_{\mathbf{grad},T}^{k}\to\boldsymbol{\mathcal{P}}^{k}(T)\) is defined such that, for all \(\underline{q}_{T}\in\underline{X}_{\mathbf{grad},T}^{k}\),

\[\int_{T}\boldsymbol{G}_{T}^{k}\underline{q}_{T}\cdot\boldsymbol{v}=-\int_{T}q_{T}\operatorname{div}\boldsymbol{v}+\sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_{F}\gamma_{F}^{k+1}\underline{q}_{F}\,(\boldsymbol{v}\cdot\boldsymbol{n}_{F})\qquad\forall\boldsymbol{v}\in\boldsymbol{\mathcal{P}}^{k}(T). \tag{3}\]

#### 2.4.2 Curl

For all \(F\in\mathcal{F}_{h}\), the face curl \(C_{F}^{k}:\underline{X}_{\mathbf{curl},F}^{k}\to\mathcal{P}^{k}(F)\) and the tangential trace \(\boldsymbol{\gamma}_{\mathrm{t},F}^{k}:\underline{X}_{\mathbf{curl},F}^{k}\to\boldsymbol{\mathcal{P}}^{k}(F)\) are such that, for all \(\underline{\boldsymbol{v}}_{F}\in\underline{X}_{\mathbf{curl},F}^{k}\),

\[\int_{F}C_{F}^{k}\underline{\boldsymbol{v}}_{F}\,r=\int_{F}\boldsymbol{v}_{\mathcal{R},F}\cdot\boldsymbol{\mathrm{rot}}_{F}\,r-\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_{E}v_{E}\,r\qquad\forall r\in\mathcal{P}^{k}(F) \tag{4}\]

and, for all \((r,\boldsymbol{w})\in\mathcal{P}_{0}^{k+1}(F)\times\boldsymbol{\mathcal{R}}^{\mathrm{c},k}(F)\),

\[\int_{F}\boldsymbol{\gamma}_{\mathrm{t},F}^{k}\underline{\boldsymbol{v}}_{F}\cdot(\boldsymbol{\mathrm{rot}}_{F}\,r+\boldsymbol{w})=\int_{F}C_{F}^{k}\underline{\boldsymbol{v}}_{F}\,r+\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_{E}v_{E}\,r+\int_{F}\boldsymbol{v}_{\mathcal{R},F}^{\mathrm{c}}\cdot\boldsymbol{w}.\]

For all \(T\in\mathcal{T}_{h}\), the element curl \(\boldsymbol{C}_{T}^{k}:\underline{X}_{\mathbf{curl},T}^{k}\to\boldsymbol{\mathcal{P}}^{k}(T)\) is defined such that, for all \(\underline{\boldsymbol{v}}_{T}\in\underline{X}_{\mathbf{curl},T}^{k}\),

\[\int_{T}\boldsymbol{C}_{T}^{k}\underline{\boldsymbol{v}}_{T}\cdot\boldsymbol{w}=\int_{T}\boldsymbol{v}_{\mathcal{R},T}\cdot\boldsymbol{\mathrm{curl}}\,\boldsymbol{w}+\sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_{F}\boldsymbol{\gamma}_{\mathrm{t},F}^{k}\underline{\boldsymbol{v}}_{F}\cdot(\boldsymbol{w}\times\boldsymbol{n}_{F})\qquad\forall\boldsymbol{w}\in\boldsymbol{\mathcal{P}}^{k}(T). \tag{5}\]

#### 2.4.3 Divergence

For all \(T\in\mathcal{T}_{h}\), the element divergence \(D_{T}^{k}:\underline{X}_{\mathrm{div},T}^{k}\to\mathcal{P}^{k}(T)\) is defined by: For all \(\underline{\boldsymbol{w}}_{T}\in\underline{X}_{\mathrm{div},T}^{k}\),

\[\int_{T}D_{T}^{k}\underline{\boldsymbol{w}}_{T}\,q=-\int_{T}\boldsymbol{w}_{\mathcal{G},T}\cdot\boldsymbol{\mathrm{grad}}\,q+\sum_{F\in\mathcal{F}_{T}}\omega_{TF}\int_{F}w_{F}\,q\qquad\forall q\in\mathcal{P}^{k}(T). \tag{6}\]
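To see these fully discrete definitions at work, the following sympy sketch (ours, not part of the paper) computes the edge gradient defined by (1) for \(k=1\) on the unit edge oriented by increasing \(x\): the unknowns are the coefficients of \(G^{1}_{E}\underline{q}_{E}=a+bx\in\mathcal{P}^{1}(E)\), and the data are the constant edge component \(q_{E}=c\in\mathcal{P}^{0}(E)\) and the vertex values \(q_{V_{1}},q_{V_{2}}\). Feeding it the interpolate of \(x^{2}\) returns the exact derivative \(2x\), illustrating the polynomial consistency of the reconstruction:

```python
import sympy as sp

x = sp.symbols("x")
a, b = sp.symbols("a b")                 # coefficients of G = a + b*x in P^1(E)
qV1, qV2, c = sp.symbols("qV1 qV2 c")    # vertex values and constant edge component

G = a + b * x
eqs = []
for r in (sp.Integer(1), x):             # test functions spanning P^1(E)
    lhs = sp.integrate(G * r, (x, 0, 1))
    rhs = -sp.integrate(c * sp.diff(r, x), (x, 0, 1)) \
          + qV2 * r.subs(x, 1) - qV1 * r.subs(x, 0)   # jump term [[q_V r]]_E
    eqs.append(sp.Eq(lhs, rhs))

sol = sp.solve(eqs, (a, b))
G_E = sp.expand(sol[a] + sol[b] * x)
print("G_E =", G_E)

# Consistency check: interpolate of phi(x) = x^2 (vertex values 0 and 1, mean 1/3).
print(sp.expand(G_E.subs({qV1: 0, qV2: 1, c: sp.Rational(1, 3)})))  # -> 2*x
```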
### DDR complex

The DDR complex reads:

\[\begin{CD} 0 @>>> \underline{X}^{k}_{\mathbf{grad},h} @>{\underline{G}^{k}_{h}}>> \underline{X}^{k}_{\mathbf{curl},h} @>{\underline{C}^{k}_{h}}>> \underline{X}^{k}_{\mathrm{div},h} @>{D^{k}_{h}}>> \mathcal{P}^{k}(\mathcal{T}_{h}) @>>> 0, \end{CD}\]

where, for all \((\underline{q}_{h},\underline{\boldsymbol{v}}_{h},\underline{\boldsymbol{w}}_{h})\in\underline{X}^{k}_{\mathbf{grad},h}\times\underline{X}^{k}_{\mathbf{curl},h}\times\underline{X}^{k}_{\mathrm{div},h}\),

\[\underline{G}_{h}^{k}\underline{q}_{h}\coloneqq\big((\boldsymbol{\pi}_{\mathcal{R},T}^{k-1}\boldsymbol{G}_{T}^{k}\underline{q}_{T},\boldsymbol{\pi}_{\mathcal{R},T}^{\mathrm{c},k}\boldsymbol{G}_{T}^{k}\underline{q}_{T})_{T\in\mathcal{T}_{h}},(\boldsymbol{\pi}_{\mathcal{R},F}^{k-1}\boldsymbol{G}_{F}^{k}\underline{q}_{F},\boldsymbol{\pi}_{\mathcal{R},F}^{\mathrm{c},k}\boldsymbol{G}_{F}^{k}\underline{q}_{F})_{F\in\mathcal{F}_{h}},(G_{E}^{k}\underline{q}_{E})_{E\in\mathcal{E}_{h}}\big), \tag{7}\]

\[\underline{C}_{h}^{k}\underline{\boldsymbol{v}}_{h}\coloneqq\big((\boldsymbol{\pi}_{\mathcal{G},T}^{k-1}\boldsymbol{C}_{T}^{k}\underline{\boldsymbol{v}}_{T},\boldsymbol{\pi}_{\mathcal{G},T}^{\mathrm{c},k}\boldsymbol{C}_{T}^{k}\underline{\boldsymbol{v}}_{T})_{T\in\mathcal{T}_{h}},(C_{F}^{k}\underline{\boldsymbol{v}}_{F})_{F\in\mathcal{F}_{h}}\big), \tag{8}\]

\[(D_{h}^{k}\underline{\boldsymbol{w}}_{h})_{|T}\coloneqq D_{T}^{k}\underline{\boldsymbol{w}}_{T}\qquad\forall T\in\mathcal{T}_{h}.\]

### Component norms

We endow the discrete spaces defined in Section 2.3 with the following \(L^{2}\)-like norms: For all \((\underline{q}_{h},\underline{\boldsymbol{v}}_{h},\underline{\boldsymbol{w}}_{h})\in\underline{X}^{k}_{\mathbf{grad},h}\times\underline{X}^{k}_{\mathbf{curl},h}\times\underline{X}^{k}_{\mathrm{div},h}\),

\[\begin{split}&\|\underline{q}_{h}\|_{\mathbf{grad},h}^{2}\coloneqq\sum_{T\in\mathcal{T}_{h}}\|\underline{q}_{T}\|_{\mathbf{grad},T}^{2}\text{ with }\\&\|\underline{q}_{T}\|_{\mathbf{grad},T}^{2}\coloneqq\|q_{T}\|_{L^{2}(T)}^{2}+h_{T}\sum_{F\in\mathcal{F}_{T}}\|\underline{q}_{F}\|_{\mathbf{grad},F}^{2}\qquad\forall T\in\mathcal{T}_{h},\\&\|\underline{q}_{F}\|_{\mathbf{grad},F}^{2}\coloneqq\|q_{F}\|_{L^{2}(F)}^{2}+h_{F}\sum_{E\in\mathcal{E}_{F}}\|\underline{q}_{E}\|_{\mathbf{grad},E}^{2}\qquad\forall F\in\mathcal{F}_{h},\\&\|\underline{q}_{E}\|_{\mathbf{grad},E}^{2}\coloneqq\|q_{E}\|_{L^{2}(E)}^{2}+h_{E}\sum_{V\in\mathcal{V}_{E}}|q_{V}|^{2}\qquad\forall E\in\mathcal{E}_{h},\end{split}\]

\[\begin{split}&\|\underline{\boldsymbol{v}}_{h}\|_{\mathbf{curl},h}^{2}\coloneqq\sum_{T\in\mathcal{T}_{h}}\|\underline{\boldsymbol{v}}_{T}\|_{\mathbf{curl},T}^{2}\text{ with }\\&\|\underline{\boldsymbol{v}}_{T}\|_{\mathbf{curl},T}^{2}\coloneqq\|\boldsymbol{v}_{\mathcal{R},T}\|_{L^{2}(T;\mathbb{R}^{3})}^{2}+\|\boldsymbol{v}^{\mathrm{c}}_{\mathcal{R},T}\|_{L^{2}(T;\mathbb{R}^{3})}^{2}+h_{T}\sum_{F\in\mathcal{F}_{T}}\|\underline{\boldsymbol{v}}_{F}\|_{\mathbf{curl},F}^{2}\qquad\forall T\in\mathcal{T}_{h},\\&\|\underline{\boldsymbol{v}}_{F}\|_{\mathbf{curl},F}^{2}\coloneqq\|\boldsymbol{v}_{\mathcal{R},F}\|_{L^{2}(F;\mathbb{R}^{2})}^{2}+\|\boldsymbol{v}^{\mathrm{c}}_{\mathcal{R},F}\|_{L^{2}(F;\mathbb{R}^{2})}^{2}+h_{F}\sum_{E\in\mathcal{E}_{F}}\|v_{E}\|_{L^{2}(E)}^{2}\qquad\forall F\in\mathcal{F}_{h},\end{split} \tag{9}\]

and

\[\|\underline{\boldsymbol{w}}_{h}\|_{\mathrm{div},h}^{2}\coloneqq\sum_{T\in\mathcal{T}_{h}}\|\underline{\boldsymbol{w}}_{T}\|_{\mathrm{div},T}^{2}\text{ with }\|\underline{\boldsymbol{w}}_{T}\|_{\mathrm{div},T}^{2}\coloneqq\|\boldsymbol{w}_{\mathcal{G},T}\|_{L^{2}(T;\mathbb{R}^{3})}^{2}+\|\boldsymbol{w}^{\mathrm{c}}_{\mathcal{G},T}\|_{L^{2}(T;\mathbb{R}^{3})}^{2}+h_{T}\sum_{F\in\mathcal{F}_{T}}\|w_{F}\|_{L^{2}(F)}^{2}\qquad\forall T\in\mathcal{T}_{h}. \tag{10}\]

### Main results

**Theorem 1** (Poincare inequality for the gradient).: _For all \(\underline{p}_{h}\in\underline{X}^{k}_{\mathbf{grad},h}\), it holds_

\[\inf_{\underline{r}_{h}\in\operatorname{Ker}\underline{G}^{k}_{h}}\|\underline{p}_{h}-\underline{r}_{h}\|_{\mathbf{grad},h}\lesssim\|\underline{G}^{k}_{h}\underline{p}_{h}\|_{\mathbf{curl},h},\]

_with hidden constant only depending on \(\Omega\), the mesh regularity parameter, and \(k\)._

Proof.: See Section 4.1.

**Theorem 2** (Poincare inequality for the curl).: _For all \(\underline{\boldsymbol{v}}_{h}\in\underline{X}^{k}_{\mathbf{curl},h}\), it holds_

\[\inf_{\underline{\boldsymbol{z}}_{h}\in\operatorname{Ker}\underline{C}^{k}_{h}}\|\underline{\boldsymbol{v}}_{h}-\underline{\boldsymbol{z}}_{h}\|_{\mathbf{curl},h}\lesssim\|\underline{C}^{k}_{h}\underline{\boldsymbol{v}}_{h}\|_{\mathrm{div},h},\]

_with hidden constant only depending on \(\Omega\), the mesh regularity parameter, and \(k\)._

Proof.: See Section 4.2.

**Theorem 3** (Poincare inequality for the divergence).: _For all \(\underline{\boldsymbol{w}}_{h}\in\underline{X}^{k}_{\mathrm{div},h}\), it holds_

\[\inf_{\underline{\boldsymbol{z}}_{h}\in\operatorname{Ker}D^{k}_{h}}\|\underline{\boldsymbol{w}}_{h}-\underline{\boldsymbol{z}}_{h}\|_{\mathrm{div},h}\lesssim\|D^{k}_{h}\underline{\boldsymbol{w}}_{h}\|_{L^{2}(\Omega)},\]

_with hidden constant only depending on \(\Omega\), the mesh regularity parameter, and \(k\)._

Proof.: See Section 4.3.
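Before moving to the mimetic inequalities, note the strictly recursive structure of the component norms above: each geometric entity contributes its own \(L^{2}\) norm plus its diameter times the norms of its boundary entities. The toy snippet below is ours (with placeholder numbers in lieu of actual polynomial components) and simply mirrors this recursion for \(\|\cdot\|_{\mathbf{grad},h}\):

```python
from math import sqrt

# Toy mirror of ||q||_{grad,T}^2 = ||q_T||^2 + h_T * sum_F ||q_F||_{grad,F}^2, etc.
def norm2_edge(edge):
    return edge["l2_sq"] + edge["h"] * sum(v ** 2 for v in edge["vertex_values"])

def norm2_face(face):
    return face["l2_sq"] + face["h"] * sum(norm2_edge(E) for E in face["edges"])

def norm2_element(T):
    return T["l2_sq"] + T["h"] * sum(norm2_face(F) for F in T["faces"])

# Placeholder data: one element, one face, one edge (illustration only).
edge = {"l2_sq": 0.1, "h": 0.5, "vertex_values": [1.0, -1.0]}
face = {"l2_sq": 0.3, "h": 0.5, "edges": [edge]}
T = {"l2_sq": 0.7, "h": 0.5, "faces": [face]}
print("||q||_{grad,T} =", sqrt(norm2_element(T)))
```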
## 3 Mimetic Poincare inequalities

This section contains Poincare inequalities in mimetic spaces that are instrumental in proving the main results stated in the previous section. Their proofs rely on the use of a tetrahedral submesh \(\mathfrak{M}_{h}=\mathfrak{T}_{h}\cup\mathfrak{F}_{h}\cup\mathfrak{E}_{h}\cup\mathfrak{B}_{h}\) in the sense of [18, Definition 1.8], with \(\mathfrak{T}_{h}\) collecting the tetrahedral subelements and \(\mathfrak{F}_{h}\), \(\mathfrak{E}_{h}\), and \(\mathfrak{B}_{h}\) their faces, edges, and vertices, respectively. We assume, for the sake of simplicity, that this submesh can be obtained adding as new vertices only centers of the faces and elements of \(\mathcal{M}_{h}\).
As a result of the assumptions in [18, Definition 1.8], the regularity parameter of the submesh only depends on that of \(\mathcal{M}_{h}\) and, for a given element \(T\in\mathcal{T}_{h}\), the diameters of the submesh entities contained in \(\overline{T}\) are comparable to \(h_{T}\) uniformly in \(h\).

### Mimetic Poincare inequality for collections of vertex values

**Theorem 4** (Mimetic Poincare inequality for collections of vertex values).: _Let \((\alpha_{V})_{V\in\mathcal{V}_{h}}\in\mathbb{R}^{\mathcal{V}_{h}}\) be a collection of values at vertices. Then, there is \(C\in\mathbb{R}\) such that_

\[\sum_{T\in\mathcal{T}_{h}}h_{T}^{3}\sum_{V\in\mathcal{V}_{T}}(\alpha_{V}-C)^{2}\lesssim\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{E\in\mathcal{E}_{T}}|\llbracket\alpha_{V}\rrbracket_{E}|^{2}, \tag{11}\]

_with hidden constant only depending on \(\Omega\) and the mesh regularity parameter._

Proof.: We extend the collection \((\alpha_{V})_{V\in\mathcal{V}_{h}}\) to \(\mathfrak{B}_{h}\) setting the values at face/element centers equal to the value taken at an arbitrary vertex of the face/element in question. For any simplex \(S\in\mathfrak{T}_{h}\) and any vertex \(V\in\mathfrak{B}_{S}\) (with \(\mathfrak{B}_{S}\) collecting the vertices of \(S\)), let \(\phi_{S,V}\) denote the restriction to \(S\) of the piecewise affine "hat" function associated with \(V\) given by (77), and let \(\phi_{h}\in H^{1}(\Omega)\) be the piecewise polynomial function defined by setting

\[(\phi_{h})_{|S}\coloneqq\sum_{V\in\mathfrak{B}_{S}}(\alpha_{V}-C)\phi_{S,V}\qquad\forall S\in\mathfrak{T}_{h}, \tag{12}\]

where \(C\in\mathbb{R}\) is chosen so that the zero-average condition \(\int_{\Omega}\phi_{h}=0\) is satisfied. We next prove the following norm equivalences:

\[\|\phi_{h}\|_{L^{2}(\Omega)}^{2}\simeq\sum_{T\in\mathcal{T}_{h}}h_{T}^{3}\sum_{V\in\mathcal{V}_{T}}(\alpha_{V}-C)^{2}, \tag{13}\]

\[\int_{\Omega}\|\boldsymbol{\mathrm{grad}}\,\phi_{h}\|^{2}\simeq\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{E\in\mathcal{E}_{T}}|\llbracket\alpha_{V}\rrbracket_{E}|^{2}. \tag{14}\]

Since \(\phi_{h}\) has zero average, the estimate (11) then follows combining (13) and (14) with the continuous Poincare--Wirtinger inequality on \(\Omega\).

(i) _Proof of (13)._ We split the integral over \(\Omega\) element by element, and then over the subelements
(with \(\mathfrak{T}_{T}\) collecting the subelements contained in \(T\)). It holds

\[\begin{split}\|\phi_{h}\|_{L^{2}(\Omega)}^{2}&\stackrel{(12)}{=}\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}}\int_{S}\Big(\sum_{V\in\mathfrak{B}_{S}}(\alpha_{V}-C)\phi_{S,V}\Big)^{2}\\&\simeq\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}}\sum_{V\in\mathfrak{B}_{S}}(\alpha_{V}-C)^{2}\|\phi_{S,V}\|_{L^{2}(S)}^{2}\\&\stackrel{(82)}{\simeq}\sum_{T\in\mathcal{T}_{h}}h_{T}^{3}\sum_{S\in\mathfrak{T}_{T}}\sum_{V\in\mathfrak{B}_{S}}(\alpha_{V}-C)^{2}\\&\simeq\sum_{T\in\mathcal{T}_{h}}h_{T}^{3}\sum_{V\in\mathcal{V}_{T}}(\alpha_{V}-C)^{2},\end{split} \tag{15}\]

where the second equivalence follows from the fact that \(\operatorname{card}(\mathfrak{B}_{S})\lesssim 1\), in the third one we have additionally used \(h_{S}\simeq h_{T}\) for all \(T\in\mathcal{T}_{h}\) and all \(S\in\mathfrak{T}_{T}\), and the last one is justified by the choice we made at the beginning for \(\alpha_{V}\), \(V\in\mathfrak{B}_{h}\setminus\mathcal{V}_{h}\). This readily gives (13).

(ii) _Proof of (14)._ The key argument to obtain (14) lies in the de Rham theorem: Let \((\boldsymbol{\psi}_{S,E})_{E\in\mathfrak{E}_{S}}\) (with \(\mathfrak{E}_{S}\) collecting the edges of \(S\)) be the basis for the edge Nedelec space given by (78). Then, summing (86), we have

\[\boldsymbol{\mathrm{grad}}\Big(\sum_{V\in\mathfrak{B}_{S}}\alpha_{V}\phi_{S,V}\Big)=\sum_{E\in\mathfrak{E}_{S}}\llbracket\alpha_{V}\rrbracket_{E}\,\boldsymbol{\psi}_{S,E}. \tag{16}\]

Starting from (16) and proceeding in a similar way as in (15), with (83) replacing (82), we have

\[\int_{\Omega}\|\boldsymbol{\mathrm{grad}}\,\phi_{h}\|^{2}\simeq\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{S\in\mathfrak{T}_{T}}\sum_{E\in\mathfrak{E}_{S}}|\llbracket\alpha_{V}\rrbracket_{E}|^{2}, \tag{17}\]

with, for any \(\boldsymbol{v}\in\mathbb{R}^{3}\), \(\|\boldsymbol{v}\|\) denoting the Euclidean norm of \(\boldsymbol{v}\). Now, for any edge \(E\in\mathfrak{E}_{S}\) of any simplex \(S\in\mathfrak{T}_{T}\), either \(E\in\mathcal{E}_{T}\) or, by the choice made at the beginning of this proof for \(\alpha_{V}\) with \(V\) face or element center, \(\llbracket\alpha_{V}\rrbracket_{E}\) can be computed as the sum of jumps along the boundary of \(T\) (i.e., \(\llbracket\alpha_{V}\rrbracket_{E}=\sum_{E^{\prime}\in\mathcal{E}_{T}}\omega_{E^{\prime}E}\llbracket\alpha_{V}\rrbracket_{E^{\prime}}\) with \(\omega_{E^{\prime}E}\in\{-1,0,1\}\)).
Therefore, \[\sum_{S\in\mathfrak{T}_{T}}\sum_{E\in\mathfrak{E}_{S}}|\llbracket\alpha_{V} \rrbracket_{E}|^{2}\simeq\sum_{E\in\mathcal{E}_{T}}|\llbracket\alpha_{V} \rrbracket_{E}|^{2},\] and we infer (14) from (17). ### Mimetic Poincare inequality for collections of edge values If the topology of the domain is non-trivial, a suitable condition for each void must be satisfied in order to establish a mimetic Poincare inequality for collections of edge values. Denote by \(b_{2}\) the second Betti number, i.e., the number of voids encapsulated by \(\Omega\). Let \((\mathcal{F}_{\gamma_{i}})_{1\leq i\leq b_{2}}\) denote collections of boundary faces such that \(\gamma_{i}\coloneqq\bigcup_{F\in\mathcal{F}_{\gamma_{i}}}\overline{F}\) is the boundary if the \(i\)th void. We start by proving a necessary and sufficient condition under which a function in the lowest-order Raviart-Thomas-Nedelec face space on the tetrahedral submesh is the curl of a function in the edge Nedelec space on the same mesh. **Lemma 5** (Condition on the cohomology).: _Denote by \(\mathcal{RT}^{1}(\mathfrak{T}_{h})\) and \(\mathcal{N}^{1}(\mathfrak{T}_{h})\) lowest-order face and edge finite element spaces on the submesh. Then, for all \(\boldsymbol{\phi}_{h}\in\mathcal{RT}^{1}(\mathfrak{T}_{h})\), there exists \(\boldsymbol{\chi}_{h}\in\mathcal{N}^{1}(\mathfrak{T}_{h})\) such that \(\boldsymbol{\phi}_{h}=\mathbf{curl}\,\boldsymbol{\chi}_{h}\) if and only if_ \[\operatorname{div}\boldsymbol{\phi}_{h}=0\text{ and }\sum_{F\in\mathcal{F}_{ \gamma_{i}}}\int_{F}\boldsymbol{\phi}_{h}\cdot\boldsymbol{n}_{\Omega}=0\text{ for all integer }i\text{ such that }1\leq i\leq b_{2}, \tag{18}\] _with \(\boldsymbol{n}_{\Omega}\) denoting the unit normal vector to the boundary of \(\Omega\) pointing out of \(\Omega\)._ Proof.: To check that (18) is necessary, notice that the first condition comes from the identity \(\operatorname{div}\,\operatorname{\mathbf{curl}}=0\), while the second one follows from Green's theorem along with the fact that the flux of the curl of any function across a closed boundary is zero. We next prove that condition (18) is sufficient using a counting argument. From the de Rham Theorem, we know that that the dimension of the space of harmonic forms is precisely the number of voids \(b_{2}\). For all integer \(i\) such that \(1\leq i\leq b_{2}\), we define the linear form \(L_{i}\) such that, for any vector-valued function \(\boldsymbol{\phi}\) smooth enough, \[L_{i}(\boldsymbol{\phi})\coloneqq-\sum_{F\in\mathcal{F}_{Y_{i}}}\int_{F} \boldsymbol{\phi}\cdot\boldsymbol{n}_{\Omega}.\] For any \(1\leq i\leq b_{2}\), let \(\boldsymbol{x}_{i}\in\mathbb{R}^{3}\) be any point inside the \(i\)th void and consider the function \(\boldsymbol{\phi}_{i}:\mathbb{R}^{3}\ni\boldsymbol{x}\mapsto\frac{ \boldsymbol{x}-\boldsymbol{x}_{i}}{\|\boldsymbol{x}-\boldsymbol{x}_{i}\|^{3}} \in\mathbb{R}^{3}\). Noticing that \(\operatorname{div}\,\boldsymbol{\phi}_{i}=0\) and applying the divergence theorem inside the \(j\)th void, we infer that \(L_{j}(\boldsymbol{\phi}_{i})=0\) if \(j\neq i\). Let us now show that \(L_{i}(\boldsymbol{\phi}_{i})>0\). Let \(r>0\) be the distance between \(\boldsymbol{x}_{i}\) and \(\Omega\). Denoting by \(\mathcal{S}_{i}\) the sphere of radius \(\frac{r}{2}\) centred in \(\boldsymbol{x}_{i}\), we have that \(\int_{\mathcal{S}_{i}}\boldsymbol{\phi}_{i}\cdot\frac{(\boldsymbol{x}- \boldsymbol{x}_{i})}{\|\boldsymbol{x}-\boldsymbol{x}_{i}\|}=\int_{\mathcal{S}_ {i}}4r^{-2}=4\pi>0\). 
Applying once again the divergence theorem to \(\boldsymbol{\phi}_{i}\) on the volume \(\mathcal{V}_{i}\) enclosed between \(\mathcal{S}_{i}\) and \(\gamma_{i}\), we have that \(0=\int_{\mathcal{V}_{i}}\operatorname{div}\,\boldsymbol{\phi}_{i}=L_{i}( \boldsymbol{\phi}_{i})-\int_{\mathcal{S}_{i}}\boldsymbol{\phi}_{i}\cdot\frac{( \boldsymbol{x}-\boldsymbol{x}_{i})}{\|\boldsymbol{x}-\boldsymbol{x}_{i}\|}\). Therefore, \[L_{i}(\boldsymbol{\phi}_{i})=\int_{\mathcal{S}_{i}}\boldsymbol{\phi}_{i}\cdot \frac{(\boldsymbol{x}-\boldsymbol{x}_{i})}{\|\boldsymbol{x}-\boldsymbol{x}_{i} \|}>0. \tag{19}\] Denoting by \(\boldsymbol{\pi}_{\mathcal{R}\mathcal{T},h}^{1}\) the canonical interpolator onto \(\mathcal{R}\mathcal{T}^{1}(\mathfrak{T}_{h})\), we know that, for any function \(\boldsymbol{\phi}\), \(\operatorname{div}(\boldsymbol{\pi}_{\mathcal{R}\mathcal{T},h}^{1}\boldsymbol {\phi})=\boldsymbol{\pi}_{\mathcal{P},0}^{1}(\operatorname{div}\, \boldsymbol{\phi})\) and \(L_{i}(\boldsymbol{\pi}_{\mathcal{R}\mathcal{T},h}^{1}\boldsymbol{\phi})=L_{i} (\boldsymbol{\phi})\) by definition of the interpolator. Therefore, for any integer \(i\) such that \(1\leq i\leq b_{2}\), \(\boldsymbol{\pi}_{\mathcal{R}\mathcal{T},h}^{1}\boldsymbol{\phi}_{i}\) is a discrete harmonic form and, by a counting argument, the linearly independent family \((\boldsymbol{\pi}_{\mathcal{R}\mathcal{T},h}^{1}\boldsymbol{\phi}_{i})_{1 \leq i\leq b_{2}}\) spans the space of discrete harmonic forms. Let now \(\boldsymbol{\phi}_{h}\) be such that \(\operatorname{div}\,\boldsymbol{\phi}_{h}=0\). Then, \(\boldsymbol{\phi}_{h}=\operatorname{\mathbf{curl}}\,\boldsymbol{\chi}_{h}+ \sum_{i=1}^{b_{2}}\lambda_{i}\,\boldsymbol{\pi}_{\mathcal{R}\mathcal{T},h}^{1 }\boldsymbol{\phi}_{i}\) for some \(\boldsymbol{\chi}_{h}\in\mathcal{N}^{1}(\mathfrak{T}_{h})\) and \((\lambda_{i})_{1\leq i\leq b_{2}}\in\mathbb{R}^{b_{2}}\). We prove that the condition \(L_{i}(\boldsymbol{\phi})=0\) for all \(1\leq i\leq b_{2}\) is sufficient to ensure that \(\boldsymbol{\phi}_{h}\) is in the range of \(\operatorname{\mathbf{curl}}\) by contradiction. As a matter of fact, if this were not the case, then there would be \(i_{0}\) such that \(\lambda_{i_{0}}\neq 0\). However, by (19), this would also imply \(L_{i_{0}}(\boldsymbol{\phi}_{h})=\lambda_{i_{0}}L_{i_{0}}(\boldsymbol{\pi}_{ \mathcal{R}\mathcal{T},h}^{1}\boldsymbol{\phi}_{i0})\neq 0\), which is the sought contradiction. **Theorem 6** (Mimetic Poincare inequality for collections of edge values).: _Let \((\alpha_{F})_{F\in\mathcal{T}_{h}}\in\mathbb{R}^{\mathcal{T}_{h}}\) be a collection of values at faces satisfying_ \[\sum_{F\in\mathcal{T}_{H}}\omega_{TF}\alpha_{F}=0\text{ for all }T\in \mathcal{T}_{h}\text{ and } \tag{20}\] \[\sum_{F\in\mathcal{T}_{Y_{i}}}\omega_{\Omega F}\alpha_{F}=0\text{ for all integer }i\text{ such that }1\leq i\leq b_{2}\text{, }\] _where \(\omega_{\Omega F}\in\{-1,1\}\) is such that \(\omega_{\Omega F}\boldsymbol{n}_{F}\) points outside the domain \(\Omega\). 
Then, there is a collection \((\alpha_{E})_{E\in\mathcal{E}_{h}}\in\mathbb{R}^{\mathcal{E}_{h}}\) of values at edges such that, for all \(F\in\mathcal{F}_{h}\),_ \[\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\alpha_{E}=\alpha_{F}\text{ and }\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{E\in\mathcal{E}_{T}}\alpha_{E}^{2}\lesssim \sum_{T\in\mathcal{T}_{h}}h_{T}^{-1}\sum_{F\in\mathcal{T}_{T}}\alpha_{F}^{2} \tag{21}\] _with hidden constant only depending on \(\Omega\) and the mesh regularity parameter._ Proof.: Let \((\boldsymbol{\phi}_{E,S})_{S\in\mathfrak{T}_{h},E\in\mathfrak{S}_{S}}\) and \((\boldsymbol{\psi}_{F,S})_{S\in\mathfrak{T}_{h},F\in\mathfrak{S}_{S}}\) (with \(\mathfrak{S}_{S}\) denoting the set of triangular faces of \(S\)) be the families of basis functions respectively given by (78) and (79) below. The main difficulty is to extend the family \((\alpha_{F})_{F\in\mathcal{T}_{h}}\) to a family \((\alpha_{F})_{F\in\mathfrak{S}_{h}}\) satisfying, for all \(S\in\mathfrak{T}_{h}\), \[\sum_{F\in\mathfrak{S}_{S}}\omega_{SF}\alpha_{F}=0.\] We perform the construction locally on each element \(T\in\mathcal{T}_{h}\). Let \(g\in L^{2}(T)\) be the piecewise constant function on \(\mathfrak{T}_{T}\) such that \[g_{|S}\coloneqq-\frac{1}{|S|}\sum_{F\in\overline{\mathfrak{G}_{S}}\cap\mathcal{ F}_{T}}\omega_{TF}\alpha_{F}\qquad\forall S\in\mathfrak{T}_{T}. \tag{22}\] Recalling (20), by definition we have \(\int_{T}g=0\). Hence, using Lion's Lemma [1, Theorem 3.1.e], we infer the existence of \(\boldsymbol{u}\in\boldsymbol{H}^{1}_{0}(T;\mathbb{R}^{3})\) such that \[\operatorname{div}\boldsymbol{u}=g\text{ and }|\boldsymbol{u}|_{ \boldsymbol{H}^{1}(T;\mathbb{R}^{3})}\lesssim\|g\|_{L^{2}(T)}\lesssim h_{T}^{- \frac{1}{2}}\sum_{F\in\mathcal{F}_{T}}|\alpha_{F}|, \tag{23}\] where the last inequality follows from the definition of \(g\) after observing that \(|S|\simeq h_{T}^{3}\) for all \(S\in\mathfrak{T}_{T}\) by mesh regularity. Setting \(\alpha_{F}\coloneqq\int_{F}\boldsymbol{u}\cdot\boldsymbol{n}_{F}\) (with \(\boldsymbol{n}_{F}\) denoting the unit normal vector to \(F\), with orientation consistent with that of \(F\)), for all \(S\in\mathfrak{T}_{T}\) and all \(F\in\mathfrak{G}_{S}\setminus\mathcal{F}_{T}\). We infer that, for all \(S\subset\mathfrak{T}_{h}\), \(\sum_{F\in\mathfrak{G}_{S}}\omega_{SF}\alpha_{F}=0\), noticing that \[\sum_{F\in\mathfrak{G}_{S}}\omega_{SF}\alpha_{F} =\sum_{F\in\mathfrak{G}_{S}\setminus\mathcal{F}_{T}}\omega_{SF} \alpha_{F}+\sum_{F\in\mathfrak{G}_{S}\cap\mathcal{F}_{T}}\omega_{SF}\alpha_{F}\] \[=\sum_{F\in\mathfrak{G}_{S}}\omega_{SF}\int_{F}\boldsymbol{u} \cdot\boldsymbol{n}_{F}+\sum_{F\in\mathfrak{G}_{S}\cap\mathcal{F}_{T}}\omega_ {SF}\alpha_{F}\] \[=\int_{S}\operatorname{div}\boldsymbol{u}+\sum_{F\in\mathfrak{G}_ {S}\cap\mathcal{F}_{T}}\omega_{SF}\alpha_{F}\] \[\stackrel{{\eqref{eq:20}}}{{=}}-\sum_{F\in\mathfrak{ G}_{S}\cap\mathcal{F}_{T}}\omega_{SF}\alpha_{F}+\sum_{F\in\mathfrak{G}_{S} \cap\mathcal{F}_{T}}\omega_{SF}\alpha_{F}=0,\] where we have used the fact that \(\boldsymbol{u}\cdot\boldsymbol{n}_{F}=0\) on every face \(F\in\mathcal{F}_{T}\) lying on the boundary of \(T\) to obtain the second equality. Let \(\overline{\boldsymbol{u}}_{T}\coloneqq\frac{1}{|T|}\int_{T}\boldsymbol{u}\) and, for all integer \(i\) such that \(1\leq i\leq 3\), denote by \(u_{i}\) and \(\overline{u}_{i}\) the \(i\)th components of \(\boldsymbol{u}\) and \(\overline{\boldsymbol{u}}_{T}\), respectively. 
It holds \[|T|\,\overline{\boldsymbol{u}}_{i}=\int_{T}u_{i}=\int_{T} \boldsymbol{u}\cdot\boldsymbol{e}_{i} =\int_{T}\boldsymbol{u}\cdot\boldsymbol{\mathrm{grad}}(x_{i}- \boldsymbol{x}_{T}\cdot\boldsymbol{e}_{i})\] \[=-\int_{T}\operatorname{div}\boldsymbol{u}\ (x_{i}- \boldsymbol{x}_{T}\cdot\boldsymbol{e}_{i})=-\int_{T}g\ (x_{i}- \boldsymbol{x}_{T}\cdot\boldsymbol{e}_{i})\lesssim h_{T}\sum_{F\in\mathcal{F}_{ T}}|\alpha_{F}|,\] so that, recalling that \(|T|\simeq h_{T}^{3}\), \[\|\overline{\boldsymbol{u}}_{T}\|_{\boldsymbol{L}^{2}(T;\mathbb{R}^{3})} \lesssim h_{T}^{-\frac{1}{2}}\sum_{F\in\mathcal{F}_{T}}|\alpha_{F}|. \tag{24}\] Therefore, we can use the Poincare inequality on the domain \(T\) to write \[\|\boldsymbol{u}\|_{\boldsymbol{L}^{2}(T;\mathbb{R}^{3})} \leq\|\boldsymbol{u}-\overline{\boldsymbol{u}}_{T}\|_{\boldsymbol {L}^{2}(T;\mathbb{R}^{3})}+\|\overline{\boldsymbol{u}}_{T}\|_{\boldsymbol{L}^ {2}(T;\mathbb{R}^{3})}\] \[\lesssim h_{T}|\boldsymbol{u}|_{\boldsymbol{H}^{1}(T;\mathbb{R}^ {3})}+\|\overline{\boldsymbol{u}}_{T}\|_{\boldsymbol{L}^{2}(T;\mathbb{R}^{3})} \stackrel{{\eqref{eq:20}}}{{\lesssim}}h_{T}^{-\frac{1}{2}}\sum_{F \in\mathcal{F}_{T}}|\alpha_{F}|.\] Combining this result with the continuous trace inequality we have, for all \(S\in\mathfrak{T}_{T}\) and all \(F\in\mathfrak{G}_{S}\setminus\mathcal{F}_{T}\), \[|\alpha_{F}|\lesssim h_{F}\|\boldsymbol{u}\|_{\boldsymbol{L}^{2}(F;\mathbb{R}^ {3})}\lesssim h_{T}^{\frac{1}{2}}\|\boldsymbol{u}\|_{\boldsymbol{L}^{2}(T; \mathbb{R}^{3})}+h_{T}^{\frac{3}{2}}|\boldsymbol{u}|_{\boldsymbol{H}^{1}(T; \mathbb{R}^{3})}\lesssim\sum_{F\in\mathcal{F}_{T}}|\alpha_{F}|.\] Therefore, summing over all tetrahedra inside \(T\) and all tetrahedral faces, we obtain \[\sum_{S\in\mathfrak{T}_{T}}\sum_{F\in\mathfrak{G}_{S}}\alpha_{F}^{2}\lesssim \sum_{F\in\mathcal{T}_{T}}\alpha_{F}^{2}. \tag{25}\] We next define the following piecewise polynomial function: \[\boldsymbol{\psi}_{h}\coloneqq\sum_{S\in\mathfrak{T}_{h}}\sum_{F\in\mathfrak{ G}_{S}}\alpha_{F}\boldsymbol{\psi}_{F,S}\in\boldsymbol{H}(\operatorname{div} ;\Omega). \tag{26}\] Since \(\operatorname{div}\boldsymbol{\psi}_{h}=0\) and, for all \(1\leq i\leq b_{2}\), \(\sum_{F\in\mathcal{T}_{T_{i}}}\omega_{\Omega F}\int_{F}\boldsymbol{\psi}_{h} \cdot\boldsymbol{n}_{F}=0\), we can use Lemma 5 to infer from the uniform Poincare inequality on the simplicial de Rham complex [2] the existence of \[\boldsymbol{\phi}_{h}\coloneqq\sum_{S\in\mathfrak{T}_{h}}\sum_{E\in\mathfrak{ G}_{S}}\alpha_{E}\boldsymbol{\phi}_{E,S}\in\boldsymbol{H}(\operatorname{curl} ;\Omega)\] such that \[\operatorname{\mathbf{curl}}\boldsymbol{\phi}_{h}=\boldsymbol{\psi}_{h}\text{ and }\|\boldsymbol{\phi}_{h}\|_{\boldsymbol{L}^{2}(\Omega;\mathbb{R}^{3})} \lesssim\|\boldsymbol{\psi}_{h}\|_{\boldsymbol{L}^{2}(\Omega;\mathbb{R}^{3})}. \tag{27}\] Summing (87), we have \[\operatorname{\mathbf{curl}}\boldsymbol{\phi}_{h}=\sum_{S\in\mathfrak{T}_{h}} \sum_{F\in\mathfrak{G}_{S}}\left(\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\alpha _{E}\right)\boldsymbol{\psi}_{F,S}. \tag{28}\] Hence, equating (26) and (28), we infer that, for all \(F\in\mathcal{F}_{h}\subset\mathfrak{G}_{h}\), \(\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\alpha_{E}=\alpha_{F}\). 
Moreover, noticing that both \(\boldsymbol{\phi}_{E,S}\) and \(\boldsymbol{\psi}_{F,S}\) are only supported in \(S\), we have \[\|\boldsymbol{\phi}_{h}\|_{\boldsymbol{L}^{2}(\Omega;\mathbb{R}^{3})}^{2} =\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}}\int_{S} \left(\sum_{E\in\mathfrak{G}_{S}}\alpha_{E}\boldsymbol{\phi}_{E,S}\right)^{2}\] (29) \[\simeq\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}}\sum_ {E\in\mathfrak{G}_{S}}\alpha_{E}^{2}\|\boldsymbol{\phi}_{E,S}\|_{\boldsymbol {L}^{2}(S;\mathbb{R}^{3})}^{2}\] \[\stackrel{{\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:e
q:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq: ### Mimetic Poincare inequality for collections of face values **Theorem 7** (Mimetic Poincare inequality for collections of face values).: _Let \((\alpha_{T})_{T\in\mathcal{T}_{h}}\in\mathbb{R}^{\mathcal{T}_{h}}\) be a collection of values at elements. Then, there is a collection \((\alpha_{F})_{F\in\mathcal{T}_{h}}\in\mathbb{R}^{\mathcal{T}_{h}}\) of values at faces such that, for all \(T\in\mathcal{T}_{h}\),_ \[\sum_{F\in\mathcal{T}_{T}}\omega_{TF}\alpha_{F}=\alpha_{T}\text{ and }\sum_{T\in\mathcal{T}_{h}}h_{T}^{-1}\sum_{F\in\mathcal{T}_{T}}\alpha_{F}^{2} \lesssim\sum_{T\in\mathcal{T}_{h}}h_{T}^{-3}\alpha_{T}^{2} \tag{31}\] _with hidden constant only depending on \(\Omega\) and the mesh regularity parameter._ Proof.: Let \((\boldsymbol{\phi}_{F})_{F\in\mathfrak{T}_{h}}\) and \((\psi_{S})_{S\in\mathfrak{T}_{h}}\) be the basis functions of the face Raviart-Thomas-Nedelec and of the fully discontinuous piecewise affine spaces on the tetrahedral submesh, respectively given by (79) and (80) below. Define the following piecewise polynomial function: \[\psi_{h}\coloneqq\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}}\alpha_ {T}\frac{|S|}{|T|}\psi_{S}. \tag{32}\] We infer from the uniform Poincare inequality on the simplicial de Rham complex [2] the existence of \(\boldsymbol{\phi}_{h}\coloneqq\sum_{F\in\mathfrak{T}_{h}}\alpha_{F} \boldsymbol{\phi}_{F}\in\mathcal{RT}^{1}(\mathfrak{T}_{h})\subset\boldsymbol{ H}(\operatorname{div};\Omega)\) such that \[\operatorname{div}\boldsymbol{\phi}_{h}=\psi_{h}\text{ and }\|\boldsymbol{\phi}_{h}\|_{L^{2}( \Omega;\mathbb{R}^{3})}\lesssim\|\psi_{h}\|_{L^{2}(\Omega)}. \tag{33}\] Summing (88), on the other hand, we obtain \[\operatorname{div}\boldsymbol{\phi}_{h}=\sum_{T\in\mathcal{T}_{h}}\sum_{S\in \mathfrak{T}_{T}}\sum_{F\in\mathfrak{T}_{S}}\omega_{SF}\alpha_{F}\psi_{S}, \tag{34}\] where \(\omega_{SF}\) is the orientation of \(F\) relative to \(S\). Since each \(\psi_{S}\) is supported in \(S\), we infer equating (32) and (34) that, for all \(T\in\mathcal{T}_{h}\) and all \(S\in\mathfrak{T}_{T}\), \[\alpha_{T}\frac{|S|}{|T|}=\sum_{F\in\mathfrak{T}_{S}}\omega_{SF}\alpha_{F}.\] For all \(T\in\mathcal{T}_{h}\), summing this relation over \(S\in\mathfrak{T}_{T}\), we have \[\sum_{S\in\mathfrak{T}_{T}}\alpha_{T}\frac{|S|}{|T|}=\sum_{S\in\mathfrak{T}_{ T}}\sum_{F\in\mathfrak{T}_{S}}\omega_{SF}\alpha_{F}\implies\alpha_{T}\frac{ \sum_{S\in\mathfrak{T}_{T}}|\mathfrak{ST}^{1}}{|T|}=\sum_{F\in\mathcal{T}_{T}} \omega_{TF}\alpha_{F},\] where we have used the fact that the contributions from the simplicial faces internal to \(T\) cancel out in the right-hand side. It remains to check that (31) holds. 
Noticing that the only \(\boldsymbol{\phi}_{F}\) supported in \(S\) are those associated with its faces \(F\) collected in the set \(\mathfrak{T}_{S}\), we have \[\begin{split}\|\boldsymbol{\phi}_{h}\|_{L^{2}(\Omega;\mathbb{R}^{3 })}^{2}&=\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}} \int_{S}\Bigg{(}\sum_{F\in\mathfrak{T}_{S}}\alpha_{F}\boldsymbol{\phi}_{F} \Bigg{)}^{2}\\ &\simeq\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathfrak{T}_{T}}\sum_{ F\in\mathfrak{T}_{S}}\alpha_{F}^{2}\|\boldsymbol{\phi}_{F}\|_{L^{2}(S;\mathbb{R}^{3})} ^{2}\\ &\overset{(\ref{eq:22})}{\simeq}\sum_{T\in\mathcal{T}_{h}}h_{T}^{- 1}\sum_{S\in\mathfrak{T}_{T}}\sum_{F\in\mathfrak{T}_{S}}\alpha_{F}^{2}\\ &\gtrsim\sum_{T\in\mathcal{T}_{h}}h_{T}^{-1}\sum_{F\in\mathcal{T}_ {T}}\alpha_{F}^{2},\end{split} \tag{35}\] where we have used \(h_{S}^{-1}\simeq h_{T}^{-1}\) (consequence of mesh regularity) in the third line. Likewise, we have \[\|\psi_{h}\|_{L^{2}(\Omega)}^{2}=\sum_{T\in\mathcal{T}_{h}}\sum_{S\in\mathcal{I} _{T}}\alpha_{T}^{2}\frac{|S|^{2}}{|T|^{2}}\|\psi_{S}\|_{L^{2}(S)}^{2}\overset{ \eqref{eq:2.1}}{\simeq}\sum_{T\in\mathcal{T}_{h}}h_{T}^{-3}\alpha_{T}^{2}, \tag{36}\] where we have used \(h_{S}\simeq h_{T}\) and \(|S|\simeq|T|\). Finally, to prove (31), we write \[\sum_{T\in\mathcal{T}_{h}}h_{T}^{-1}\sum_{F\in\mathcal{T}_{T}}\alpha_{F}^{2} \overset{\eqref{eq:2.1}}{\lesssim}\|\mathbf{\phi}_{h}\|_{L^{2}(\Omega,\mathbb{R}^ {3})}^{2}\overset{\eqref{eq:2.1}}{\lesssim}\|\psi_{h}\|_{L^{2}(\Omega)}^{2} \overset{\eqref{eq:2.1}}{\simeq}\sum_{T\in\mathcal{T}_{h}}h_{T}^{-3}\alpha_{T }^{2}.\qed\] ## 4 Proofs of Poincare inequalities in DDR spaces ### Poincare inequality for the gradient We start with the following preliminary lemma. **Lemma 8** (Continuous inverse of the discrete gradient).: _For all \(\underline{p}_{h}\in\underline{X}_{\mathbf{grad},h}^{k}\), there is \(\underline{q}_{h}\in\underline{X}_{\mathbf{grad},h}^{k}\) such that_ \[\underline{\mathbf{G}}_{h}^{k}\underline{q}_{h}=\underline{\mathbf{G}}_{h}^{k} \underline{p}_{h}\text{ and }\|\underline{q}_{h}\|\|_{\mathbf{grad},h}\lesssim\| \underline{\mathbf{G}}_{h}^{k}\underline{p}_{h}\|_{\mathbf{curl},h}. \tag{37}\] Proof.: We provide an explicit definition of \(\underline{q}_{h}\) and check that (37) holds. Specifically, we let \(\underline{q}_{h}\in\underline{X}_{\mathbf{grad},h}^{k}\) be such that, for all \(E\in\mathcal{E}_{h}\), \[\llbracket q_{V}\rrbracket_{E}=\int_{E}G_{E}^{k}\underline{p}_{E} \qquad\forall E\in\mathcal{E}_{h}, \tag{38}\] \[\int_{E}q_{E}\,r^{\prime}=-\int_{E}G_{E}^{k}\underline{p}_{E} \,r+\llbracket q_{V}\,r\rrbracket_{E}\qquad\forall r\in\mathcal{P}_{0}^{k}(E), \tag{39}\] for all \(F\in\mathcal{F}_{h}\), \[\int_{F}q_{F}\,\operatorname{div}_{F}\mathbf{v}=-\int_{F}\mathbf{G}_{F}^{k}\underline {p}_{F}\cdot\mathbf{v}+\sum_{E\in\mathcal{E}_{F}}\omega_{FE}\int_{E}q_{E}\,(\mathbf{v }\cdot\mathbf{n}_{FE})\qquad\forall\mathbf{v}\in\mathcal{R}^{c,k}(F), \tag{40}\] and, for all \(T\in\mathcal{T}_{h}\), \[\int_{T}q_{T}\,\operatorname{div}\mathbf{v}+\sum_{F\in\mathcal{F}_{T}}\omega_{TF} \int_{F}q_{F}\,(\mathbf{v}\cdot\mathbf{n}_{F})\qquad\forall\mathbf{v}\in\mathcal{R}^{c,k} (T). \tag{41}\] Notice that the vertex values \((q_{V})_{V\in\mathcal{V}_{h}}\) are only defined up to a global constant and that \(q_{F}\) (resp., \(q_{T}\)) is well-defined by condition (40) (resp., (41)) since \(\operatorname{div}_{F}:\mathcal{R}^{c,k}(F)\to\mathcal{P}^{k-1}(F)\) (resp., \(\operatorname{div}:\mathcal{R}^{c,k}(T)\to\mathcal{P}^{k-1}(T)\)) is an isomorphism. 1. 
_Equality of the discrete gradient._ Let us first briefly show that \[\underline{\mathbf{G}}_{h}^{k}\underline{q}_{h}=\underline{\mathbf{G}}_{h}^{k} \underline{p}_{h}. \tag{42}\] By (38) and (39) along with the definition (1) of \(G_{E}^{k}\), it holds \[G_{E}^{k}\underline{q}_{E}=G_{E}^{k}\underline{p}_{E}\qquad\forall E\in \mathcal{E}_{h}. \tag{43}\] Let now \(F\in\mathcal{F}_{h}\). By [7, Lemma 14] for \((k,d)=(0,2)\), \(\int_{F}\mathbf{G}_{F}^{k}\underline{p}_{F}\cdot\mathbf{rot}_{F}\,r=-\sum_{E\in \mathcal{E}_{F}}\omega_{FE}\int_{E}G_{E}^{k}\underline{p}_{E}\,r\) for all \(r\in\mathcal{P}_{0}^{k+1}(F)\), so that, by (43), \(\mathbf{\pi}_{\mathcal{R},F}^{k}\mathbf{G}_{F}^{k}\underline{q}_{F}=\mathbf{\pi}_{\mathcal{ R},F}^{k}\mathbf{G}_{F}^{k}\underline{p}_{F}\). On the other hand, plugging the definition (40) of \(q_{F}\) into (2), we readily infer that \(\mathbf{\pi}_{\mathcal{R},F}^{c,k}\mathbf{G}_{F}^{k}\underline{q}_{F}=\mathbf{\pi}_{\mathcal{ R},F}^{c,k}\mathbf{G}_{F}^{k}\underline{p}_{F}\). The above relations imply \[\mathbf{G}_{F}^{k}\underline{q}_{F}=\mathbf{G}_{F}^{k}\underline{p}_{F}\qquad\forall F \in\mathcal{F}_{h}. \tag{44}\] The equality of the components associated with an element \(T\in\mathcal{T}_{h}\) is proved in a similar way. First, using again [7, Lemma 14], this time with \((k,d)=(0,3)\) (which corresponds to [16, Proposition 1]), we infer that \(\int_{T}\mathbf{G}_{T}^{k}\underline{q}_{T}\cdot\mathbf{\mathrm{curl}}\mathbf{v}=-\sum_{F \in\mathcal{F}_{T}}\int_{F}\mathbf{G}_{F}^{k}\underline{q}_{F}\cdot(\mathbf{v}\times \mathbf{n}_{F})\) for all \(\mathbf{v}\in\mathbf{G}^{c,k+1}(T)\). Accounting for (44), this yields \(\mathbf{\pi}_{\mathcal{R},T}^{k}\mathbf{G}_{T}^{k}\underline{q}_{T}=\mathbf{\pi}_{\mathcal{ R},T}^{k}\mathbf{G}_{T}^{k}\underline{p}_{T}\). Then, plugging the definition (41) of \(q_{T}\) into (3), we get \(\mathbf{\pi}_{\mathcal{R},T}^{c,k}\mathbf{G}_{T}^{k}\underline{q}_{T}=\mathbf{\pi}_{ \mathcal{R},T}^{c,k}\mathbf{G}_{T}^{k}\underline{p}_{T}\). These equalities give \[\mathbf{G}_{T}^{k}\underline{q}_{T}=\mathbf{G}_{T}^{k}\underline{p}_{T}\qquad\forall F \in\mathcal{T}_{h}. \tag{45}\] Gathering (43), (44), and (45), and recalling the definition (7) of the discrete gradient (42) follows. _2. 
Continuity._ Using the fact that, for all \(T\in\mathcal{T}_{h}\), \(h_{Y}\lesssim h_{T}\) for all \(Y\in\mathcal{F}_{T}\cup\mathcal{E}_{T}\) and that the number of faces of each element and of edges of each face is \(\lesssim 1\) by mesh regularity, we have \[\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{F}_{T}}h_{F}\sum_{E\in \mathcal{E}_{F}}h_{E}\sum_{V\in\mathcal{V}_{E}}|q_{V}|^{2}\lesssim\sum_{T\in \mathcal{T}_{h}}h_{T}^{3}\sum_{V\in\mathcal{V}_{T}}|q_{V}|^{2}\overset{\eqref{ eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:
eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq so that, using the fact that \(h_{F}\leq h\) for all \(F\in\mathcal{F}_{h}\) in the first term, \[\begin{split}\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{F}_{ T}}\left\|q_{F}\right\|_{L^{2}(F)}^{2}&\lesssim h^{2}\sum_{T\in\mathcal{T}_{h}}h_{T} \sum_{F\in\mathcal{F}_{T}}\left\|\mathbf{G}_{F}^{k}\underline{p}_{F}\right\|_{L^{2 }(F,\mathbb{R}^{2})}^{2}\\ &+\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{T}_{T}}h_{F} \sum_{E\in\mathcal{E}_{F}}\left\|q_{E}\right\|_{L^{2}(E)}^{2}\stackrel{{ \eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq
:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeqeq:eqeq:eqeqeq:eqeq:eqeq:eqeq:eqeq:eqeqeq:eqeqeq:eqeq:eqeqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeq:eqeqeqeq: For all \(F\in\mathcal{T}_{h}\), the face components \(z_{\mathcal{R},F}\in\mathcal{R}^{k-1}(F)\) and \(z_{\mathcal{R},F}^{\mathrm{c}}\in\mathcal{R}^{\mathrm{c},k}(F)\) are selected such that \[\int_{F}z_{\mathcal{R},F}\cdot\mathbf{rot}_{F}\,r=\int_{F}C_{F}^{k}\underline{ v}_{F}\,r+\sum_{E\in\mathcal{E}_{F}}\int_{E}z_{E}\,r\qquad\forall r\in \mathcal{P}_{0}^{r}(F) \tag{53}\] and \[z_{\mathcal{R},F}^{\mathrm{c}}=\mathbf{0}. \tag{54}\] Similarly, for any \(T\in\mathcal{T}_{h}\), the element components \(z_{\mathcal{R},T}\in\mathcal{R}^{k-1}(T)\) and \(z_{\mathcal{R},T}^{\mathrm{c}}\in\mathcal{R}^{\mathrm{c},k}(T)\) satisfy \[\int_{T}z_{\mathcal{R},T}\cdot\mathbf{curl}\,\boldsymbol{w}=\int_{T}\boldsymbol {C}_{T}^{k}\underline{v}_{T}\cdot\boldsymbol{w}+\sum_{F\in\mathcal{T}_{T}} \omega_{TF}\int_{F}\gamma_{\star,F}^{k}\underline{z}_{F}\cdot(\boldsymbol{w} \times\boldsymbol{n}_{F})\qquad\forall\boldsymbol{w}\in\boldsymbol{G}^{ \mathrm{c},k}(T) \tag{55}\] and \[z_{\mathcal{R},T}^{\mathrm{c}}=\mathbf{0}. \tag{56}\] Notice that (55) defines \(z_{\mathcal{R},T}\) uniquely since \(\mathbf{curl}:\boldsymbol{G}^{\mathrm{c},k}(T)\mapsto\mathcal{R}^{k-1}(T)\) is an isomorphism by [2, Corollary 7.3]. 1. _Equality of the discrete curl._ By definition of \(z_{E}\), it holds \(\pi_{\mathcal{P},F}^{0}C_{F}^{k}\underline{z}_{F}=\pi_{\mathcal{P},F}^{0}C_{F }^{k}\underline{v}_{F}\) for all \(F\in\mathcal{T}_{h}\). The equality of the higher-order components is obtained plugging (53) into the definition (4) of \(C_{F}^{k}\underline{z}_{F}\), which leads to \(\int_{F}C_{F}^{k}\underline{z}_{F}\,r=\int_{F}C_{F}^{k}\underline{v}_{F}\,r\) for all \(r\in\mathcal{P}_{0}^{k}(F)\). By [16, Proposition 4] (which corresponds to [7, Lemma 14] with \((d,k)=(3,1)\)), the equality of the face curls implies \(\boldsymbol{\pi}_{\mathcal{G},T}^{k}(\boldsymbol{C}_{T}^{k}\underline{z}_{T}) =\pi_{\boldsymbol{G},T}^{k}(\boldsymbol{C}_{T}^{k}\underline{v}_{T})\). The equality of the projections on \(\mathcal{R}^{\mathrm{c},k}(T)\) results plugging (55) into the definition (5) of \(\boldsymbol{C}_{T}^{k}\underline{z}_{T}\) with test function \(\boldsymbol{w}\in\boldsymbol{G}^{\mathrm{c},k}(T)\). Gathering the above results, we obtain \(\boldsymbol{C}_{T}^{k}\underline{z}_{T}=\boldsymbol{C}_{T}^{k}\underline{v}_{T}\) for all \(T\in\mathcal{T}_{h}\), from which \(\underline{\boldsymbol{C}}_{h}^{k}\underline{z}_{h}=\boldsymbol{C}_{h}^{k} \underline{v}_{h}\) follows recalling the definition (8) of the discrete curl. 2. _Continuity._ Let us now show the bound in (50). 
Concerning edge components, we write \[\begin{split}\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{T} _{T}}h_{F}\sum_{E\in\mathcal{E}_{F}}\|z_{E}\|_{L^{2}(E)}^{2}& \lesssim\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{E\in\mathcal{E}_{T}} \alpha_{E}^{2}\\ &\lesssim\sum_{T\in\mathcal{T}_{h}}h_{T}^{-1}\sum_{F\in\mathcal{T }_{T}}\left(\int_{F}C_{F}^{k}\underline{v}_{F}\right)^{2}\lesssim\|\underline{ \boldsymbol{C}}_{h}^{k}\underline{v}_{h}\|_{\mathrm{div},h}^{2},\end{split} \tag{57}\] where we have used (52) along with the fact that each edge \(E\in\mathcal{E}_{T}\) is shared by exactly two faces in \(\mathcal{F}_{T}\), the mimetic Poincare inequality (21) together with the definition (51) of \(\alpha_{F}\) in the second passage, and \(\left|\int_{F}C_{F}^{k}\underline{v}_{F}\right|\lesssim|F|^{\frac{1}{2}}\|C_{F }^{k}\underline{v}_{F}\|_{L^{2}(F)}\lesssim h_{F}\|C_{F}^{k}\underline{v}_{F} \|_{L^{2}(F)}\leq h_{T}\|C_{F}^{k}\underline{v}_{F}\|_{L^{2}(F)}\) for all \(F\in\mathcal{F}_{h}\) and all \(T\in\mathcal{T}_{h}\) to which \(F\) belongs together with the definition (10) of \(\|\cdot\|_{\mathrm{div},h}\) to conclude. To estimate the face component, we let \(r\) in (53) be such that \(\mathbf{rot}_{F}\,r=z_{\mathcal{R},F}\) and use Cauchy-Schwarz, inverse, and trace inequalities in the right-hand side to infer \(\|z_{\mathcal{R},F}\|_{\boldsymbol{L}^{2}(F;\mathbb{R}^{2})}\lesssim h_{F}\|C_{F }^{k}\underline{v}_{F}\|_{L^{2}(F)}+h_{F}^{\frac{1}{2}}\sum_{E\in\mathcal{E}_{F} }\|z_{E}\|_{L^{2}(E)}\). Squaring the above relation and using standard inequalities for the square of a finite sum of terms, we obtain, after noticing that \(h_{F}\leq h\) for all \(F\in\mathcal{F}_{h}\), \[\begin{split}\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{F} _{T}}\|z_{\mathcal{R},F}\|_{\boldsymbol{L}^{2}(F;\mathbb{R}^{2})}^{2}& \lesssim h^{2}\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{F} _{T}}\|C_{F}^{k}\underline{v}_{F}\|_{L^{2}(F)}^{2}\\ &+\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{F}_{T}}h_{F}\sum_ {E\in\mathcal{E}_{F}}\|z_{E}\|_{L^{2}(E)}^{2}\lesssim\|\underline{\boldsymbol{C}} _{h}^{k}\underline{v}_{h}\|_{\mathrm{div},h}^{2},\end{split} \tag{58}\] where the conclusion follows noticing that \(h\leq\mathrm{diam}(\Omega)\lesssim 1\), recalling the definition (10) of \(\|\cdot\|\cdot\|_{\mathrm{div},h}\) for the first term, and invoking (57) for the second one. The estimate of the element component is obtained in a similar way, starting from (55) with \(\mathbf{w}\) such that \(\mathbf{\mathrm{curl}}\,\mathbf{w}=\mathbf{z}_{\mathcal{R},T}\), leading to \[\sum_{T\in\mathcal{T}_{h}}\left\|\underline{z}_{\mathcal{R},T}\right\|_{\mathbf{L}^ {2}(T;\mathbb{R}^{3})}^{2}\lesssim\left\|\underline{C}_{h}^{k}\underline{v}_{ h}\right\|_{\mathrm{div},h}^{2}. \tag{59}\] Summing (57), (58), and (59), recalling the definition (9) of \(\left\|\underline{z}_{h}\right\|_{\mathbf{\mathrm{curl}},h}\) as well as (54) and (56), the bound in (50) follows. ### Poincare inequality for the divergence Theorem 3 is established in the same way as Theorem 1 (see the end of Section 4.1) starting from the following result. **Lemma 10** (Continuous inverse of the discrete divergence).: _For any \(\underline{w}_{h}\in\underline{X}_{\mathrm{div},h}^{k}\), there is \(\underline{z}_{h}\in\underline{X}_{\mathrm{div},h}^{k}\) such that_ \[D_{h}^{k}\underline{z}_{h}=D_{h}^{k}\underline{w}_{h}\text{ and }\| \underline{z}_{h}\|_{\mathrm{div},h}\lesssim\|D_{h}^{k}\underline{w}_{h}\|_{ L^{2}(\Omega)}. 
\tag{60}\] Proof.: The face components of \(\underline{z}_{h}\) are obtained applying Theorem 7 with \(\alpha_{T}=\int_{T}D_{T}^{k}\underline{w}_{T}\) for all \(T\in\mathcal{T}_{h}\) and letting \(z_{F}\in\mathcal{P}^{0}(F)\) be such that \[\int_{F}z_{F}=\left|F\right|z_{F}=\alpha_{F}\qquad\forall F\in\mathcal{T}_{h}. \tag{61}\] For all \(T\in\mathcal{T}_{h}\), the element component \(\mathbf{z}_{\mathcal{G},T}\in\mathbf{\mathcal{G}}^{k-1}(T)\) is defined by the following relation: \[\int_{T}\mathbf{z}_{\mathcal{G},T}\cdot\mathbf{\mathrm{grad}}\,q=-\int_{T}D_{T}^{k} \underline{w}_{T}\,q+\sum_{F\in\mathcal{T}_{T}}\omega_{TF}\int_{F}z_{F}\,q \qquad\forall q\in\mathcal{P}_{0}^{k}(T). \tag{62}\] Finally, we set \[\mathbf{z}_{\mathcal{G},T}^{c}=\mathbf{0}\qquad\forall T\in\mathcal{T}_{h}. \tag{63}\] 1. _Equality of the discrete divergence._ To check the first condition in (60), it suffices to show that \(D_{T}^{k}\underline{z}_{T}=\overline{D_{T}^{k}\underline{w}_{T}}\) for a generic \(T\in\mathcal{T}_{h}\). To this end, we start by noticing that \[\int_{T}D_{T}^{k}\underline{w}_{T}=\alpha_{T}=\sum_{F\in\mathcal{T}_{T}}\omega _{TF}\alpha_{F}=\sum_{F\in\mathcal{T}_{T}}\omega_{TF}\int_{F}z_{F}=\int_{T}D_ {T}^{k}\underline{z}_{T},\] showing that \(\pi_{\mathcal{P},T}^{0}(D_{T}^{k}\underline{z}_{T})=\pi_{\mathcal{P},T}^{0}(D _{T}^{k}\underline{w}_{T})\). To show the equivalence of the higher-order components, it suffices to use (62) in (6) written for \(\underline{z}_{T}\) to infer \(\int_{T}D_{T}^{k}\underline{z}_{T}\,q=\int_{T}D_{T}^{k}\underline{w}_{T}\,q\) for all \(q\in\mathcal{P}_{0}^{k}(T)\). 2. _Continuity._ Let us now show the continuity bound in (60). Observing that, by (61), for all \(F\in\mathcal{F}_{h}\) it holds \(\left\|z_{F}\right\|_{L^{2}(F)}^{2}=\left|F\right|z_{F}^{2}=\left|F\right|^{-1 }\alpha_{F}^{2}\lesssim h_{T}^{-2}\alpha_{F}^{2}\) for all \(T\in\mathcal{T}_{h}\) such that \(F\in\mathcal{F}_{T}\) (the last inequality being a consequence of mesh regularity), we have \[\begin{split}\sum_{T\in\mathcal{T}_{h}}h_{T}\sum_{F\in\mathcal{T}_ {T}}\left\|z_{F}\right\|_{L^{2}(F)}^{2}&\lesssim\sum_{T\in \mathcal{T}_{h}}h_{T}^{-1}\sum_{F\in\mathcal{F}_{T}}\alpha_{F}^{2}\overset{(31 )}{\lesssim}\sum_{T\in\mathcal{T}_{h}}h_{T}^{-3}\alpha_{T}^{2}\\ &\lesssim\sum_{T\in\mathcal{T}_{h}}h_{T}^{-3}\left|T\right|\left\| \pi_{\mathcal{P},T}^{0}(D_{T}^{k}\underline{w}_{T})\right\|_{L^{2}(T)}^{2} \lesssim\left\|D_{h}^{k}\underline{w}_{h}\right\|_{L^{2}(\Omega)}^{2},\end{split} \tag{64}\] where, to pass to the second line, we have used the mesh regularity assumption to infer \(h_{T}^{-3}\left|T\right|\lesssim 1\). To estimate the element components, we take \(q\) in (62) such that \(\mathbf{\mathrm{grad}}\,q=\mathbf{z}_{\mathcal{G},T}\), use Cauchy-Schwarz, trace, and inverse inequalities in the right-hand side, pass to the square and use standard inequalities for the square of a finite sum of terms to obtain \[\|\mathbf{z}_{\mathbf{G},T}\|_{L^{2}(T;\mathbb{R}^{3})}^{2}\lesssim h_{T}^{2}\|D_{T}^{k} \underline{\mathbf{w}}_{T}\|_{L^{2}(T)}^{2}+h_{T}\sum_{F\in\mathcal{T}_{T}}\|z_{F} \|_{L^{2}(F)}^{2}.\] Summing the above relation over \(T\in\mathcal{T}_{h}\), using the fact that \(h_{T}\leq\mathrm{diam}(\Omega)\lesssim 1\) for the first term in the right-hand side and (64) for the second, we obtain \[\sum_{T\in\mathcal{T}_{h}}\|\mathbf{z}_{\mathbf{G},T}\|_{L^{2}(T;\mathbb{R}^{3})}^{2} \lesssim\|D_{h}^{k}\underline{\mathbf{w}}_{h}\|_{L^{2}(\Omega)}^{2}. 
\tag{65}\] Summing (64) to (65) and recalling the definition (10) of \(|\!|\!|\!|\!|\!|\!|\!|\!|_{\mathrm{div},h}\) along with (63) yields the inequality in (60). ## 5 Stability analysis of a DDR scheme for the magnetostatics problem We apply the Poincare inequalities stated in Section 2.7 to the stability analysis of a DDR scheme for the magnetostatics problem which generalises the one presented in [17] to domains with non-trivial topology. We introduce the space of discrete harmonic forms \[\underline{\mathbf{\hat{y}}}_{\mathrm{div},h}^{k}\coloneqq\left\{\underline{\mathbf{w} }_{h}\in\underline{\mathbf{X}}_{\mathrm{div},h}^{k}\ :\ D_{h}^{k}\underline{\mathbf{w}}_{h}=0\ \text{and}\ (\underline{\mathbf{w}}_{h},\underline{\mathbf{C}}_{h}^{k} \underline{\mathbf{v}}_{h})_{\mathrm{div},h}=0\ \text{for all}\ \underline{\mathbf{v}}_{h}\ \in \underline{\mathbf{X}}_{\mathbf{cut},h}^{k}\ \right\}.\] For a given source term \(\mathbf{f}\in\mathbf{H}^{1}(\Omega;\mathbb{R}^{3})\), we consider the following DDR approximation of the magnetostatics problem: Find \((\underline{\mathbf{\sigma}}_{h},\underline{\mathbf{u}}_{h},\underline{\mathbf{p}}_{h}) \in\underline{\mathbf{X}}_{\mathbf{cut},h}^{k}\times\underline{\mathbf{X}}_{\mathrm{ div},h}^{k}\times\underline{\mathbf{\hat{y}}}_{\mathrm{div},h}^{k}\) such that, for all \((\underline{\mathbf{\tau}}_{h},\underline{\mathbf{v}}_{h},\underline{\mathbf{q}}_{h})\in \underline{\mathbf{X}}_{\mathbf{cut},h}^{k}\times\underline{\mathbf{X}}_{\mathrm{ div},h}^{k}\times\underline{\mathbf{\hat{y}}}_{\mathrm{div},h}^{k}\), \[A_{h}((\underline{\mathbf{\sigma}}_{h},\underline{\mathbf{u}}_{h},\underline{\mathbf{p}}_ {h}),(\underline{\mathbf{\tau}}_{h},\underline{\mathbf{v}}_{h},\underline{\mathbf{q}}_{h}) )=(\underline{\mathbf{I}}_{\mathrm{div},h}^{k}\mathbf{f},\underline{\mathbf{v}}_{h})_{ \mathrm{div},h},\] where the bilinear form \(A_{h}:\left[\underline{\mathbf{X}}_{\mathbf{cut},h}^{k}\times\underline{\mathbf{X}}_{ \mathrm{div},h}^{k}\times\underline{\mathbf{\hat{y}}}_{\mathrm{div},h}^{k}\right]^ {2}\to\mathbb{R}\) is given by: \[A_{h}((\underline{\mathbf{\sigma}}_{h},\underline{\mathbf{u}}_{h}, \underline{\mathbf{p}}_{h}),(\underline{\mathbf{\tau}}_{h},\underline{\mathbf{v}}_{h}, \underline{\mathbf{q}}_{h}))\coloneqq(\underline{\mathbf{\sigma}}_{h},\underline{\bm {\tau}}_{h})_{\mathbf{cut},h}-(\underline{\mathbf{u}}_{h},\underline{\mathbf{C}}_{h}^ {k}\underline{\mathbf{\tau}}_{h})_{\mathrm{div},h}+(\underline{\mathbf{C}}_{h}^{k} \underline{\mathbf{\sigma}}_{h},\underline{\mathbf{v}}_{h})_{\mathrm{div},h}\] (66) \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad 
\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\ Proof.: Let \((\underline{\sigma}_{h},\underline{u}_{h},\underline{p}_{h})\) be given, and let \(C_{\mathrm{P}}>0\) be the maximum of the hidden constants in the continuity estimates (60) for the divergence and (50) for the curl. Let \(\underline{z}_{h}\in\underline{X}_{\mathrm{div},h}^{k}\) be given by (60), i.e., such that \[D_{h}^{k}\underline{z}_{h}=D_{h}^{k}\underline{u}_{h}\text{ and }\|\underline{z}_{h} \|_{\mathrm{div},h}\leq C_{\mathrm{P}}\|D_{h}^{k}\underline{u}_{h}\|_{L^{2}( \Omega)}. \tag{69}\] Since \(D_{h}^{k}(\underline{u}_{h}-\underline{z}_{h})=0\), there exists \((\underline{\alpha}_{h},\underline{\beta}_{h})\in\underline{X}_{\mathrm{curl}, h}^{k}\times\underline{\beta}_{\mathrm{div},h}^{k}\) such that \(\underline{u}_{h}-\underline{z}_{h}=\underline{C}_{h}^{k}\underline{\alpha}_ {h}+\underline{\beta}_{h}\). Using (50) applied to \(\underline{\alpha}_{h}\), we infer the existence of \[\underline{\alpha}_{h}^{\prime}\text{ such that }\underline{C}_{h}^{k} \underline{\alpha}_{h}^{\prime}=\underline{C}_{h}^{k}\underline{\alpha}_{h} \text{ and }\|\underline{\alpha}_{h}^{\prime}\|_{\mathrm{curl},h}\leq C_{ \mathrm{P}}\|\underline{C}_{h}^{k}\underline{\alpha}_{h}\|_{\mathrm{div},h}. \tag{70}\] Noticing that \(\|\underline{u}_{h}-\underline{z}_{h}\|_{\mathrm{div},h}^{2}=\|\underline{C}_ {h}^{k}\underline{\alpha}_{h}\|_{\mathrm{div},h}^{2}+\|\underline{\beta}_{h} \|_{\mathrm{div},h}^{2}\) by orthogonality, using a triangle inequality we infer that \[\|\underline{C}_{h}^{k}\underline{\alpha}_{h}\|_{\mathrm{div},h}^{2}+\| \underline{\beta}_{h}\|_{\mathrm{div},h}^{2}\lesssim\|\underline{u}_{h}\|_{ \mathrm{div},h}^{2}+\|\underline{z}_{h}\|_{\mathrm{div},h}^{2}\overset{\eqref {eq:2.2}}{\leq}\|\underline{u}_{h}\|_{\mathrm{div},h}^{2}+C_{\mathrm{P}}\|D_{h }^{k}\underline{u}_{h}\|_{L^{2}(\Omega)}^{2}. \tag{71}\] We define \[\underline{\tau}_{h}\coloneqq 2C_{\mathrm{P}}^{2}\underline{\sigma}_{h}- \underline{\alpha}_{h}^{\prime},\quad\underline{v}_{h}\coloneqq\underline{C}_{ h}^{k}\underline{\sigma}_{h}+\underline{p}_{h}+2C_{\mathrm{P}}^{2}\underline{u}_{h}, \quad\underline{q}_{h}\coloneqq\underline{\beta}_{h}-2C_{\mathrm{P}}^{2} \underline{p}_{h}. 
\tag{72}\] The following bound is readily inferred using triangle inequalities: \[\|\underline{\tau}_{h}\|_{\mathrm{curl},h}^{2}+\|\underline{C}_{h }^{k}\underline{\tau}_{h}\|_{\mathrm{div},h}^{2}+\|\underline{v}_{h}\|_{ \mathrm{div},h}^{2}+\|D_{h}^{k}\underline{v}_{h}\|_{L^{2}(\sigma)}^{2}+\| \underline{q}_{h}\|_{\mathrm{div},h}^{2}\] \[\qquad\qquad\lesssim\|\underline{\sigma}_{h}\|_{\mathrm{curl},h}^ {2}+\|\underline{\alpha}_{h}^{\prime}\|_{\mathrm{curl},h}^{2}+\|\underline{C}_ {h}^{k}\underline{\sigma}_{h}\|_{\mathrm{div},h}^{2}+\|\underline{p}_{h}\|_{ \mathrm{div},h}^{2}+\|\underline{u}_{h}\|_{\mathrm{div},h}^{2}+\|\underline{ \beta}_{h}\|_{\mathrm{div},h}^{2} \tag{73}\] \[\overset{\eqref{eq:2.2}}{\lesssim}\eqref{eq:2.2}\|(\underline{ \sigma}_{h},\underline{u}_{h},\underline{p}_{h})\|_{h}^{2}.\] Plugging the test functions (72) into the expression (66) of \(A_{h}\) gives \[A_{h}((\underline{\sigma}_{h},\underline{u}_{h},\underline{p}_{h }),(\underline{\tau}_{h},\underline{v}_{h},\underline{q}_{h}))\] \[\quad=2C_{\mathrm{P}}^{2}\|\underline{\sigma}_{h}\|_{\mathrm{curl},h}^{2}-(\underline{\sigma}_{h},\underline{\alpha}_{h}^{\prime})_{\mathrm{ curl},h}\] \[\quad\quad\quad-\widetilde{2C_{\mathrm{P}}^{2}}(\underline{ \overline{u}_{h},\underline{C}_{h}^{k}\underline{\sigma}_{h})_{\mathrm{div},h}+( \underline{u}_{h},\underline{C}_{h}^{k}\underline{\alpha}_{h})_{\mathrm{div},h} \text{ }+\|\underline{C}_{h}^{2}\underline{C}_{\mathrm{P}}^{2}(\underline{C}_{h}^{k} \underline{\sigma}_{h\Gamma}^{k}\underline{u}_{h})_{\mathrm{div},h}\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\ Plugging (75) and (76) into (74), we have \[A_{h} ((\underline{\boldsymbol{\sigma}}_{h},\underline{\boldsymbol{u}}_{h}, \underline{\boldsymbol{p}}_{h}),(\underline{\boldsymbol{\tau}}_{h},\underline{ \boldsymbol{v}}_{h},\underline{\boldsymbol{q}}_{h}))\] \[\geq\frac{5C_{\mathrm{P}}^{2}}{4}\|\underline{\boldsymbol{ \sigma}}_{h}\|_{\mathbf{curl},h}^{2}+\|\underline{\boldsymbol{C}}_{h}^{k} \underline{\boldsymbol{\sigma}}_{h}\|_{\mathrm{div},h}^{2}+\|\underline{ \boldsymbol{p}}_{h}\|_{\mathrm{div},h}^{2}+\frac{5C_{\mathrm{P}}^{2}}{4}\|D_ {h}^{k}\underline{\boldsymbol{u}}_{h}\|_{L^{2}(\Omega)}^{2}+\frac{2}{3}\| \underline{\boldsymbol{u}}_{h}\|_{\mathrm{div},h}^{2}-\frac{1}{3}\|\underline {C}_{h}^{k}\underline{\boldsymbol{\alpha}}_{h}\|_{\mathrm{div},h}^{2}\] 
\[\geq\frac{5C_{\mathrm{P}}^{2}}{4}\|\underline{\boldsymbol{ \sigma}}_{h}\|_{\mathbf{curl},h}^{2}+\|\underline{\boldsymbol{C}}_{h}^{k} \underline{\boldsymbol{\sigma}}_{h}\|_{\mathrm{div},h}^{2}+\|\underline{ \boldsymbol{p}}_{h}\|_{\mathrm{div},h}^{2}+\frac{11C_{\mathrm{P}}^{2}}{12}\|D _{h}^{k}\underline{\boldsymbol{u}}_{h}\|_{L^{2}(\Omega)}^{2}+\frac{1}{3}\| \underline{\boldsymbol{u}}_{h}\|_{\mathrm{div},h}^{2}\] \[\gtrsim\|\|(\underline{\boldsymbol{\sigma}}_{h},\underline{ \boldsymbol{u}}_{h},\underline{\boldsymbol{p}}_{h})\|_{h}^{2}.\] Denoting by \(\$\) the supremum in (68), we then use the previous bound to write \[\|\|(\underline{\boldsymbol{\sigma}}_{h},\underline{\boldsymbol{u}}_{h}, \underline{\boldsymbol{p}}_{h})\|_{h}^{2}\lesssim A_{h}((\underline{ \boldsymbol{\sigma}}_{h},\underline{\boldsymbol{u}}_{h},\underline{ \boldsymbol{p}}_{h}),(\underline{\boldsymbol{\tau}}_{h},\underline{ \boldsymbol{v}}_{h},\underline{\boldsymbol{q}}_{h}))\leq\$\|(\underline{ \boldsymbol{\tau}}_{h},\underline{\boldsymbol{v}}_{h},\underline{\boldsymbol{q }}_{h})\|_{h}\stackrel{{\eqref{eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:
eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq: with function \(\mathbf{\phi}_{ijk}\) associated to the simplicial face \(F_{ijk}\) with vertices \(V_{i}\), \(V_{j}\), and \(V_{k}\). Finally, the basis of \(\mathcal{P}_{1}^{-}\Lambda^{3}(S)\cong\mathcal{P}^{0}(S)\) is \[\phi_{0123}(\mathbf{x})\coloneqq\frac{6}{\det\left(\mathbf{x}_{V_{1}}-\mathbf{x}_{V_{0}}, \mathbf{x}_{V_{2}}-\mathbf{x}_{V_{0}},\mathbf{x}_{V_{3}}-\mathbf{x}_{V_{0}}\right)} \tag{80}\] **Lemma 12** (Dual basis).: _The following identities hold:_ \[\phi_{i}(\mathbf{x}_{i^{\prime}}) =\delta_{i^{\prime}}^{i} \forall i,i^{\prime}\in\{0,1,2,3\} \tag{81a}\] \[\int_{E_{(ij)^{\prime}}}\mathbf{\phi}_{ij}\cdot\mathbf{t}_{E} =\delta_{(ij)^{\prime}}^{ij} \forall(ij),(ij)^{\prime}\in\{23,13,12,03,02,01\}\] (81b) \[\int_{F_{(ijk)^{\prime}}}\mathbf{\phi}_{ijk}\cdot\mathbf{n}_{F} =\delta_{(ijk)^{\prime}}^{ijk} \forall(ijk),(ijk)^{\prime}\in\{123,023,013,012\}, \tag{81c}\] _where \(\delta_{a}^{b}=1\) if \(a=b\), \(\delta_{a}^{b}=0\) otherwise._ Proof.: The proof of (81a) readily follows from the orthogonality of the cross product. Let us check (81b) for \(\mathbf{\phi}_{23}\), the other being similar. For \((ij)\in\{23,13,12,03,02,01\}\), we have \[\int_{E_{ij}}\mathbf{\phi}_{23}\cdot\mathbf{t}_{E}=\int_{t=0}^{1}\!\!\frac{t\left[(\bm {x}_{V_{1}}-\mathbf{x}_{V_{0}})\times(\mathbf{x}_{\mathbf{V_{T}}}-\mathbf{x}_{V_{T}})\right] \cdot\left(\mathbf{x}_{V_{I}}-\mathbf{x}_{V_{I}}\right)+\left[(\mathbf{x}_{V_{1}}-\mathbf{x}_{ V_{0}})\times(\mathbf{x}_{V_{I}}-\mathbf{x}_{V_{0}})\right]\cdot(\mathbf{x}_{V_{J}}-\mathbf{x}_{V_{I}})}{ \det\left(\mathbf{x}_{V_{1}}-\mathbf{x}_{V_{0}},\mathbf{x}_{V_{2}}-\mathbf{x}_{V_{0}},\mathbf{x}_ {V_{3}}-\mathbf{x}_{V_{2}}\right)}.\] If \(i=0\) or \(i=1\), then \((\mathbf{x}_{V_{1}}-\mathbf{x}_{V_{0}})\times(\mathbf{x}_{V_{I}}-\mathbf{x}_{V_{0}})=0\). If \(i=2\) and \(j=3\), we can use the vector triple product to write \((\mathbf{x}_{V_{1}}-\mathbf{x}_{V_{0}})\times(\mathbf{x}_{V_{2}}-\mathbf{x}_{V_{0}})\cdot(\mathbf{ x}_{V_{3}}-\mathbf{x}_{V_{2}})=\det\left(\mathbf{x}_{V_{1}}-\mathbf{x}_{V_{0}},\mathbf{x}_{V_{2}}- \mathbf{x}_{V_{0}},\mathbf{x}_{V_{3}}-\mathbf{x}_{V_{2}}\right)\), so that \(\int_{E_{23}}\mathbf{\phi}_{23}\cdot\mathbf{t}_{E}=1\). Finally, let us check (81c) for \(\mathbf{\phi}_{123}\). 
Using the change of variable induced by \(\boldsymbol{\psi}:(\lambda_{j},\lambda_{k})\mapsto\lambda_{j}(\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}})+\lambda_{k}(\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}})+\boldsymbol{x}_{V_{i}}\), along with the fact that \(\boldsymbol{n}_{F}=\frac{(\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}})\times(\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}})}{2|F_{ijk}|}\), we have
\[\int_{F_{ijk}}\boldsymbol{\phi}_{123}\cdot\boldsymbol{n}_{F}\]
\[\quad=\int_{\lambda_{j}=0}^{1}\int_{\lambda_{k}=0}^{1-\lambda_{j}}2\,\frac{(\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{0}})+\lambda_{j}(\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}})+\lambda_{k}(\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}})}{\det\left(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}}\right)}\cdot\left[(\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}})\times(\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}})\right]\]
\[\quad=2\int_{\lambda_{j}=0}^{1}\int_{\lambda_{k}=0}^{1-\lambda_{j}}\frac{(\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{0}})\cdot\left[(\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}})\times(\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}})\right]}{\det\left(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}}\right)}\]
\[\quad=\frac{\det\left(\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}},\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}},\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{0}}\right)}{\det\left(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}}\right)}=\delta_{123}^{ijk}.\qed\]

**Lemma 13** (Norm of the basis functions).: _The functions given by (77)-(80) have the following \(L^{2}\)-norms: For all \(i\in\{0,1,2,3\}\), all \((ij)\in\{23,13,12,03,02,01\}\), and all \((ijk)\in\{123,023,013,012\}\),_
\[\|\phi_{i}\|_{L^{2}(S)}^{2}=\frac{|S|}{10}\simeq h_{S}^{3}, \tag{82}\]
\[\|\boldsymbol{\phi}_{ij}\|_{L^{2}(S;\mathbb{R}^{3})}^{2}=\frac{|F_{kli}|^{2}+|F_{klj}|^{2}+c_{kl}|F_{kli}||F_{klj}|}{90|S|}\simeq h_{S}, \tag{83}\]
\[\|\boldsymbol{\phi}_{ijk}\|_{L^{2}(S;\mathbb{R}^{3})}^{2}=\frac{|E_{li}|^{2}+|E_{lj}|^{2}+|E_{lk}|^{2}+2|F_{lij}|+2|F_{lik}|+2|F_{ljk}|}{180|S|}\simeq h_{S}^{-1}, \tag{84}\]
\[\|\phi_{0123}\|_{L^{2}(S)}^{2}=\frac{1}{|S|}\simeq h_{S}^{-3}, \tag{85}\]
_where, for \(\{i,j,k,l\}=\{0,1,2,3\}\), \(c_{kl}=\boldsymbol{n}_{kli}\cdot\boldsymbol{n}_{klj}\) is the cosine of the dihedral angle associated to the edge \(E_{kl}\)._

Proof.: We will only show the computation for one function of each space, the others being similar. In order to integrate over the simplex \(S\), we consider the change of variable induced by \(\boldsymbol{\psi}:(\lambda_{1},\lambda_{2},\lambda_{3})\mapsto\lambda_{1}\boldsymbol{x}_{V_{1}}+\lambda_{2}\boldsymbol{x}_{V_{2}}+\lambda_{3}\boldsymbol{x}_{V_{3}}+(1-\lambda_{1}-\lambda_{2}-\lambda_{3})\boldsymbol{x}_{V_{0}}\). Notice that \(|\det D\boldsymbol{\psi}|=6|S|\). Let us first consider the family given by (77). 
Using the orthogonality of the cross product, and the identity \((\boldsymbol{a}\times\boldsymbol{b})\cdot\boldsymbol{c}=\det(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c})\), we notice that
\[\phi_{3}(\boldsymbol{\psi})=\lambda_{3}\frac{\left[(\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}})\times(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{0}})\right]\cdot(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{0}})}{\det\left(\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}},\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{0}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{0}}\right)}=\lambda_{3}.\]
Hence, we have
\[\int_{S}\phi_{3}^{2}=\int_{\lambda_{1}=0}^{1}\int_{\lambda_{2}=0}^{1-\lambda_{1}}\int_{\lambda_{3}=0}^{1-\lambda_{1}-\lambda_{2}}\lambda_{3}^{2}\,6|S|=\frac{|S|}{10}.\]
Then, we proceed with the family (78). We have
\[\boldsymbol{\phi}_{23}(\boldsymbol{\psi})=\frac{\lambda_{2}(\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}})\times(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{0}})+\lambda_{3}(\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}})\times(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{0}})}{\det\left(\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}},\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{0}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{2}}\right)}=\frac{1}{6|S|}\left(2\lambda_{2}|F_{012}|\boldsymbol{n}_{012}+2\lambda_{3}|F_{013}|\boldsymbol{n}_{013}\right).\]
Expanding the product, we obtain
\[\int_{S}\boldsymbol{\phi}_{23}\cdot\boldsymbol{\phi}_{23}=\int_{\lambda_{1}=0}^{1}\int_{\lambda_{2}=0}^{1-\lambda_{1}}\int_{\lambda_{3}=0}^{1-\lambda_{1}-\lambda_{2}}\left(\frac{1}{3|S|}\right)^{2}\left(\lambda_{2}^{2}|F_{012}|^{2}+\lambda_{3}^{2}|F_{013}|^{2}+2\lambda_{2}\lambda_{3}c_{01}|F_{012}||F_{013}|\right)6|S|=\frac{2}{3|S|}\,\frac{|F_{012}|^{2}+|F_{013}|^{2}+c_{01}|F_{012}||F_{013}|}{60}.\]
Finally, we prove that (84) holds for \(\boldsymbol{\phi}_{123}\) given by (79). We have
\[\boldsymbol{\phi}_{123}(\boldsymbol{\psi})=2\,\frac{\lambda_{1}(\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}})+\lambda_{2}(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{0}})+\lambda_{3}(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{0}})}{\det\left(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{1}}-\boldsymbol{x}_{V_{0}}\right)}.\]
Noticing that \((\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{0}})\cdot(\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{0}})=2|F_{0ij}|\), we have
\[\int_{S}\boldsymbol{\phi}_{123}\cdot\boldsymbol{\phi}_{123}=\int_{\lambda_{1}=0}^{1}\int_{\lambda_{2}=0}^{1-\lambda_{1}}\int_{\lambda_{3}=0}^{1-\lambda_{1}-\lambda_{2}}\left(\frac{2}{6|S|}\right)^{2}\left(\lambda_{1}^{2}|E_{01}|^{2}+\lambda_{2}^{2}|E_{02}|^{2}+\lambda_{3}^{2}|E_{03}|^{2}+4\lambda_{1}\lambda_{2}|F_{012}|+4\lambda_{1}\lambda_{3}|F_{013}|+4\lambda_{2}\lambda_{3}|F_{023}|\right)6|S|\]
\[=\frac{1}{3|S|}\,\frac{|E_{01}|^{2}+|E_{02}|^{2}+|E_{03}|^{2}+2|F_{012}|+2|F_{013}|+2|F_{023}|}{60}.\qed\]

**Lemma 14** (Link with the differential operators).: _For every face \(F\) of \(S\), we define \(\omega_{SF}\in\{-1,1\}\) such that \(\omega_{SF}\boldsymbol{n}_{F}\) is outward pointing. Then, the following identities hold:_
\[\operatorname{\mathbf{grad}}\phi_{i}=\sum_{j<i}\boldsymbol{\phi}_{ji}-\sum_{j>i}\boldsymbol{\phi}_{ij}\qquad\forall i\in\{0,1,2,3\}, \tag{86}\]
\[\operatorname{\mathbf{curl}}\boldsymbol{\phi}_{ij}=\omega_{SF_{\hat{k}}}\omega_{F_{\hat{k}}E_{ij}}\boldsymbol{\phi}_{\hat{k}}+\omega_{SF_{\hat{l}}}\omega_{F_{\hat{l}}E_{ij}}\boldsymbol{\phi}_{\hat{l}}\qquad\forall(ij)\in\{23,13,12,03,02,01\}, \tag{87}\]
\[\operatorname{div}\boldsymbol{\phi}_{ijk}=\omega_{SF_{ijk}}\phi_{0123}\qquad\forall(ijk)\in\{123,023,013,012\}, \tag{88}\]
_where \(\hat{i}\) denotes the complementary of \(i\) in \(\{0,1,2,3\}\) and, in (87), \((kl)\) is such that \(\{i,j,k,l\}=\{0,1,2,3\}\) and \(k<l\)._

Proof.: First, let us prove (88). 
Noticing that \(\operatorname{div}\boldsymbol{x}=3\), it only remains to check that
\[\operatorname{sgn}\det\big{(}\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}},\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}},\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{l}}\big{)}=\omega_{SF_{ijk}}. \tag{89}\]
This holds, since \((\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}})\times(\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{i}})=2|F_{ijk}|\boldsymbol{n}_{F}\), and \(\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{l}}\) is always outward pointing. Then, to prove (87), we use the identity \(\operatorname{\mathbf{curl}}(\boldsymbol{A}\times\boldsymbol{B})=\operatorname{\mathbf{div}}(\boldsymbol{B}\otimes\boldsymbol{A}^{\top}-\boldsymbol{A}\otimes\boldsymbol{B}^{\top})\) in (78) to write
\[\operatorname{\mathbf{curl}}\boldsymbol{\phi}_{ij}=\frac{3(\boldsymbol{x}_{V_{l}}-\boldsymbol{x}_{V_{k}})-(\boldsymbol{x}_{V_{l}}-\boldsymbol{x}_{V_{k}})}{\det\big{(}\boldsymbol{x}_{V_{l}}-\boldsymbol{x}_{V_{k}},\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{k}},\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}}\big{)}}=\frac{2(\boldsymbol{x}_{V_{l}}-\boldsymbol{x}_{V_{k}})}{\det\big{(}\boldsymbol{x}_{V_{l}}-\boldsymbol{x}_{V_{k}},\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{k}},\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}}\big{)}}, \tag{90}\]
where \((kl)\) is such that \(\{i,j,k,l\}=\{0,1,2,3\}\) and \(k<l\). Noticing that \(\boldsymbol{x}_{V_{l}}-\boldsymbol{x}_{V_{k}}\) points inward with respect to the face \(F_{\hat{k}}\), and that \(\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{k}}\) points outward in the plane of \(F_{\hat{k}}\), we have
\[\det\big{(}\boldsymbol{x}_{V_{l}}-\boldsymbol{x}_{V_{k}},\boldsymbol{x}_{V_{i}}-\boldsymbol{x}_{V_{k}},\boldsymbol{x}_{V_{j}}-\boldsymbol{x}_{V_{i}}\big{)}=6|S|\,\omega_{SF_{\hat{k}}}\omega_{F_{\hat{k}}E_{ij}}=-6|S|\,\omega_{SF_{\hat{l}}}\omega_{F_{\hat{l}}E_{ij}}, \tag{91}\]
where we inserted \(\boldsymbol{x}_{V_{k}}-\boldsymbol{x}_{V_{l}}\) in the second argument of the determinant to get the second equality. Inserting \(\boldsymbol{x}-\boldsymbol{x}\) in the numerator of (90) and replacing the denominator according to (91), we obtain
\[\operatorname{\mathbf{curl}}\boldsymbol{\phi}_{ij}=\frac{1}{6|S|}\left(\omega_{SF_{\hat{k}}}\omega_{F_{\hat{k}}E_{ij}}\,2(\boldsymbol{x}-\boldsymbol{x}_{V_{k}})+\omega_{SF_{\hat{l}}}\omega_{F_{\hat{l}}E_{ij}}\,2(\boldsymbol{x}-\boldsymbol{x}_{V_{l}})\right).\]
We infer (87) recalling (89). Finally, let us prove (86). We only prove the equality for \(\phi_{0}\), the other three being similar. By the assumption on the basis, we have \(\det\big{(}\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}},\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{0}}\big{)}=6|S|\). 
Then, a direct computation shows that
\[\operatorname{\mathbf{grad}}\phi_{0}=-\frac{(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}})\times(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}})}{6|S|}=-\frac{(\boldsymbol{x}_{V_{2}}-\boldsymbol{x})\times(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{2}})+(\boldsymbol{x}_{V_{2}}-\boldsymbol{x})\times(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}})+(\boldsymbol{x}-\boldsymbol{x}_{V_{1}})\times(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}})}{6|S|}.\]
Noticing that
\[\boldsymbol{\phi}_{03}(\boldsymbol{x})=\frac{(\boldsymbol{x}_{V_{2}}-\boldsymbol{x}_{V_{1}})\times(\boldsymbol{x}-\boldsymbol{x}_{V_{1}})}{6|S|},\quad\boldsymbol{\phi}_{02}(\boldsymbol{x})=-\frac{(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{1}})\times(\boldsymbol{x}-\boldsymbol{x}_{V_{1}})}{6|S|},\quad\boldsymbol{\phi}_{01}(\boldsymbol{x})=\frac{(\boldsymbol{x}_{V_{3}}-\boldsymbol{x}_{V_{2}})\times(\boldsymbol{x}-\boldsymbol{x}_{V_{2}})}{6|S|},\]
we infer that \(\operatorname{\mathbf{grad}}\phi_{0}=-\boldsymbol{\phi}_{01}-\boldsymbol{\phi}_{02}-\boldsymbol{\phi}_{03}\).\(\qed\)

## Acknowledgements

Daniele Di Pietro acknowledges the partial support of _Agence Nationale de la Recherche_ through the grant "HIPOTHEC". Both authors acknowledge the partial support of _Agence Nationale de la Recherche_ through the grant ANR-16-IDEX-0006 "RHAMNUS".
2310.00309
An Adaptation of the AAA-Interpolation Algorithm for Model Reduction of MIMO Systems
We consider the Adaptive Antoulas-Anderson (AAA) rational interpolation algorithm recently developed by Trefethen and co-authors, which can be viewed as a type of moment-matching technique for system realization and approximation. We consider variations on this algorithm that are suitable for model reduction of linear time invariant systems while addressing some of the shortcomings of the block-AAA variant of the algorithm for MIMO systems. In particular, we develop state-space formulas and keep track of the state-space dimension at every step of the adaptive block-AAA algorithm, showing an unfavorable increase of the state dimension. We propose a new low-rank adaptive interpolation algorithm that addresses this shortcoming. Comparative computational results are included for the algorithms above, together with comparisons to balanced reduction.
Jared Jonas, Bassam Bamieh
2023-09-30T08:46:20Z
http://arxiv.org/abs/2310.00309v1
# An Adaptation of the AAA-Interpolation Algorithm for Model Reduction of MIMO Systems ###### Abstract We consider the Adaptive Antoulas-Anderson (AAA) rational interpolation algorithm recently developed by Trefethen and co-authors, which can be viewed as a type of moment-matching technique for system realization and approximation. We consider variations on this algorithm that are suitable for model reduction of linear time invariant systems while addressing some of the shortcomings of the block-AAA variant of the algorithm for MIMO systems. In particular, we develop state-space formulas and keep track of the state-space dimension at every step of the adaptive block-AAA algorithm, showing an unfavorable increase of the state dimension. We propose a new low-rank adaptive interpolation algorithm that addresses this shortcoming. Comparative computational results are included for the algorithms above, together with comparisons to balanced reduction. ## I Introduction Model order reduction is an important tool in the analysis, simulation, and control of large-scale systems [1, 2], and is particularly relevant for control applications in, for example, fluid and structural mechanics [3, 4]. In the context of linear dynamic systems, model reduction algorithms aim to produce a state-space model with fewer states that approximates the dynamics of the original system. Amongst several model-reduction techniques, moment matching constructs a reduced-order model that matches the original model's moments at a given set of points [5]. This can be interpreted as creating a rational interpolant whose value (or some derivative) matches the original transfer function at that point. Moment matching and interpolation problems are therefore intimately linked. Building on the original rational interpolation results of Antoulas and Anderson [6], which use a pre-specified set of interpolation points, Trefethen et al. [7] developed an algorithm they termed Adaptive Antoulas-Anderson (AAA). This algorithm uses a barycentric interpolation formula [8], and "adaptively" picks points in the complex plane at which a scalar-valued function is interpolated based on a maximum error criterion. The algorithm yields a rational approximant to a given complex function. Its main advantage is the automated selection of interpolation points, and it has several other interesting features as discussed in [7]. Subsequently, a matrix-valued version of the algorithm, termed block-AAA [9], was developed. This algorithm interpolates the _matrix value_ of a given function at certain points that are also adaptively selected according to a maximum error criterion. Since their introduction, AAA and related algorithms have been used in a systems context for model-order reduction and also in system identification. Such "data-driven" rational approximations have been used in parametric dynamical systems [10], and in quadratic-output systems [11]. More recently, they have been used in a model-order reduction scheme [12] with a two-step method utilizing both block-AAA on a discrete set of points and Hankel norm approximation. In this paper, we propose new variants of the AAA algorithm for the purpose of model reduction of high-order LTI systems. We give state-space formulas for realizations of interpolants with real parameters. We also replace the discretized maximum criterion employed in previous algorithms by a bisection algorithm for computing \(L^{\infty}\) errors on the imaginary axis, which in turn guides the adaptive selection of interpolation points. 
Most importantly, we show that adapting the existing block-AAA algorithm for use on linear systems has undesirable features when used on MIMO systems, especially when the number of outputs is large, in that it leads to a rapid increase in the state dimension of the interpolant compared to other schemes. The requirement of exactly interpolating the full matrix at each point causes this increase in state dimension. We argue that matrix-valued interpolation with lower rank matrices (formed from the significant singular values/vectors at those points) rather than exact interpolation is more effective. With this motivation, we develop an algorithm and demonstrate its effectiveness with numerical examples comparing the proposed algorithms with balanced reduction. We close with a discussion of some open problems in matrix-valued interpolation, and directions for future work. ### _Notation_ We use the notation \[H(s)=C(sI-A)^{-1}B+D=\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right]\] for the transfer function and state space realization of a finite-dimensional Linear Time Invariant (LTI) system. \(\overline{X}\) denotes the complex conjugate (not transpose) of a matrix \(X\), and \(X^{*}\) denotes the complex-conjugate transpose of \(X\). ## II System-AAA The block-AAA algorithm [9] is an iterative algorithm that starts with a given matrix-valued function \(G(.)\) (of possibly high order), and builds up a matrix-valued rational function approximation at step \(r\) of the form \[R_{r}(z) =\left(\sum_{k=1}^{r}\frac{W_{k}}{z-z_{k}}\right)^{-1}\left(\sum_{k=1}^{r}\frac{W_{k}G(z_{k})}{z-z_{k}}\right) \tag{1}\] \[=:M_{r}^{-1}(z)\ N_{r}(z).\] This particular form ensures that \(R_{r}\) interpolates \(G\) exactly at the so-called support points \(\{z_{k}\}\) in the sense that \(R_{r}(z_{k})=G(z_{k})\) as matrices. The weight matrices \(\{W_{k}\}\) are free parameters chosen to minimize some measure (usually a least squares criterion) of error between \(R_{r}\) and \(G\) over (typically a large number of) points in the domain \(\Omega\). The next support point \(z_{r+1}\in\Omega\subset\mathbb{C}\) is chosen where the following error criterion is maximized \[z_{r+1}=\arg\max_{z\in\Omega}\left\|R_{r}(z)-G(z)\right\|. \tag{2}\] The rationale is that since interpolation is exact at the support points, this error will be most reduced by this choice at the next iteration. The block-AAA algorithm presented in [9] produces approximations that have complex coefficients, and only evaluates the least squares error and solves the problem (2) numerically over a large grid of points in \(\Omega\). In this section, we propose a variant we call system-AAA, which works directly with state-space realizations with real matrices, performs the support point selection step (2) using a bisection algorithm (similar to those for computing \(H^{\infty}\) norms), and selects the matrix weights \(\{W_{k}\}\) using a solution of the least squares problem without gridding. The solution of this last problem involves computing Gramians of systems and finding eigenvectors of matrices related to them. Thus gridding of the domain \(\Omega\) is completely avoided. Algorithm 1 loosely follows MATLAB notation. The subscripts \(A\), \(B\), \(C\), and \(D\) denote the corresponding state-space matrix for the system. The following subsections detail the derivation of the algorithm and its connections to AAA. 
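To make the structure of (1) concrete, the following minimal sketch (our own illustration, not code from [7] or [9]; the function name and data layout are hypothetical) evaluates \(R_{r}(z)=M_{r}^{-1}(z)N_{r}(z)\) from lists of support points, weight matrices, and samples of \(G\):

```python
import numpy as np

def eval_block_aaa(z, support_pts, weights, samples):
    """Evaluate the block-AAA interpolant R_r(z) = M(z)^{-1} N(z) of Eq. (1).

    support_pts: support points z_k; weights: (p, p) matrices W_k;
    samples: (p, q) matrices G(z_k). Assumes z is not a support point.
    """
    p, q = samples[0].shape
    M = np.zeros((p, p), dtype=complex)
    N = np.zeros((p, q), dtype=complex)
    for z_k, W_k, G_k in zip(support_pts, weights, samples):
        M += W_k / (z - z_k)
        N += (W_k @ G_k) / (z - z_k)
    return np.linalg.solve(M, N)  # M(z)^{-1} N(z)
```

By construction, as \(z\to z_{k}\) the \(k\)-th terms dominate both sums, which is why \(R_{r}(z_{k})=G(z_{k})\) whenever \(W_{k}\) is invertible.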
The first subsection takes the block-AAA interpolating function as a starting point and derives a new interpolating function that interpolates at \(s=j\infty\), guarantees real coefficients, and gives its associated transfer functions. The second subsection details the transformation of the block-AAA algorithm into a state-space context. The third details the derivation of the state-space representation of the interpolation function. Finally, the fourth presents computational results for the algorithm.

```
Require: G(s) in state space form
k ← 0
R ← ss(G_D)
NM ← ss()
repeat
  ω_k ← frequency at which hinfnorm(G − R) is attained
  G_k = G(jω_k),  G_{k,r} = real(G_k),  G_{k,i} = imag(G_k)
  if ω_k = 0 then
    NM_k ← [ 0 | G_k  I ]
           [ I | 0    0 ]
  else
    NM_k ← [  0       ω_k I | G_{k,r}   I ]
           [ −ω_k I   0     | −G_{k,i}  0 ]
           [  I       0     | 0         0 ]
           [  0       I     | 0         0 ]
  end if
  NM ← [NM; NM_k]
  H ← minreal(NM [I; −G])
  X ← H_C lyap(H_A, H_B H_B^*) H_C^*
  Construct 𝕎 using Theorem II.1
  B_1 ← NM_B(:, 1:q)
  B_2 ← NM_B(:, q+1:end)
  R ← [ NM_A − B_2 Ŵ | B_2 G_D − B_1 ]
      [ −Ŵ           | G_D           ]
  k ← k + 1
until R approximates G sufficiently
return R
```
**Algorithm 1** System-AAA

### _Interpolation function_

Consider the multi-input, multi-output (MIMO) system \[G=\left[\begin{array}{c|c}A&B\\ \hline C&D\end{array}\right],\] where \(A\in\mathbb{R}^{n\times n}\), \(B\in\mathbb{R}^{n\times q}\), \(C\in\mathbb{R}^{p\times n}\), and \(D\in\mathbb{R}^{p\times q}\). We choose support points that always lie on the imaginary axis; thus, Equation (1) becomes \[R_{r}(s)=\left(\sum_{k=1}^{r}\frac{W_{k}}{s-j\omega_{k}}\right)^{-1}\left(\sum_{k=1}^{r}\frac{W_{k}G(j\omega_{k})}{s-j\omega_{k}}\right). \tag{3}\] **Remark II.1**: _The interpolating function (3) guarantees that \(R_{r}(j\omega_{i})=G(j\omega_{i})\) for any support point \(\omega_{i}\), \(1\leq i\leq r\), provided that each \(W_{k}\) is invertible._ Multiplying by \(\frac{s-j\omega_{i}}{s-j\omega_{i}}\) yields \[R_{r}(s)=\left(W_{i}+\sum_{i\neq k=1}^{r}\frac{(s-j\omega_{i})W_{k}}{s-j\omega_{k}}\right)^{-1}\left(W_{i}G(j\omega_{i})+\sum_{i\neq k=1}^{r}\frac{(s-j\omega_{i})W_{k}G(j\omega_{k})}{s-j\omega_{k}}\right)\] \[\therefore R_{r}(j\omega_{i})=W_{i}^{-1}W_{i}G(j\omega_{i})=G(j\omega_{i}).\] From here, we begin to address the issues that were outlined above. The algorithm needs the ability to interpolate at \(s=j\infty\). We therefore rewrite the interpolation in a more general form, yielding \[R_{\ell}(s)=M(s)^{-1}N(s), \tag{4a}\] where \[M(s)=\mathcal{W}_{0}+\sum_{k=1}^{\ell}\mathcal{W}_{k}M_{k}(s), \tag{4b}\] \[N(s)=\mathcal{W}_{0}D+\sum_{k=1}^{\ell}\mathcal{W}_{k}N_{k}(s), \tag{4c}\] and \(M(s)\in\mathbb{C}^{p\times p}\), \(N(s)\in\mathbb{C}^{p\times q}\). 
All of the weights in \(M(s)\) and \(N(s)\) can be factored out to the left, meaning \(M\) and \(N\) can be written as \[N(s)=\mathbb{W}\mathcal{N}(s),\quad M(s)=\mathbb{W}\mathcal{M}(s),\] where \[\mathbb{W}=\begin{bmatrix}\mathcal{W}_{0}&\mathcal{W}_{1}&\cdots&\mathcal{W}_{\ell}\end{bmatrix},\qquad\mathcal{N}(s)=\begin{bmatrix}D\\ N_{1}(s)\\ \vdots\\ N_{\ell}(s)\end{bmatrix},\qquad\mathcal{M}(s)=\begin{bmatrix}I\\ M_{1}(s)\\ \vdots\\ M_{\ell}(s)\end{bmatrix}.\] Note that \(\mathcal{W}_{0}\in\mathbb{R}^{p\times p}\). Depending on the location of the support point, the size of \(M_{k}(s)\) or \(N_{k}(s)\) can change. To ensure that the resulting interpolating function has real coefficients, it must satisfy \(R_{\ell}(j\omega)=\overline{R}_{\ell}(-j\omega)\) for any \(\omega\in\mathbb{R}\). This may be accomplished by adding pairs of complex conjugate support points with conjugate weights. Starting with \(M\), \[\mathcal{W}_{k}M_{k}(s)=\frac{W_{k,1}}{s-j\omega_{k}}+\frac{W_{k,2}}{s+j\omega_{k}}.\] Assuming \(W_{k,1}=W_{k}\) and \(W_{k,2}=\overline{W}_{k}\), \[\mathcal{W}_{k}M_{k}(s)=\frac{2s\Re(W_{k})-2\omega_{k}\Im(W_{k})}{s^{2}+\omega_{k}^{2}}.\] Therefore \[\mathcal{W}_{k}=2\left[\Re(W_{k})\quad\Im(W_{k})\right],\qquad M_{k}(s)=\begin{bmatrix}\frac{s}{s^{2}+\omega_{k}^{2}}\\ -\frac{\omega_{k}}{s^{2}+\omega_{k}^{2}}\end{bmatrix}. \tag{5}\] Similarly for \(N\), \[\mathcal{W}_{k}N_{k}(s)=\frac{W_{k,1}G(j\omega_{k})}{s-j\omega_{k}}+\frac{W_{k,2}\overline{G}(j\omega_{k})}{s+j\omega_{k}},\] \[\therefore N_{k}(s)=\begin{bmatrix}\frac{\Re(G(j\omega_{k}))s-\Im(G(j\omega_{k}))\omega_{k}}{s^{2}+\omega_{k}^{2}}\\ -\frac{\Im(G(j\omega_{k}))s+\Re(G(j\omega_{k}))\omega_{k}}{s^{2}+\omega_{k}^{2}}\end{bmatrix}. \tag{6}\] In this case \(\mathcal{W}_{k}\in\mathbb{R}^{p\times 2p}\), \(M_{k}(s)\in\mathbb{C}^{2p\times p}\), and \(N_{k}(s)\in\mathbb{C}^{2p\times q}\). When \(\omega_{k}=0\), the first-order system is already real, so there is no need to add an additional complex conjugate support point. In this case, \[M_{k}(s)=\frac{I}{s},\quad N_{k}(s)=\frac{G(0)}{s},\quad\mathcal{W}_{k}=W_{k}, \tag{7}\] and \(\mathcal{W}_{k}\in\mathbb{R}^{p\times p}\), \(M_{k}(s)\in\mathbb{C}^{p\times p}\), and \(N_{k}(s)\in\mathbb{C}^{p\times q}\). ### _Algorithm reformulation_ Each step of the AAA algorithm is composed of two main parts, the first being the selection of the new support point at the beginning of each iteration. The second is the selection of the weight matrices from an optimization problem that minimizes the approximation error between the interpolating function and the input function. In this section we show that these parts can be reformulated to remove the necessity of a user-defined domain and to better utilize systems machinery. The next support point is chosen at the point in the domain where the error between \(R_{r}(z)\) and \(G(z)\) is largest. The domain in this case is the imaginary line, so the next support point will be at the frequency where the \(\mathcal{H}_{\infty}\) norm occurs, i.e. \[\omega_{\ell}=\arg\max_{\omega\in\mathbb{R}_{\geq 0}}\left\|G(j\omega)-R_{\ell-1}(j\omega)\right\|_{2}. \tag{8}\] This can be efficiently calculated with a bisection algorithm [13]. After a support point is selected, the weights in the interpolating function are selected via an optimization problem. The optimization problem in block-AAA is the following: \[\min_{\mathbb{W}}\sum_{z\in\Omega}\left\|N(z)-M(z)G(z)\right\|_{F}^{2}\quad\text{s.t. }\left\|\mathbb{W}\right\|_{F}=1.\]
Since our analysis is in continuous time, the sum is replaced with an integral over the positive imaginary axis, yielding \[\mathbb{W}=\arg\min_{\mathbb{W}}\int_{0}^{\infty}\left\|\mathbb{W}\left(\mathcal{N}(j\omega)-\mathcal{M}(j\omega)G(j\omega)\right)\right\|_{F}^{2}\mathrm{d}\omega.\] Letting \(H(s)=\mathcal{N}(s)-\mathcal{M}(s)G(s)\), this becomes \[\mathbb{W}=\arg\min_{\mathbb{W}}\int_{0}^{\infty}\operatorname{tr}\left(\mathbb{W}H(j\omega)H^{*}(j\omega)\mathbb{W}^{*}\right)\mathrm{d}\omega=\arg\min_{\mathbb{W}}\operatorname{tr}\left(\mathbb{W}X\mathbb{W}^{*}\right),\] where \[X=\int_{0}^{\infty}H(j\omega)H^{*}(j\omega)\mathrm{d}\omega=\hat{C}G_{C}\hat{C}^{*}, \tag{9}\] and \(G_{C}\) is the controllability Gramian for \(H\). Note that \(H\) can be written as a product of two augmented systems, \[H=\begin{bmatrix}\mathcal{N}&\mathcal{M}\end{bmatrix}\begin{bmatrix}I\\ -G\end{bmatrix}, \tag{10}\] and the positive matrix \(G_{C}\) can be found via the Lyapunov equation [14, p. 112] \[\hat{A}G_{C}+G_{C}\hat{A}^{\mathsf{T}}=-\hat{B}\hat{B}^{*}, \tag{11}\] where \(\hat{A}\), \(\hat{B}\), and \(\hat{C}\) are the corresponding state space matrices of \(H\). **Remark II.2**: _In order to guarantee existence and uniqueness of \(G_{C}\), the system \(H\) must not have any marginally stable poles, i.e. \(\hat{A}\) must not have any eigenvalues on the imaginary axis. However, this system has poles at \(\pm j\omega_{k}\) for all support points \(\omega_{k}\). It can be shown that there is a pole-zero cancellation for all of these poles, thus finding a minimal realization of \(H\) will suffice to find \(G_{C}\)._ The constraint \(\left\|\mathbb{W}\right\|_{F}=1\) is modified to \(\mathbb{W}\mathbb{W}^{*}=I\) in the new problem to guarantee that \(\mathbb{W}\) has full row rank. Therefore the optimization becomes: \[\mathbb{W}=\arg\min_{\mathbb{W}}\operatorname{tr}\left(\mathbb{W}X\mathbb{W}^{*}\right),\quad\text{ s.t. }\mathbb{W}\mathbb{W}^{*}=I. \tag{12}\] The closed form for (12) may be found by finding stationary points. A necessary condition for optimality is the following: \[\mathbb{W}X-\Lambda^{*}\mathbb{W}=0. \tag{13}\] **Theorem II.1**: _A solution for equation (13) subject to \(\mathbb{W}\mathbb{W}^{*}=I\) is \(\mathbb{W}=QV^{*}\), where \(Q\) is an arbitrary real unitary matrix and the columns of \(V\) are the eigenvectors corresponding to the \(p\) smallest distinct non-zero eigenvalues of \(X\)._ **Remark II.3**: _One \(A_{k}\) matrix is appended to \(\mathcal{A}\) at every iteration, showing that the system grows by a multiple of \(p\) states each time._ ### _Computational results_ In this section, we discuss some numerical results where we compare our algorithm with a baseline, i.e. balanced reduction. Balanced reduction is a standard algorithm for model reduction on LTI systems [14]. We use as a test case a 270-state, 3-input, 3-output stable dynamic system modeling the dynamics of a module on the International Space Station (ISS) [15]. The figures each show a plot of the maximum singular value of the frequency response for the reduced-order systems, and the absolute error between the reduced-order systems and the full system. The model used in figure 1 is a 28-state approximation of the ISS system's first output only, while figure 2 shows the approximation on the full system. The ISS system was reduced using both system-AAA and balanced reduction, and the figures demonstrate the difference in approximation error. 
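For concreteness, the weight-selection step of (9), (11), (12), and Theorem II.1 can be sketched as follows. This is our own illustrative reading: the names `A_hat`, `B_hat`, `C_hat` stand for a minimal realization of \(H\), we take \(Q=I\), and we simply keep the \(p\) smallest eigenvalues, ignoring the distinctness caveat of the theorem.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def select_weights(A_hat, B_hat, C_hat, p):
    # Controllability Gramian from Eq. (11): A G_C + G_C A^T = -B B^*.
    G_C = solve_continuous_lyapunov(A_hat, -B_hat @ B_hat.conj().T)
    X = C_hat @ G_C @ C_hat.conj().T          # Eq. (9)
    # X is Hermitian positive semidefinite; eigh returns ascending eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(X)
    V = eigvecs[:, :p]                        # eigenvectors of the p smallest eigenvalues
    return V.conj().T                         # W = Q V^* with Q = I
```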
Figure 1 shows that the algorithm generated a stable and well-matched approximation to the system with comparable error to that of standard balanced reduction when used on a single-output system. Figure 2 shows the result when the algorithm is used on a MIMO system after two iterations. In this case, the algorithm selected two support points at \(\omega=0\) and \(\omega\approx 0.8\) Hz. Balanced reduction is able to stably replicate the dynamics at 4 peaks, while system-AAA only mirrors one peak and has unstable poles. This effect becomes more pronounced as more outputs are added. As stated in Remark II.3, the number of states added per iteration is proportional to the number of outputs, suggesting that an improved algorithm would not have this dependence. As mentioned in Remark II.1, invertible \(W_{k}\) matrices are required for interpolation. Numerous numerical simulations demonstrate that these \(W_{k}\) matrices are well-conditioned. These observations motivate us to propose a different algorithm as stated in the next section. ## III Low-rank approximation Though the performance of system-AAA is satisfactory with single-output systems, the results indicate that the performance degrades as the number of outputs increases. In order to rectify this, we investigated a slight change to remove the system's size dependence on the number of outputs with an algorithm we shall call low-rank approximation. ### _Interpolation function_ With low-rank approximation, we allow the approximating system to be non-full-rank at the support points. This ensures that the interpolation function will grow by one state when a new support point is added. Consider the following approximating function \[R_{r}(s)=\left(\sum_{k=1}^{r}\frac{W_{k}U_{k}^{*}}{s-j\omega_{k}}\right)^{\dagger}\left(\sum_{k=1}^{r}\frac{W_{k}\Sigma_{k}V_{k}^{*}}{s-j\omega_{k}}\right), \tag{20}\] where \(U_{k}\Sigma_{k}V_{k}^{*}\approx G(j\omega_{k})\) is a rank-\(r_{k}\) approximation. Let \(U_{k}\in\mathbb{C}^{p\times r_{k}}\), \(V_{k}\in\mathbb{C}^{q\times r_{k}}\), and \(\Sigma_{k}\in\mathbb{R}^{r_{k}\times r_{k}}\), and assume that \(U_{k}\) and \(V_{k}\) have orthonormal columns and that \(\Sigma_{k}\) is diagonal. When \(r_{k}=1\), \(U_{k}\), \(V_{k}\), \(\Sigma_{k}\), and \(W_{k}\) are all rank 1, showing that this approximation function will clearly not fully interpolate at the support point. When \(r_{k}=p\), the approximation is full rank and fully interpolates the corresponding support point. Akin to system-AAA, \(R_{r}(s)\) can be rewritten to yield an \(M(s)\) and \(N(s)\) that are in the same form as (4). Though \(R_{r}(s)\) contains a pseudoinverse, the system inverse \(M^{-1}\) is well-defined as long as \(\mathcal{W}_{0}\) is invertible, as before; thus the pseudoinverse is replaced with \(M^{-1}\). Note that \(U_{k}\in\mathbb{C}^{p\times r_{k}}\) and \(W_{k}\in\mathbb{C}^{p\times r_{k}}\), so their product can only be full rank when \(r_{k}=p\). **Remark III.1**: _If \(p>q\), then when \(r_{k}=p\), \(V_{k}\) is not full rank, so the resulting \(R_{\ell}(j\omega_{k})\) will not be full rank. 
Therefore, we may perform model reduction on the dual of \(G(s)\), i.e._ \[\mathrm{dual}[G](s):=\left[\begin{array}{c|c}A^{\mathsf{T}}&C^{\mathsf{T}}\\ \hline B^{\mathsf{T}}&D^{\mathsf{T}}\end{array}\right].\] _After the reduced model is satisfactory, we may return the dual of the reduced system._ The transfer functions of \(M_{k}\) and \(N_{k}\) are similar to the forms seen in the full interpolation algorithm, except that the \(M_{k}\) systems have an added matrix \(U_{k}^{*}\), making their transfer functions more similar to \(N_{k}\). The forms of \(M_{k}\) and \(N_{k}\) are the following when \(\omega_{k}=0\), \[M_{k}(s)=\frac{U_{k}^{\mathsf{T}}}{s},\quad N_{k}(s)=\frac{\Sigma_{k}V_{k}^{\mathsf{T}}}{s},\quad\mathcal{W}_{k}=W_{k},\] and when \(\omega_{k}\neq 0\), \[M_{k}(s)=\begin{bmatrix}\frac{U_{k,r}^{\mathsf{T}}s+U_{k,i}^{\mathsf{T}}\omega_{k}}{s^{2}+\omega_{k}^{2}}\\ \frac{U_{k,i}^{\mathsf{T}}s-U_{k,r}^{\mathsf{T}}\omega_{k}}{s^{2}+\omega_{k}^{2}}\end{bmatrix},\qquad N_{k}(s)=\begin{bmatrix}\frac{\Sigma_{k}V_{k,r}^{\mathsf{T}}s+\Sigma_{k}V_{k,i}^{\mathsf{T}}\omega_{k}}{s^{2}+\omega_{k}^{2}}\\ \frac{\Sigma_{k}V_{k,i}^{\mathsf{T}}s-\Sigma_{k}V_{k,r}^{\mathsf{T}}\omega_{k}}{s^{2}+\omega_{k}^{2}}\end{bmatrix}.\]

Fig. 1: ISS single-output \(n=28\) reduction.

Fig. 2: ISS \(n=9\) reduction.

The state-space realization for \(\left[\begin{smallmatrix}N_{k}&M_{k}\end{smallmatrix}\right]\) for \(\omega_{k}=0\) is \[\begin{bmatrix}N_{k}&M_{k}\end{bmatrix}=\left[\begin{array}{c|cc}0&\Sigma_{k}V_{k}^{\mathsf{T}}&U_{k}^{\mathsf{T}}\\ \hline I&0&0\end{array}\right],\] and for \(\omega_{k}\neq 0\), \[\begin{bmatrix}N_{k}&M_{k}\end{bmatrix}=\left[\begin{array}{cc|cc}0&\omega_{k}I&\Sigma_{k}V_{k,r}^{\mathsf{T}}&U_{k,r}^{\mathsf{T}}\\ -\omega_{k}I&0&\Sigma_{k}V_{k,i}^{\mathsf{T}}&U_{k,i}^{\mathsf{T}}\\ \hline I&0&0&0\\ 0&I&0&0\end{array}\right].\] Note that \(U_{k,r}\), \(U_{k,i}\) are the real and imaginary parts of \(U_{k}\) respectively, and similarly for \(V_{k,r}\) and \(V_{k,i}\). ### _Algorithm_ The main change between system-AAA and the low-rank approximation algorithm is the modification of the approximating function. This does not affect the majority of the algorithm. However, the rank of the approximation at each support point needs to be addressed. When a new support point is added, it will always start out as a rank-1 approximation, but the algorithm must also consider whether improving the approximation at an existing support point will be more effective. To do this, after a candidate \(\omega_{\ell}\) is selected, it is compared to the previous support points; if it is close to an existing support point, the algorithm instead improves that support point. The minimum distance to an existing support point is then a tunable parameter. ### _Computational Results_ The ISS model was used again as a test for the low-rank approximation algorithm. Like before, the following figures show the maximum singular value plot and its absolute error for various numbers of states in each reduced system. Figures 3 and 4 show the results for the low-rank approximation algorithm. From here it is clear that the dynamics at more peaks are being incorporated compared to full interpolation. In general, the results outperform full interpolation and are close to those of balanced reduction. Figures 5 and 6 show the approximation error as the number of states increases for the two algorithms presented in this paper as well as balanced reduction. 
The \(\mathcal{H}_{\infty}\) norm indicated is the maximum error over the frequency domain, and the \(\mathcal{H}_{2}\) norm written is the error integrated across the domain. More precisely, in this context it has been calculated as: \[\sqrt{\left|\operatorname{tr}\left(\hat{C}P\hat{C}^{*}\right)\right|},\quad\hat{A}P+P\hat{A}^{*}=\hat{B}\hat{B}^{*},\] where \(\hat{A}\), \(\hat{B}\), and \(\hat{C}\) are the state space matrices corresponding to \(G-R_{\ell}\), the system representing the error between the input system and the reduced-order system. The presence of an 'x' indicates that the resulting reduced-order system had unstable poles with that number of states. The first figure shows the error for the (1, 1) channel of the ISS system, while the second figure shows the error for the entire ISS system.

Fig. 3: ISS \(n=8\) reduction.

Fig. 4: ISS \(n=30\) reduction.

Fig. 5: SISO system error as number of states increases.

It is clear to see that in the SISO and MISO case, both algorithms perform well and match the performance of balanced reduction, yielding a stable reduced system. For MIMO systems, the results are much more interesting and indicate a few things. The error for full interpolation may increase as the number of states increases, and it does not always yield a good result for a given number of states. In addition to this, most of the resulting systems contain a number of unstable poles. In comparison to these observations, low-rank approximation matches the performance of balanced reduction up until a certain number of states, at which point the error slightly increases. Low-rank approximation may generate systems with a few unstable poles, but does not always, indicating that the user may stop the algorithm once a satisfactorily-performing stable system is found. Overall, the low-rank approximation algorithm gives better results compared to full interpolation, and can give comparable results to balanced reduction. ## IV Discussion In this paper, we adapted the AAA algorithm for use in the model order reduction of state space systems. The first algorithm, system-AAA, gives satisfactory results for single-output systems, but does not perform as strongly when compared to balanced reduction with multi-output systems. We also discussed a second algorithm, low-rank approximation, which removes the state dimension's dependence on the number of outputs. Low-rank approximation fixes some issues with full interpolation and yields improved results with MIMO systems. Numerical results show that this new algorithm performs similarly to balanced reduction with MIMO systems, and matches or exceeds its performance otherwise. For single-output systems, both system-AAA and low-rank approximation are good alternatives to balanced reduction when the user needs a minimum order system. Starting with a minimum order system and gradually increasing the order allows the user to choose the smallest system that meets certain \(\mathcal{H}_{\infty}\) or \(\mathcal{H}_{2}\) error requirements, which is an advantage over other model reduction techniques. In future work, we will investigate why both algorithms can produce unstable poles in the MIMO case. We would like to find ways to further improve the performance of low-rank system-AAA, namely by ensuring the algorithm yields a stable, well-matched result on MIMO systems.
2309.13570
Robust 6DoF Pose Estimation Against Depth Noise and a Comprehensive Evaluation on a Mobile Dataset
Robust 6DoF pose estimation with mobile devices is the foundation for applications in robotics, augmented reality, and digital twin localization. In this paper, we extensively investigate the robustness of existing RGBD-based 6DoF pose estimation methods against varying levels of depth sensor noise. We highlight that existing 6DoF pose estimation methods suffer significant performance discrepancies due to depth measurement inaccuracies. In response to the robustness issue, we present a simple and effective transformer-based 6DoF pose estimation approach called DTTDNet, featuring a novel geometric feature filtering module and a Chamfer distance loss for training. Moreover, we advance the field of robust 6DoF pose estimation and introduce a new dataset -- Digital Twin Tracking Dataset Mobile (DTTD-Mobile), tailored for digital twin object tracking with noisy depth data from the mobile RGBD sensor suite of the Apple iPhone 14 Pro. Extensive experiments demonstrate that DTTDNet significantly outperforms state-of-the-art methods at least 4.32, up to 60.74 points in ADD metrics on the DTTD-Mobile. More importantly, our approach exhibits superior robustness to varying levels of measurement noise, setting a new benchmark for the robustness to noise measurements. Code and dataset are made publicly available at: https://github.com/augcog/DTTD2
Zixun Huang, Keling Yao, Seth Z. Zhao, Chuanyu Pan, Chenfeng Xu, Kathy Zhuang, Tianjian Xu, Weiyu Feng, Allen Y. Yang
2023-09-24T07:06:45Z
http://arxiv.org/abs/2309.13570v4
# Robust Digital-Twin Localization via An RGBD-based Transformer Network and A Comprehensive Evaluation on a Mobile Dataset ###### Abstract The potential of digital-twin technology, involving the creation of precise digital replicas of physical objects, to reshape AR experiences in 3D object tracking and localization scenarios is significant. However, enabling robust 3D object tracking in dynamic mobile AR environments remains a formidable challenge. These scenarios often require a more robust pose estimator capable of handling the inherent sensor-level measurement noise. In this paper, recognizing the challenges of comprehensive solutions in existing literature, we propose a transformer-based 6DoF pose estimator designed to achieve state-of-the-art accuracy under real-world noisy data. To systematically validate the new solution's performance against the prior art, we also introduce a novel RGBD dataset called Digital Twin Tracking Dataset v2 (DTTD2), which is focused on digital-twin object tracking scenarios. Expanded from an existing DTTD v1 (DTTD1), the new dataset adds digital-twin data captured using a cutting-edge mobile RGBD sensor suite on Apple iPhone 14 Pro, expanding the applicability of our approach to iPhone sensor data. Through extensive experimentation and in-depth analysis, we illustrate the effectiveness of our methods under significant depth data errors, surpassing the performance of existing baselines. Code and dataset are made publicly available at: [https://github.com/augcog/DTTD2](https://github.com/augcog/DTTD2). ## I Introduction Digital twinning is the problem of virtually augmenting real objects with their digital models. In the context of augmented reality (AR), utilizing digital-twin representation presents challenges and complexities in real-time, accurate 3D object tracking. In contrast to the more mature technology of camera tracking in static settings, known as visual odometry or simultaneous localization and mapping [7, 24, 39, 42], identifying the relative position and orientation of one or more objects with respect to the user's ego position is a core function that would ensure the quality of user experience in digital-twin applications. In the most general setting, each object with respect to the ego position may undergo independent rigid-body motion, and the combined effect of overlaying multiple objects in the scene may also cause parts of the objects to be occluded from the measurement of the ego position. The main topic of our investigation in this paper is to study the digital-twin localization problem under the most general motion, occlusion, color, and lighting conditions. Recent advancements in the field of 3D object tracking have primarily been motivated by deep neural network (DNN) approaches that advocate end-to-end training to carry out crucial tasks such as image semantic segmentation, object classification, and object pose estimation. Notable studies [16, 17, 23, 35, 46] have demonstrated the effectiveness of these pose estimation algorithms using established real-world 3D object tracking datasets [20, 21, 31, 32, 34, 49]. However, it should be noted that these datasets primarily focus on robotic grasping tasks, and applying these solutions to mobile AR applications introduces a fresh set of challenges. Our previous work, which introduced the _Digital Twin Tracking Dataset_ v1 (DTTD1) [10], first studied this gap in the context of 3D object localization for mobile AR applications. 
DTTD1 aims to replicate real-world digital-twin scenarios by expanding the capturing distance, incorporating diverse lighting conditions, and introducing varying levels of object occlusions. It is important to mention, however, that this dataset was collected using the Microsoft Azure Kinect, which may not be the most suitable camera platform for mobile AR applications. Alternatively, Apple has emerged as a strong proponent of utilizing RGB+Depth (RGBD) spatial sensors for mobile AR applications with the design of their iPhone Pro camera suite, such as on the latest iPhone 14 Pro model. This particular smartphone has been given a back-facing LiDAR depth sensor, a critical addition for mobile and wearable AR applications. LiDAR (Light Detection and Ranging) technology has revolutionized the field of 3D perception and spatial understanding, enabling machines to perceive their surroundings with exceptional accuracy and detail [2, 22, 28]. It has found a particularly significant application in the realm of 3D object tracking [11, 48, 50] as well.

Fig. 1: _Left_: Shadow plot of the relation between the **depth noise** (depth-ADD) and the **max inference error** (ADD) of considered state-of-the-art methods and proposed DTTDNet. _Right_: Visualization of pose estimation results of those methods and proposed DTTDNet.

Six degrees-of-freedom (6DoF) pose estimation involves determining the precise position and orientation of an object in 3D space relative to a reference coordinate system. This task is of paramount importance in various fields, including robotics [46], augmented reality [43], and autonomous driving [11, 48, 50]. Accurate and robust 6DoF pose estimation is crucial for enabling machines to interact seamlessly and safely with the physical world. However, one distinguishing drawback of the iPhone LiDAR depth is the low resolution of the depth map provided by the iPhone ARKit [39], a \(256\times 192\) resolution compared to a \(1280\times 720\) depth map provided by the Microsoft Azure Kinect. This low resolution is exacerbated by large errors in the retrieved depth map. The large amounts of error in the iPhone data also pose challenges for researchers developing pose estimators that rely heavily on the observed depth map to correctly predict object poses. To investigate the digital-twin localization problem under the most popular mobile depth sensor, namely, the Apple iPhone 14 Pro LiDAR, we propose an RGBD-based transformer model for 6DoF object pose estimation, which is designed to effectively handle inaccurate depth measurements and noise. As shown in Figure 1, our method shows robustness against noisy depth input, while other baselines fail under such conditions. Meanwhile, we introduce DTTD v2 (DTTD2), a novel RGBD dataset captured by iPhone 14 Pro, to bridge the gap of digital-twin pose estimation for mobile AR applications, allowing research into extending algorithms to iPhone data and analyzing the unique nature of iPhone depth sensors. Our contributions are summarized as follows: * We propose a new transformer-based 6DoF pose estimator with depth-robust designs on modality fusion and training strategies, called DTTDNet. The new solution outperforms other state-of-the-art methods by a large margin in noisy depth conditions. * We introduce DTTD2 as a novel digital-twin pose estimation dataset for mobile AR applications. 
We provide in-depth LiDAR depth analysis and evaluation metrics to illustrate the unique properties and complexities of mobile LiDAR data when used in mobile AR environments. * We conducted extensive experiments and ablation studies to demonstrate the efficacy of DTTDNet and shed light on how the depth-robustifying module works. ## II Related Work ### _6DoF Pose Estimation Algorithms_ The majority of data-driven approaches for object pose estimation revolve around utilizing either RGB images [26, 38, 49, 51] or RGBD images [16, 17, 23, 35, 46] as their input source. **RGB-only approach.** Studies following the RGB-only approach often rely on incorporating additional prior information and inductive biases during the inference process. These requirements impose additional constraints on the application of 3D object tracking on mobile devices. Their inference process can involve utilizing more viewpoints for similarity matching [27, 33] or geometry reconstruction [43], employing rendering techniques [3, 27, 36] based on precise 3D models, or leveraging an additional database for viewpoint encoding retrieval [3]. During the training phase, these approaches typically draw upon more extensive datasets, such as synthetic datasets, to facilitate effective generalization within open-set scenarios. However, when confronted with a limited set of data samples, their performance does not surpass that of closed-set algorithms in cases where a surplus of prior information, such as depth maps, is available. **RGBD approach.** On the other hand, methods [16, 17, 23, 35, 46] that relied on depth maps advocated for the modality fusion of depth and RGB data to enhance their inference capabilities. To effectively fuse multi-modalities, Wang et al. [46] introduced a network architecture capable of extracting and integrating dense feature embedding from both RGB and depth sources. Due to its simplicity, this method achieved high efficiency in predicting object poses. In more recent works [16, 17, 18], performance improvements were achieved through more sophisticated network architectures. For instance, He et al. [16] proposed an enhanced bidirectional fusion network for key-point matching, resulting in high accuracy on benchmarks such as YCB-Video [49] and LINEMOD [20]. However, these methods exhibited reduced efficiency due to the complex hybrid network structures and processing stages. Addressing symmetric objects, Mo et al. [35] proposed a symmetry-invariant pose distance metric to mitigate issues related to local minima. On the other hand, Jiang et al. [23] proposed an L1-regularization loss named abc loss, which enhanced pose estimation accuracy for non-symmetric objects. ### _3D Object Tracking Datasets_ Existing object pose estimation algorithms are predominantly tested on a limited set of real-world 3D object tracking datasets [49, 5, 6, 10, 20, 31, 34, 4, 31], which often employ depth-from-stereo sensors or time-of-flight (ToF) sensors for data collection. Datasets like YCB-Video [49], LINEMOD [20], StereoOBJ-1M [31], and TOD [32] utilize depth-from-stereo sensors, while TLess [21] and our prior work DTTD1 [10] deploy ToF sensors, specifically the Microsoft Azure Kinect, to capture meter-scale RGBD data. However, the use of cameras with depth-from-stereo sensors may not be an optimal platform for deploying AR software, because stereo sensors may degrade rapidly at longer distances [13] and may encounter issues with holes in the depth map when stereo matching fails. 
However, it is essential to note that our choice of the iPhone 14 Pro, while different from the Azure Kinect, presents its own unique advantages and challenges. In our pursuit of addressing the limitations of existing datasets and ensuring a more realistic dataset for AR applications in household scenarios, particularly for mobile devices, we opt to collect RGBD data using the iPhone 14 Pro. By leveraging the iPhone 14 Pro's LiDAR ToF sensor, our dataset can provide more accurate and reliable depth information while catering to a broader range of real-world occlusion and lighting conditions, thereby enhancing the robustness and practicality of data-driven object pose estimation algorithms. ### _iPhone-based Datasets for 3D Applications_ There are several datasets that utilize the iPhone as their data collection device for 3D applications, such as ARKitScenes [1], MobileBrick [29], ARKitTrack [52], and RGBD Dataset [15]. These datasets were constructed to target applications ranging from 3D indoor scene reconstruction, 3D ground-truth annotation, and depth-map pairing from different sensors, to RGBD tracking in both static and dynamic scenes. However, most of these datasets did not specifically target the task of 6DoF object pose estimation. Our dataset provides a distinct focus on this task, offering per-pixel segmentation and pose labels. This enables researchers to delve into the 3D localization tasks of objects with a dataset specifically designed for this purpose. The most relevant work is from OnePose [43], which is an RGBD 3D dataset collected by iPhone. However, their dataset did not provide 3D models for closed-set settings, and they utilized automatic localization provided by ARKit for pose annotation, which involved non-trivial error for high-accuracy 6DoF pose estimation. On the other hand, we achieve higher localization accuracy with the OptiTrack professional motion capture system to track the iPhone camera's real-time positions as it moves in 3D. ## III Methods In this section, we will elaborate on the specific details of our methods. The objective is to estimate the 3D location and pose of a known object in the camera coordinates from the RGBD images. This position can be represented using a homogeneous transformation matrix \(p\in SE(3)\), which consists of a rotation matrix \(R\in SO(3)\) and a translation vector \(t\in\mathbb{R}^{3}\), \(p=[R|t]\). Section III-A describes our transformer-based model architecture. Section III-B introduces two depth robustifying modules on depth feature extraction, dedicated to geometric feature reconstruction and filtering. Section III-C illustrates our modality fusion design for the model to disregard significantly noisy depth features. Finally, Section III-D describes our final learning objective. ### _Architecture Overview_ Figure 2 illustrates the overall architecture of the proposed DTTDNet. The DTTDNet pipeline takes segmented depth maps and cropped RGB images as input. It then obtains feature embedding for both RGB and depth images through separate CNN and point-cloud encoders on cropped RGB images and the reconstructed point cloud corresponding to the cropped depth images. For RGB feature extraction, the image embedding network comprises a ResNet-18 encoder, which is then followed by 4 up-sampling layers acting as the decoder. It translates an image of size \(H\times W\times 3\) into an \(H\times W\times d_{rgb}\) embedding space.
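As a rough sketch of the RGB branch just described (our own reading in PyTorch, not the authors' released code; in particular, tapping the ResNet-18 trunk at 1/16 resolution so that four \(\times 2\) up-sampling stages return to the full \(H\times W\) resolution, and the intermediate channel widths, are our assumptions):

```python
import torch.nn as nn
import torchvision

class RGBBranch(nn.Module):
    """ResNet-18 encoder followed by 4 up-sampling stages, mapping
    (B, 3, H, W) -> (B, d_rgb, H, W). H and W are assumed divisible by 16."""
    def __init__(self, d_rgb: int = 32):
        super().__init__()
        r = torchvision.models.resnet18(weights=None)
        self.encoder = nn.Sequential(
            r.conv1, r.bn1, r.relu, r.maxpool,  # -> H/4, 64 channels
            r.layer1, r.layer2, r.layer3,       # -> H/16, 256 channels
        )
        def up(cin, cout):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
        self.decoder = nn.Sequential(up(256, 128), up(128, 64), up(64, 64), up(64, d_rgb))

    def forward(self, x):
        return self.decoder(self.encoder(x))
```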
Fig. 3: **Detailed model architecture for point-cloud auto-encoder in DTTDNet.** The 128-D point-wise feature undergoes max pooling to obtain a global feature representation. Subsequently, these two features are aggregated and further encoded to yield a 256-D representative embedding.

Fig. 4: **Diagram of attention mechanism in the fusion process in DTTDNet.** Modality fusion module utilizes the embedding from unimodal encoders and feeds them into a transformer encoder in parallel for cross-modal fusion. The latter point-wise fusion module relies on similarity scores among points.

For depth feature extraction, we take segmented depth pixels and transform them into 3D point clouds with the camera intrinsics. The 3D point clouds are initially processed using an auto-encoder (Figure 3) inspired by PointNet [40]. The PointNet-style encoding step aims to capture geometric representations in a latent space in \(\mathbb{R}^{d_{1}}\). In this context, the encoder component produces two sets of features: early-stage point-wise features in \(\mathbb{R}^{N\times d_{2}}\) and global geometric features in \(\mathbb{R}^{d_{3}}\). Subsequently, we add a decoder which is guided by a reference point set \(P\) to generate the predicted point cloud \(\hat{P}\). Features extracted from the encoder are subsequently combined with the learned representations to create a new feature sequence with a dimension of \(\mathbb{R}^{N\times d_{geo}}\), where \(d_{geo}=d_{1}+d_{2}+d_{3}\). This results in a sequence of geometric tokens with a length equal to the number of points \(N\). Extracted RGB and depth features are then fed into a two-stage attention-based fusion block, which consists of modality fusion and point-wise fusion. Finally, the pose predictor produces point-wise predictions with both rotation and translation. The predictions are then voted based on unsupervised confidence scoring to get the final 6DoF pose estimate. ### _Depth Data Robustifying_ In this section, we will introduce two modules, Chamfer Distance Loss (CDL) and Geometric Feature Filtering (GFF), that enable the point-cloud encoder in DTTDNet to better handle noisy and low-resolution LiDAR data in a robust way. **Chamfer Distance Loss (CDL).** Past methods either treated the depth information directly as image channels [35] or directly extracted features from a point cloud [46]. These methods underestimated the corruption of the depth data caused by noise and error during the data collection process. To address this, we first introduce a downstream task for point-cloud reconstruction and utilize the Chamfer distance as a loss function to assist our feature embedding in filtering out noise. The Chamfer distance loss (CDL) is widely used for denoising in 3D point clouds [9, 19], and it is defined as follows between two point clouds \(P\in\mathbb{R}^{N\times 3}\) and \(\hat{P}\in\mathbb{R}^{N\times 3}\): \[L_{CD}(\hat{P},P)=\frac{1}{N}\Big{(}\sum_{x_{i}\in P}\min_{\hat{x}_{j}\in\hat{P}}\|x_{i}-\hat{x}_{j}\|_{2}^{2}+\sum_{\hat{x}_{i}\in\hat{P}}\min_{x_{j}\in P}\|\hat{x}_{i}-x_{j}\|_{2}^{2}\Big{)}, \tag{1}\] where \(\hat{P}\) denotes the decoded point set from the embedding, and \(P\) denotes the reference point set employed to guide the decoder's learning. 
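For illustration, a minimal NumPy sketch of the Chamfer distance in Eq. (1), assuming both clouds have \(N\) points as in the text (the function name is ours; a practical training setup would use a batched GPU implementation):

```python
import numpy as np

def chamfer_distance(P_hat: np.ndarray, P: np.ndarray) -> float:
    """Symmetric Chamfer distance between (N, 3) point clouds, as in Eq. (1)."""
    N = P.shape[0]
    # Pairwise squared distances, shape (N, N): d2[i, j] = ||P[i] - P_hat[j]||^2.
    d2 = np.sum((P[:, None, :] - P_hat[None, :, :]) ** 2, axis=-1)
    return float(d2.min(axis=1).sum() / N + d2.min(axis=0).sum() / N)
```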
We present two distinct alternatives for supervising the decoder: the first option uses the point cloud extracted from the depth map, whereas the second uses the point cloud sampled from the object model. While the former is tailored to reduce the noise of the depth map, the latter focuses on representing the object's geometry, ensuring a robust depth data representation.

**Geometric Feature Filtering (GFF).** Due to the non-Gaussian noise distribution in iPhone LiDAR data (Figure 7), in contrast to what can be assumed for most depth camera data, normal estimators might either be perturbed by such noisy features or infer incorrect camera-object rotations. To deal with this sensor-level error, we advocate for the integration of a Geometric Feature Filtering (GFF) module prior to the modality fusion module. Drawing inspiration from the Filter-Enhanced MLPs used in sequential recommendation [53], our approach incorporates the Fast Fourier Transform (FFT) into the geometric feature encoding. Specifically, the GFF module includes an FFT, a subsequent single layer of MLP, and finally an inverse FFT. By leveraging the FFT, we are able to transpose the input sequence of geometric signals to the frequency domain, which selects significant features from noisy input signals. After that, we obtain a more refined geometric embedding that is resilient to the non-Gaussian iPhone LiDAR noise.
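A minimal sketch of such a filter layer in PyTorch follows; applying the single MLP layer separately to the real and imaginary parts of the spectrum is our own simplification of the FFT, MLP, inverse-FFT recipe described above.

```python
import torch
import torch.nn as nn

class GeometricFeatureFilter(nn.Module):
    """GFF sketch: FFT along the point axis, one MLP layer in the frequency domain, inverse FFT."""
    def __init__(self, d_geo):
        super().__init__()
        self.mlp = nn.Linear(d_geo, d_geo)   # the single MLP layer

    def forward(self, tokens):               # tokens: (B, N, d_geo) geometric token sequence
        n = tokens.shape[1]
        spec = torch.fft.rfft(tokens, dim=1)                            # to the frequency domain
        spec = torch.complex(self.mlp(spec.real), self.mlp(spec.imag))  # filter the spectrum
        return torch.fft.irfft(spec, n=n, dim=1)                        # back to the point domain
```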
### _Attention-based RGBD Fusion_

Previous papers have emphasized the importance of modality fusion [16, 46] and the benefits of gathering nearest points from the point cloud [16, 35] in RGBD-based pose estimation tasks. While the feature extractor widens each point's receptive field, we aim for features to interact beyond their corresponding points [46] or neighboring points [16]. In predicting the 6DoF pose of a cuboid based on multiple feature descriptors, for example, our focus is on attending to the various corner points, rather than solely those in close proximity to each other. To this end, inspired by recent transformer-based models used for modality fusion [14, 25, 30, 37, 41, 47], we leverage the self-attention mechanism [45] to amplify and integrate important features while disregarding the significant LiDAR noise. Specifically, our fusion part is divided into two stages: modality fusion and point-wise fusion (Figure 4). Both of our fusion modules consist of a standard transformer encoder with linear projection, multi-head attention, and layer norm. The former module takes the embeddings from the single-modal encoders and feeds them into a transformer encoder in parallel for cross-modal fusion. The latter fusion module relies on similarity scores among points: it merges all feature embeddings in a point-wise manner before feeding them into a transformer encoder.

**Modality Fusion.** The objective of this module is to combine the geometric embedding \(g\) and the RGB embedding \(c\) produced by the single-modal encoders in a cross-modal fashion. Drawing inspiration from ViLP [25], both types of embedding are linearly transformed into a token sequence (\(\in\mathbb{R}^{N\times d_{emb}}\)). Before entering the modality fusion module \(E_{1}\), these features are combined along the sequence-length direction, i.e., all feature embeddings are concatenated into a single combined sequence, where the dimension remains \(d_{emb}\) and the sequence length becomes twice the original length (Figure 4):

\[f_{1}=E_{1}\left[c\oplus g\right]\in\mathbb{R}^{d_{f_{1}}\times 2N},\]

where the operation symbol "\(\oplus\)" denotes concatenation along the row direction. The output is then reshaped into the sequence \(f_{1}^{\prime}\) with length \(N\) and dimension \(2d_{f_{1}}\) in order to suit the point-wise transformer encoder in the next fusion stage. This step enables the model's attention mechanism to effectively perform cross-modal fusion.

**Point-Wise Interaction.** The goal of this stage is to enhance the integration of information among the various points. The primary advantage of our method over previous work [16] is that our model can calculate similarity scores not only with the nearest point but with all other points, allowing for more comprehensive interactions. In order to enable the point-wise fusion to effectively capture the similarities between different points, we merge the original RGB token sequence \(c\) and the geometric token sequence \(g\) together with the output embedding sequence \(f_{1}^{\prime}\) of the modality fusion module along the feature-dimension direction. The combined sequence \(\left[c^{T}\oplus g^{T}\oplus(f_{1}^{\prime})^{T}\right]^{T}\in\mathbb{R}^{(2d_{emb}+2d_{f_{1}})\times N}\) is then fed into the point-wise transformer encoder \(E_{2}\) to acquire the final fusion:

\[f_{2}=E_{2}\left[c^{T}\oplus g^{T}\oplus(f_{1}^{\prime})^{T}\right]^{T}\in\mathbb{R}^{d_{f_{2}}\times N}.\]

**Attention Mechanism.** For both the modality fusion and point-wise fusion stages, scaled dot-product attention is utilized in the self-attention layers:

\[s_{i,j}=\mathbf{q}_{i}^{T}\mathbf{k}_{j}/\sqrt{d_{\text{head}}};\]
\[a_{i,j}=\frac{\exp(s_{i,j})}{\sum_{k}\exp(s_{i,k})};\]
\[\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})_{i}=\sum_{j}a_{i,j}\mathbf{v}_{j},\]

where the query, key, value, and similarity score are denoted as \(q\), \(k\), \(v\), and \(s\). The distinction between the two fusion stages lies in the token preparation prior to the linear projection layer, which results in different information being carried by the query, key, and value. The key idea in the first fusion stage is to perform local per-point fusion in a cross-modality manner so that we can make predictions based on each fused feature. Each key or query carries only one type of modal information before fusion, allowing the different modalities to interact equally with each other through dot-product operations; the influence is stronger when the RGB and geometric representations produce higher similarity. In the second stage, where we integrate the two original single-modal features with the first-stage feature at each point, we calculate similarities solely among different points. The key idea is to force the attention layers to further capture potential relationships among multiple local features. A skip connection is employed in a concatenating manner between the two fusion outputs so that we can make predictions based on per-point features generated in both the first and second stages.
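A compact sketch of the two fusion stages is shown below, using PyTorch's stock transformer encoder. The layer and head counts follow the configuration reported in the implementation details (8-layer modality fusion, 4-layer point-wise fusion); setting \(d_{f_{1}}=d_{emb}\) and the exact reshaping of \(f_{1}\) into \(f_{1}^{\prime}\) are our own simplifying assumptions.

```python
import torch
import torch.nn as nn

class TwoStageFusion(nn.Module):
    """Sketch: modality fusion E1 over the concatenated sequence [c ; g], then point-wise fusion E2."""
    def __init__(self, d_emb=256, n_heads1=4, n_heads2=8):
        super().__init__()
        enc1 = nn.TransformerEncoderLayer(d_model=d_emb, nhead=n_heads1, batch_first=True)
        self.E1 = nn.TransformerEncoder(enc1, num_layers=8)       # modality fusion
        enc2 = nn.TransformerEncoderLayer(d_model=4 * d_emb, nhead=n_heads2, batch_first=True)
        self.E2 = nn.TransformerEncoder(enc2, num_layers=4)       # point-wise fusion

    def forward(self, c, g):                  # c, g: (B, N, d_emb) RGB / geometric tokens
        n = c.shape[1]
        f1 = self.E1(torch.cat([c, g], dim=1))            # (B, 2N, d_emb): sequence-length concat
        f1p = torch.cat([f1[:, :n], f1[:, n:]], dim=2)    # f1': fold back to (B, N, 2*d_emb)
        f2 = self.E2(torch.cat([c, g, f1p], dim=2))       # (B, N, 4*d_emb): feature-dim concat
        return torch.cat([f1p, f2], dim=2)                # skip connection by concatenation
```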
### _Learning Objective_

Based on the overall network structure, our learning objective is to perform 6DoF pose regression, which measures the disparity between points sampled on the object's model in its ground truth pose and corresponding points on the same model transformed by the predicted pose. Specifically, the pose estimation loss is defined as:

\[(L_{ADD})_{i,p}=\frac{1}{m}\sum_{x\in M}\|(Rx+t)-(\hat{R}_{i}x+\hat{t}_{i})\| \tag{2}\]

where \(M\in\mathbb{R}^{m\times 3}\) represents the randomly sampled point set from the object's 3D model, \(p=[R|t]\) denotes the ground truth pose, and \(\hat{p}_{i}=[\hat{R}_{i}|\hat{t}_{i}]\) denotes the predicted pose generated from the fused feature of the \(i^{th}\) point. Our objective is to minimize the average of the losses over the fusion points, which can be expressed as \(L_{ADD}=\frac{1}{N}\sum_{i}^{N}(L_{ADD})_{i,p}\), where \(N\) is the number of randomly sampled points (the token sequence length in the point-wise fusion stage). Meanwhile, we introduce a confidence regularization score \(c_{i}\) along with each prediction \(\hat{p}_{i}=[\hat{R}_{i}|\hat{t}_{i}]\), which denotes confidence among the predictions for each fusion point:

\[L_{ADD}=\frac{1}{N}\sum_{i}^{N}\left(c_{i}(L_{ADD})_{i,p}-w\log(c_{i})\right) \tag{3}\]

Predictions with low confidence will lead to a low ADD loss, but this will be balanced by a high penalty from the second term, governed by the hyper-parameter \(w\). Finally, the CDL, as outlined in Section III-B, is trained jointly throughout the training process, leading us to our ultimate learning objective:

\[L=L_{ADD}+\lambda L_{CD} \tag{4}\]

where \(\lambda\) denotes the weight of the CDL loss.
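To make Equations (2)-(3) concrete, here is a minimal PyTorch sketch of the confidence-regularized ADD loss; the tensor shapes and function name are our own assumptions.

```python
import torch

def confidence_weighted_add_loss(pred_R, pred_t, conf, gt_R, gt_t, model_pts, w=0.015):
    """Per-point ADD losses (Eq. 2) balanced by predicted confidences (Eq. 3).
    pred_R: (N, 3, 3), pred_t: (N, 3), conf: (N,) in (0, 1],
    gt_R: (3, 3), gt_t: (3,), model_pts: (m, 3) sampled from the object's 3D model."""
    gt = model_pts @ gt_R.T + gt_t                                   # (m, 3) ground-truth points
    pred = torch.einsum('nij,mj->nmi', pred_R, model_pts) + pred_t[:, None, :]
    add_i = (pred - gt.unsqueeze(0)).norm(dim=-1).mean(dim=1)        # (N,) per-point ADD (Eq. 2)
    return (conf * add_i - w * torch.log(conf)).mean()               # Eq. (3)

# The full objective of Eq. (4) adds the Chamfer term: L = L_ADD + lambda * L_CD.
```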
## IV Dataset Description

DTTD2 contains 18 rigid objects along with their textured 3D models. The data are generated from 100 scenes, each of which features one or more of the objects in various orientations and occlusions. The dataset offers ground-truth labels for 3D object poses and per-pixel semantic segmentation. Additionally, it provides detailed camera specifications, pinhole camera projection matrices, and distortion coefficients. Detailed features and statistics are presented in Table I. The fact that the DTTD2 dataset includes multiple sets of geometrically similar objects, each having distinct color textures, poses challenges to existing digital-twin localization solutions. In order to ensure compatibility with other existing datasets, some of the collected objects partially overlap with the YCB-Video [49] and DTTD1 [10] datasets.

\begin{table} \begin{tabular}{l|c|c|c|c|c|c|c|c|c} \hline \hline Dataset & Modality & iPhone Camera & Texture & Occlusion & Light variation & \# of frames & \# of scenes & \# of objects & \# of annotations \\ \hline StereOBJ-1M [31] & RGB & \(\times\) & ✓ & ✓ & ✓ & 393,612 & 182 & 18 & 1,508,327 \\ LINEMOD [20] & RGBD & \(\times\) & ✓ & ✓ & \(\times\) & 18,000 & 15 & 15 & 15,784 \\ YCB-Video [49] & RGBD & \(\times\) & ✓ & ✓ & \(\times\) & 133,936 & 92 & 21 & 613,917 \\ DTTD1 [10] & RGBD & \(\times\) & ✓ & ✓ & ✓ & 55,691 & 103 & 10 & 136,226 \\ TOD [32] & RGBD & \(\times\) & ✓ & \(\times\) & \(\times\) & 64,000 & 10 & 20 & 64,000 \\ LabelFusion [34] & RGBD & \(\times\) & ✓ & ✓ & ✓ & 352,000 & 138 & 12 & 1,000,000 \\ T-LESS [21] & RGBD & \(\times\) & \(\times\) & ✓ & \(\times\) & 47,762 & - & 30 & 47,762 \\ **DTTD2 (Ours)** & RGBD & ✓ & ✓ & ✓ & ✓ & 47,668 & 100 & 18 & 114,143 \\ \hline \hline \end{tabular} \end{table} TABLE I: Features and statistics of different datasets.

### _Data Acquisition_

Apple's ARKit framework enables us to capture RGB images from the iPhone camera and scene depth information from the LiDAR scanner synchronously. We leverage the ARKit APIs to retrieve \(1920\times 1440\) RGB images and \(256\times 192\) depth maps at a capture rate of 30 frames per second. Despite the resolution difference, the captured RGB images and depth maps match in aspect ratio and describe the same scene. Alongside each captured frame, DTTD2 stores the camera intrinsic matrix and lens distortion coefficients, and also stores a 2D confidence map describing how confident the iPhone depth sensor is about the captured depth at the pixel level. In practice, we disabled the auto-focus functionality of the iPhone camera during data collection to avoid drastic changes in the camera's intrinsics between frames, and we resized the depth map to the RGB resolution using nearest-neighbor interpolation to avoid depth map artifacts. To track the iPhone's 6DoF movement, we did not use the iPhone's own world-tracking SDK. Instead, we followed the same procedure as in [10] and used the professional OptiTrack motion capture system for higher accuracy.

For label generation, we also use the open-sourced data annotation pipeline provided by [10] to annotate and refine ground-truth poses for objects in the scenes along with per-pixel semantic segmentation. Some visualizations of data samples are illustrated in Figure 5. Notice that the scenes cover various real-world occlusion and lighting conditions with high-quality annotations. Following previous dataset protocols [10, 49], we also provide synthetic data for scene augmentations used for training.

Fig. 5: Sample visualizations of our dataset. _First row:_ Annotations for 3D bounding boxes. _Second row:_ Corresponding semantic segmentation labels. _Third row:_ Zoomed-in LiDAR depth visualizations.

The dataset also provides 3D models of the 18 objects, as illustrated in Figure 6. These models are reconstructed using the iOS Polycam app via access to the iPhone camera and LiDAR sensors. To enhance the models, Blender is employed to repair surface holes and correct inaccurately scanned texture pixels.

Footnote 3: [https://www.blender.org/](https://www.blender.org/)

Fig. 6: _Left:_ 3D models of the 18 objects in DTTD2. _Right:_ Quantity of scenes in which objects appear in DTTD2.

### _Benchmark and Evaluation_

**Train/Test Split.** DTTD2 offers a suggested train/test partition as follows. The training set contains 8622 keyframes extracted from 88 video sequences, while the testing set contains 1239 keyframes from 12 video sequences. To ensure a representative distribution of scenes with occluded objects and varying lighting conditions, we randomly allocate them across both the training and testing sets. Furthermore, for training-time scene augmentation, we provide 20,000 synthetic images produced by randomly placing objects in scenes using the data synthesizer provided in [10].

**Evaluation Metrics.** We evaluate baseline methods with the average distance metrics ADD and ADD-S according to previous protocols [10, 49]. Suppose \(R\) and \(t\) are the ground truth rotation and translation and \(\tilde{R}\) and \(\tilde{t}\) are the predicted counterparts. The ADD metric computes the mean of the pairwise distances between the 3D model points using the ground truth pose \((R,t)\) and the predicted pose \((\tilde{R},\tilde{t})\):

\[\mathrm{ADD}=\frac{1}{m}\sum_{x\in M}\|(Rx+t)-(\tilde{R}x+\tilde{t})\|, \tag{5}\]

where \(M\) denotes the point set sampled from the object's 3D model and \(x\) denotes a point sampled from \(M\).
The ADD-S metric is designed for symmetric objects, for which the matching between points could be ambiguous:

\[\mathrm{ADD\text{-}S}=\frac{1}{m}\sum_{x_{1}\in M}\min_{x_{2}\in M}\|(Rx_{1}+t)-(\tilde{R}x_{2}+\tilde{t})\|. \tag{6}\]

Following previous protocols [27, 31, 33, 46, 49], a 3D pose estimate is deemed accurate if the average distance error falls below a predefined threshold. Two widely used metrics are employed in our work, namely ADD/ADD-S AUC and ADD/ADD-S(1cm). For the commonly used ADD/ADD-S AUC, we calculate the Area Under the Curve (AUC) of the success-threshold curve over different distance thresholds, where the threshold values are normalized between 0 and 1. On the other hand, ADD/ADD-S(1cm) is defined as the percentage of pose errors smaller than the 1cm threshold.

### _iPhone 14 Pro LiDAR Analysis_

Compared to dedicated depth cameras such as the Microsoft Azure Kinect or Intel RealSense, the iPhone 14 Pro LiDAR exhibits more noise and lower resolution (\(256\times 192\) depth maps), which leads to high magnitudes of distortion on objects' surfaces. Additionally, it introduces long-tail noise on the projection edges of objects when interpolation operations are performed between RGB and depth features. Figure 7 demonstrates one such example of the iPhone 14 Pro's noisy depth data.

To further quantitatively assess the depth noise of each object from the iPhone's LiDAR, we analyze the numerical difference between the _LiDAR-measured depth map_, which is acquired directly from the iPhone LiDAR, and the _reference depth map_, which is derived through the ground truth pose annotations. Specifically, to obtain the reference depth map, we leverage the ground truth annotated object poses to render the depth projections of each object. We then apply the segmentation mask associated with each object to filter out depth readings that might be compromised due to occlusion. To measure the difference between the LiDAR-measured and reference depth maps, we introduce the _depth-ADD_ metric, which calculates the average pixel-wise L1 distance between the two maps in each frame. The depth-ADD value of each object at frame \(n\) is calculated as follows:

\[\mathrm{depth\text{-}ADD}_{n}=\frac{1}{|D|}\sum_{i\in D}\left|\mathrm{depth}_{\mathrm{LiDAR},i}-\mathrm{depth}_{\mathrm{ref},i}\right|\,, \tag{7}\]

where \(\mathrm{depth}_{\mathrm{LiDAR},i}\) and \(\mathrm{depth}_{\mathrm{ref},i}\) represent the depth values of the LiDAR-measured and reference depth maps at pixel index \(i\). The set \(D\) encompasses all indices \(i\) under an object's segmentation mask where both \(\mathrm{depth}_{\mathrm{LiDAR},i}\) and \(\mathrm{depth}_{\mathrm{ref},i}\) yield values greater than zero. The final depth-ADD value of each object is the average of this measurement across all \(N\) frames:

\[\mathrm{depth\text{-}ADD}=\frac{1}{N}\sum_{n=1}^{N}\mathrm{depth\text{-}ADD}_{n} \tag{8}\]

Figure 8 shows the depth-ADD evaluation of each sampled object. Greater depth-ADD values indicate increased distortions and the presence of long-tail noise in the depth data. Our analysis indicates that the mean depth-ADD across all objects is around 0.25m. It is worth noticing that the depth quality varies significantly and could potentially be affected by outliers. For example, three objects (_black_marker_, _blue_marker_, and _pink_marker_) exhibit greater errors in comparison with the other objects.
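For reference, a minimal NumPy sketch of the per-frame depth-ADD of Equation (7) might look as follows; the array conventions are our own assumptions.

```python
import numpy as np

def depth_add_frame(depth_lidar, depth_ref, seg_mask):
    """Per-frame depth-ADD (Eq. 7): mean absolute difference between the LiDAR-measured
    and reference depth maps over masked pixels where both maps report positive depth."""
    valid = seg_mask & (depth_lidar > 0) & (depth_ref > 0)   # the index set D
    return np.abs(depth_lidar[valid] - depth_ref[valid]).mean()
```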
Fig. 7: Visualization of an iPhone LiDAR depth scene that shows distortion and long-tail non-Gaussian noise (highlighted inside the red box). (a) Front view. (b) Left view. (c) Right view.

Fig. 8: **Box Plot of Depth Analysis Results of 18 Objects.** Evaluating depth quality for objects in our iPhone dataset, the red line signifies the average depth-ADD for each item. The rectangular box denotes the interquartile range (IQR, spanning the 25th to 75th percentiles). Whiskers highlight the typical range, extending to \(Q1-1.5\times IQR\) and \(Q3+1.5\times IQR\). Points beyond the whiskers are identified as outliers. Notably, the object _black_marker_ exhibits poor depth quality.

## V Experiments

### _Experimental Results_

In this section, we compare the performance of our method DTTDNet with three other 6DoF pose estimators, namely DenseFusion [46], MegaPose [27], and ES6D [35]. While all four methods leverage the benefits of multimodal data from both RGB and depth sources, they differ in the extent to which they emphasize the depth data processing module. Quantitative experimental results are shown in Table II. Qualitative examples are shown in Figure 9.

DenseFusion [46] treats both modalities equally and lacks a specific design for the depth module, whereas ES6D [35] relies heavily on depth data during training, using grouped primitives to prevent point-pair mismatch. However, due to potential interpolation errors in the depth data, this additional supervision can introduce erroneous signals to the estimator, resulting in inferior performance compared to DenseFusion [46]: DenseFusion [46] achieves 69.67 ADD AUC and 85.88 ADD-S AUC, whereas ES6D [35] only achieves 13.25 ADD AUC and 37.38 ADD-S AUC. MegaPose [27] employs a coarse-to-fine process for pose estimation. The initial "coarse" module leverages both RGB and depth data to identify the most probable pose hypothesis. Subsequently, a more precise pose inference is achieved through the "render-and-compare" technique. Disregarding the noise in the depth data can also impair the effectiveness of their coarse module, consequently leading to failure in their refinement process.
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{DenseFusion [46]} & \multicolumn{2}{c|}{MegaPose-RGBD [27]} & \multicolumn{2}{c|}{ES6D [35]} & \multicolumn{2}{c|}{DTTDNet (Ours)} \\ \hline Object & ADD AUC & ADD-S AUC & ADD AUC & ADD-S AUC & ADD AUC & ADD-S AUC & ADD AUC & ADD-S AUC \\ \hline mac\_cheese & 88.10 & 93.17 & 78.98 & 87.94 & 28.29 & 57.06 & **94.06** & **97.02** \\ tomato\_can & 69.10 & 93.42 & 68.85 & 84.48 & 19.07 & 56.17 & **74.23** & **94.01** \\ tuna\_can & 42.90 & 79.94 & 8.90 & 22.11 & 10.74 & 26.86 & **62.98** & **87.05** \\ cereal\_box & 75.20 & 88.12 & 59.89 & 71.53 & 10.09 & 53.92 & **86.55** & **92.74** \\ clam\_can & **90.49** & 96.32 & 74.11 & 90.45 & 17.75 & 35.92 & 88.15 & **96.92** \\ spam & **53.29** & 91.14 & 72.35 & 86.16 & 3.17 & 13.74 & 52.81 & 90.83 \\ cheez-it\_box & 82.73 & 92.10 & **89.18** & **94.83** & 7.81 & 37.14 & 87.03 & 93.91 \\ mustard & 78.41 & 91.31 & 76.08 & 85.38 & 21.89 & 52.56 & **84.06** & **92.15** \\ pop-tarts\_box & 82.94 & 92.58 & 44.36 & 58.97 & 3.44 & 35.26 & **84.55** & **92.65** \\ black\_marker & 32.22 & 38.72 & 17.38 & 34.15 & 2.12 & 3.72 & **44.08** & **53.50** \\ blue\_marker & **66.06** & **74.80** & 6.87 & 12.46 & 16.88 & 41.46 & 50.88 & 61.69 \\ pink\_marker & 56.46 & 67.86 & 47.84 & 58.59 & 1.59 & 7.01 & **64.18** & **73.00** \\ itoen\_green\_tea & 64.37 & **93.10** & 48.43 & 70.50 & 8.80 & 32.86 & **64.59** & 92.31 \\ apple & 68.97 & 91.13 & 32.85 & 76.43 & 31.65 & 58.46 & **82.45** & **94.80** \\ pear & **65.66** & **91.31** & 35.80 & 56.73 & 16.93 & 32.57 & 47.83 & 88.11 \\ pink\_pocky & 50.64 & 67.17 & 8.69 & 18.25 & 0.77 & 1.93 & **61.40** & **82.33** \\ red\_pocky & 88.14 & 93.76 & 76.49 & 84.56 & 25.32 & 51.16 & **90.00** & **95.24** \\ white\_pocky & 89.55 & 94.27 & 42.83 & 54.65 & 17.19 & 47.45 & **90.83** & **94.70** \\ \hline Average & 69.67 & 85.88 & 49.02 & 62.44 & 13.25 & 37.38 & **73.99** & **88.10** \\ \hline \end{tabular} \end{table} TABLE II: **Comparison with diverse 6DoF pose estimation baselines on the DTTD2 dataset.** We showcase the AUC results of ADD-S and ADD on all 18 objects; higher is better. Compared with the three baselines considered, our model significantly improves accuracy and robustness on most objects.

Fig. 9: **Qualitative evaluation of different methods.** To further validate our approach, we provide visual evidence of our model's effectiveness in challenging occlusion scenarios and varying lighting conditions, where other models' predictions fail but ours remain reliable.

Even with the assistance of a refiner, MegaPose-RGBD [27] only manages to attain an ADD AUC of 49.02 and an ADD-S AUC of 62.44. Its susceptibility to depth noise falls somewhere between that of DenseFusion [46] and ES6D [35]. In contrast, our approach harnesses the strengths of both the RGB and depth modalities while explicitly designing robust depth feature extraction and selection. In comparison with the above baselines, our method achieves 73.31 ADD AUC and 87.82 ADD-S AUC, surpassing the state of the art with improvements of 3.64 and 1.94 percentage points in ADD AUC and ADD-S AUC, respectively.

### _Implementation Details_

We extracted \(1000\) pixels from the decoded RGB representation, corresponding to the same number of points in the LiDAR point set. Both the extracted RGB and geometric features are linearly projected to 256-D before being fused together.
In the above experimental results, we utilized an 8-layer transformer encoder with 4 attention heads for the modality fusion stage and a 4-layer transformer encoder with 8 attention heads for the point-wise fusion stage. In addition, a filter-enhanced MLP layer and the CDL with the objects' CAD models were used for the results reported above.

### _Training Strategy_

For our DTTDNet, a learning-rate warm-up schedule is used to ensure that our transformer-based model can overcome local minima in the early stage and be trained more effectively. By empirical evaluation, in the first epoch the learning rate \(lr\) increases linearly from \(0\) to \(1e{-}5\). In the subsequent epochs, it is decreased using a cosine scheduler to the end learning rate \(min\_lr=1e{-}6\). Additionally, following the approach of DenseFusion [46], we also decay the learning rate by a certain ratio when the average error falls below a certain threshold during training. Detailed code and parameters will be publicly available in our code repository. Moreover, we set the importance factor \(\lambda\) of the CDL to \(0.3\) and the initial balancing weight \(w\) to \(0.015\) by empirical testing.

### _Robustness to LiDAR Depth Error_

To answer the question of whether our method exhibits robustness in the presence of significant LiDAR sensor noise when compared to other approaches, we further assess the depth-ADD metric, as discussed in Section IV-C, on DTTDNet versus the three baseline algorithms. Figure 10 illustrates the correlation between the model performance (ADD) of the four methods and the quality of the depth information (depth-ADD) across various scenes, frames, and 1239 pose prediction outcomes for the 18 objects. Our approach ensures stable pose prediction performance, even when the depth quality deteriorates, maintaining consistently low levels of ADD error overall. The other methods, however, suffer deteriorating predictions with increasing LiDAR noise, resulting in an increase in ADD error. This is particularly evident in the case of ES6D [35], where there is a linearly increasing relationship between the prediction error and the LiDAR measurement error.

## VI Ablation Studies

In this section, we further delve into a detailed analysis of our own model, highlighting the utility of our depth robustifying module in handling challenging scenarios with significant LiDAR noise. Specifically, we are concerned with the following questions:

1. Throughout the entire fusion process, what did the multi-head attention layers learn at different stages?
2. Will an increase in the layer number and parameter number of the transformer encoder affect the performance of our model?
3. What role does our depth robustifying module play?

### _Attention Map Visualization_

To visualize what our fusion module learns during the training process, we draw on previous studies [12, 44] and represent our attention map as \(a_{i,j}\), as described in Section III-C. Taking two objects (_itoen_green_tea_ and _black_marker_) as examples, Figure 11 displays the attention maps produced by different attention heads in the two fusion stages. We showcase the attention maps generated by the modality fusion and point-wise fusion at their respective final layers. The modality fusion part reveals distinct quadrant-like patterns, reflecting differences in how the two modalities fuse. The lower-left and upper-right quadrants offer insights into the degree of RGB and geometric feature fusion.
The point-wise fusion part exhibits a striped pattern, showing that it attends to the significance of specific tokens during training.

### _Layer Number Variation in Fusion Stages_

Table III and Figure 12 display the variations brought about by increasing the number of layers at the different fusion stages.

\begin{table} \begin{tabular}{c c|c c c c} \hline \hline \multicolumn{2}{c|}{**Layer Num of Fusion Stages**} & \multicolumn{4}{c}{**Metrics**} \\ \hline **Modality Fusion** & **Point-wise Fusion** & **ADD AUC** & **ADD-S AUC** & **ADD(1cm)** & **ADD-S(1cm)** \\ \hline 2 & 4 & 70.32 & 85.42 & 22.91 & 67.78 \\ 4 & 4 & 71.37 & 86.69 & 15.74 & 64.44 \\ 8 & 4 & 72.76 & 86.37 & 19.57 & 68.37 \\ \hline 2 & 1 & 70.12 & 85.52 & 22.91 & 67.75 \\ 2 & 2 & 71.26 & 88.23 & 20.01 & 66.03 \\ \hline \hline \end{tabular} \end{table} TABLE III: Effect of varying the number of layers in the modality fusion and point-wise fusion stages (configurations follow the M\(x\)P\(y\) naming of Figure 12).

Overall, increasing the number of layers improves the model's performance in terms of ADD AUC. As we proportionally increase the total number of layers in the modality fusion or point-wise fusion, we witness a sustained improvement in model performance. Furthermore, across all the combinations we present, our method outperforms the current state-of-the-art approaches [27, 35, 46] in terms of ADD AUC.

### _Depth Feature Filtering_

The primary goal of our depth feature filtering module is to eliminate problematic tokens that could lead to misleading inferences or excessive reliance on certain elements, such as long-tail-shaped noise, in order to reduce the impact of excessive token focus. Table IV illustrates the improvement in the ADD AUC metrics achieved by our method when integrating the geometric feature filtering (GFF) module. To provide detailed insight into the impact of the GFF module, we conducted principal component analysis (PCA) on both the initial geometric tokens encoded by the PointNet and the filtered version after applying the GFF module, i.e., we projected the embedding to a 1-D array with its dominant factor (the singular vector of the geometric embedding that corresponds to the largest singular value). Following that, we visualize the geometric embedding both before and after the application of the GFF module by generating histograms of the dimensionally reduced geometric tokens, as shown in Figure 13. The left subplot displays the probability distribution of 1000 geometric tokens extracted from the LiDAR point cloud. The majority of tokens are concentrated in one location, with a few outliers exhibiting distinct characteristics compared to the majority. On the contrary, the distribution of these tokens, as shown in the right subplot, becomes more balanced and uniform when filtered through the GFF module. The enhanced ADD AUC performance can be attributed to the balanced distribution achieved through the use of the depth robustifying module.
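The 1-D projection used for Figure 13 can be sketched as follows; this is a standard PCA projection, and the mean-centering is our own assumption.

```python
import torch

def dominant_projection(tokens):
    """Project geometric tokens (N, d_geo) onto the principal direction associated with
    the largest singular value, yielding the 1-D scores that are histogrammed in Fig. 13."""
    centered = tokens - tokens.mean(dim=0, keepdim=True)
    U, S, Vh = torch.linalg.svd(centered, full_matrices=False)
    return centered @ Vh[0]          # (N,) scores; equals U[:, 0] * S[0]
```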
### _Geometry Augmented CDL_

As previously discussed in Section III-B, the reference point set for the CDL can be selected from LiDAR depth data or from the object's 3D model. In Table V, we compare the performance of our approach with these two reference point choices, and we observe enhancements in our method when employing geometry augmentation. To provide further clarity, the first method calculates the Chamfer distance by comparing the decoded point set to the input LiDAR-derived point set in a self-supervised fashion. The encoder-decoder structure in this method compels the embedding to acquire a refined depth representation, which fulfills a distinct denoising objective compared to the previously mentioned GFF module. This depth representation aids in mitigating subtle disturbances but may not effectively address substantial noise that alters the overall shape of an object. In the second approach, by contrast, we calculate the CDL by comparing the decoded point set with the point set generated from the 3D model of the corresponding object. In this method, we utilize object models that are not transformed with the ground truth poses. Consequently, this compels the embedding to acquire a pure geometric representation, unaffected by rotation and translation. It is worth highlighting that the 3D model is only required during the training process for loss computation. In contrast, during inference, our method operates without the necessity of a 3D model. This sets our approach apart from the previously discussed RGB-only methods [3, 27, 36]. The object model-augmented CDL outperforms the basic setup, specifically the CDL with LiDAR depth, across all metrics (Table V).

\begin{table} \begin{tabular}{c|c c|c c c c} \hline \hline Methods & Depth CDL & Model CDL & ADD AUC & ADD-S AUC & ADD(1cm) & ADD-S(1cm) \\ \hline \multirow{2}{*}{M8P4-F} & ✓ & ✗ & 27.31 & 87.02 & 24.35 & 66.16 \\ & ✗ & ✓ & **72.89** & **80.16** & **25.85** & **67.78** \\ \hline \hline \end{tabular} \end{table} TABLE V: Effect of Object Geometry Augmented CDL. M8P4-F denotes M8P4 enhanced by the feature filter module. This table depicts the enhancement in model performance when switching the reference point set from being reliant on the depth map to being augmented by the object model.

Fig. 11: Examples of attention map visualizations for both the modality fusion stage (the larger maps in the first row) and the point-wise fusion stage (the smaller ones in the second row) on two objects (_itoen_green_tea_ and _black_marker_). Due to the different ways we concatenate features in the two fusion stages, the token sequence length in modality fusion is twice that in the point-wise fusion process. The attention maps produced in the final layers of modality fusion and point-wise fusion are of sizes \(2000\times 2000\) and \(1000\times 1000\), respectively.

Fig. 12: Effect of Fusion Layer Number on Performance. **Left**: the ADD AUC results of M2P4, M4P4, M8P4, i.e., modality fusion scaling up; **Middle**: the ADD AUC results of M2P1, M2P2, M2P4, i.e., point-wise fusion scaling up; **Right**: the ADD AUC results of M2P1, M2P4, M8P4, i.e., total fusion layers scaling up.

\begin{table} \begin{tabular}{c|c c c c} \hline \hline Methods & ADD AUC & ADD-S AUC & ADD(1cm) & ADD-S(1cm) \\ \hline M8P4 & 72.03 & 86.44 & 19.86 & **70.50** \\ + GFF & **73.31** & **87.82** & **24.35** & 66.16 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Effect of Depth Feature Filtering. M8P4 denotes our model with a fusion stage consisting of 8-layer modality fusion and 4-layer point-wise fusion modules. This table shows the improvement of M8P4 with the further incorporation of geometric feature filtering (GFF).

Fig. 13: Probability Distribution of Reduced Geometric Features. _Left_: before the GFF module. _Right_: after the GFF module.

## VII Conclusion

We have presented DTTDNet as a novel digital-twin localization algorithm to bridge the performance gap for 3D object tracking in mobile applications with critical accuracy requirements.
At the algorithm level, DTTDNet is a transformer-based 6DoF pose estimator, specifically designed to navigate the complexities introduced by noisy depth data, a common issue in mobile AR applications. At the experiment level, we have expanded the scope of our previous work DTTD1 [10] by introducing DTTD2, a new RGBD dataset captured using the iPhone 14 Pro, whose RGBD camera is cheaper but of lower quality than dedicated depth cameras. Through extensive experiments and ablation analysis, we have examined the effectiveness of our method and its robustness to erroneous depth data. Additionally, our research has brought to light new complexities associated with object tracking in dynamic AR environments.
2309.10079
Low-Frequency Internal Gravity Waves are Pseudo-incompressible
Starting from the fully compressible fluid equations in a plane-parallel atmosphere, we demonstrate that linear internal gravity waves are naturally pseudo-incompressible in the limit that the wave frequency $\omega$ is much less than that of surface gravity waves, i.e., $\omega \ll \sqrt{g k_h}$ where $g$ is the gravitational acceleration and $k_h$ is the horizontal wavenumber. We accomplish this by performing a formal expansion of the wave functions and the local dispersion relation in terms of a dimensionless frequency $\varepsilon = \omega / \sqrt{g k_h}$. Further, we show that in this same low-frequency limit, several forms of the anelastic approximation, including the Lantz-Braginsky-Roberts (LBR) formulation, poorly reproduce the correct behavior of internal gravity waves. The pseudo-incompressible approximation is achieved by assuming that Eulerian fluctuations of the pressure are small in the continuity equation, whereas in the anelastic approximation Eulerian density fluctuations are ignored. In an adiabatic stratification, such as occurs in a convection zone, the two approximations become identical. But in a stable stratification, the differences between the two approximations are stark and only the pseudo-incompressible approximation remains valid.
Bradley W. Hindman, Keith Julien
2023-09-18T18:46:38Z
http://arxiv.org/abs/2309.10079v1
# Low-Frequency Internal Gravity Waves are Pseudo-incompressible

###### Abstract

Starting from the fully compressible fluid equations in a plane-parallel atmosphere, we demonstrate that linear internal gravity waves are naturally pseudo-incompressible in the limit that the wave frequency \(\omega\) is much less than that of surface gravity waves, i.e., \(\omega\ll\sqrt{gk_{h}}\) where \(g\) is the gravitational acceleration and \(k_{h}\) is the horizontal wavenumber. We accomplish this by performing a formal expansion of the wave functions and the local dispersion relation in terms of a dimensionless frequency \(\varepsilon=\omega/\sqrt{gk_{h}}\). Further, we show that in this same low-frequency limit, several forms of the anelastic approximation, including the Lantz-Braginsky-Roberts (LBR) formulation, poorly reproduce the correct behavior of internal gravity waves. The pseudo-incompressible approximation is achieved by assuming that Eulerian fluctuations of the pressure are small in the continuity equation, whereas in the anelastic approximation Eulerian density fluctuations are ignored. In an adiabatic stratification, such as occurs in a convection zone, the two approximations become identical. But in a stable stratification, the differences between the two approximations are stark and only the pseudo-incompressible approximation remains valid.

## 1 Introduction

Numerical simulations of convection in low-mass stars, the Earth's atmosphere, giant planets, and many other astrophysical objects all must face the tyranny of sound. Generally, sound waves propagate quickly and have high frequencies; thus, the typical timescale associated with acoustics is far shorter than those arising from convection and large-scale circulations. In a numerical simulation, this short timescale ensures through the CFL condition that sound waves control the size of the timestep that can be taken while still maintaining numerical stability. The difference can be dramatic. For example, at the base of the Sun's convection zone, the speed of sound is roughly \(200\ \mathrm{km\,s^{-1}}\) while the convective flow speed is on the order of \(20\ \mathrm{m\,s^{-1}}\) (e.g., Miesch et al., 2012). A numerical simulation that is forced to track sound waves for stability will need to take \(10^{4}\) times as many time steps to evolve the solution for the same duration as a simulation that could ignore the acoustic wave field. This inflation of the necessary computational work is particularly onerous since the immense timescale difference between the deep convection and the sound waves indicates that the two phenomena are essentially decoupled. A variety of methods have been proposed to mitigate this dilemma; almost all involve modifications to the fluid equations to either temper the impact of sound waves or to remove sound altogether. One way to reduce the influence of sound on the time step is to artificially lower the speed at which sound waves propagate (e.g., Rempel, 2005, 2006; Hotta et al., 2012; Kapyla et al., 2016; Iijima et al., 2019). Successful application of such Reduced Speed of Sound Techniques (RSST) requires that the sound speed be reduced sufficiently to make sound waves tractable, while maintaining enough celerity in the sound waves that they do not interact strongly with the convective motions. A more common solution is to surgically remove terms from the continuity equation such that sound waves are no longer a permissible solution to the fluid equations.
These "sound-proofed" equation sets typically apply to low-Mach number motions with small thermodynamic fluctuations about a hydrostatic background atmosphere. The most venerable of these techniques is the Boussinesq approximation, whereby the fluid is assumed to be incompressible with constant density. In the highly stratified atmospheres of stars and giant planets where the mass density can vary by orders of magnitude, treatments that can account for the stratification are necessary. In these stratified systems, the fundamental presumption is that for sedate motions a displaced parcel of fluid quickly equilibrates thermodynamically with its new surroundings. In astrophysics the most common of these extensions to the Boussinesq framework is the anelastic approximation (e.g., Batchelor, 1953; Ogura and Phillips, 1962; Gough, 1969; Gilman & Glatzmaier, 1981; Bannon, 1996), which removes all density fluctuations that appear in the continuity equation. A similar technique called the pseudo-incompressible approximation is a bit subtler, removing only the influence of Eulerian pressure fluctuations from the continuity equation (e.g., Durran, 1989; Klein, 2009; Vasil et al., 2013). Such sound-proofing techniques have been used extensively in stellar and planetary convection simulations where the convecting layer spans many density scale heights. In regions of efficient convection, where the redistribution of heat and mass by the convective motions efficiently drives the atmosphere towards an adiabatic stratification, the most common forms of the anelastic and pseudo-incompressible equations are identical and either approximation works well. However, in a stably stratified fluid, the two approximations differ to the extent that they may violate their underlying assumptions, leading to different dynamics. Specifically, Klein et al. (2010), Brown et al. (2012) and Vasil et al. (2013) have demonstrated that anelastic formulations do a disservice to internal gravity waves leading to a loss of energy conservation and to large errors in the wave frequencies. Further, Klein et al. (2010) and Vasil et al. (2013) have demonstrated that although the pseudo-incompressible approximation does far better in preserving the properties of internal gravity waves, it too evinces discrepancies from the fully compressible wave forms. Here, we demonstrate that internal gravity waves naturally approach the pseudo-incompressible limit as their frequency becomes very low. The discrepancies noted by Klein et al. (2010) and Vasil et al. (2013) arise only when the wave frequencies become large and the assumption of sedate motions in a state of pressure-equilibrium is lost. We accomplish this by deriving internal gravity waves in a plane-parallel atmosphere with a general stratification and subsequently performing a low-frequency expansion of the local dispersion relation and of the wave functions. We find that, to lowest-order in the frequency, internal gravity waves are incompressive. To the next order in the frequency, they become pseudo-incompressible. All forms of the anelastic approximation fail to produce the correct behavior for both the dispersion relation and the wave functions. In the next section we formulate the anelastic and pseudo-incompressible approximations. Section 3 derives the governing equation for internal gravity waves in a general stratification for a fully compressible fluid. 
We explore the low-frequency limit of these waves in Section 4, deriving the magnitude and ordering of terms in the continuity and momentum equations. In Section 5 we rederive internal gravity waves using three different sound-proofed equation sets and discuss the integrity of each approximation. Finally, in Section 6 we summarize and discuss the implications of our results. ## 2 Sound-proofing formulations ### The Anelastic Approximation The anelastic condition is a relatively simple replacement for the continuity equation that captures significant density variation in the mean properties of the fluid. For instance, in a gravitationally stratified fluid with velocity, \(\mathbf{u}\), and time-averaged density that varies with height, \(\rho_{0}(z)\), the continuity equation is replaced with \[\mathbf{\nabla}\cdot(\rho_{0}\mathbf{u})=0\;. \tag{1}\] This expression can be derived from the full continuity equation, \[\frac{\partial\rho}{\partial t}+\mathbf{\nabla}\cdot(\rho\mathbf{u})=0\;, \tag{2}\] by making two assumptions that are often appropriate for flows of low Mach number: 1) the time derivative of the mass density \(\rho\) is inconsequential and 2) the fractional fluctuations of the density around the background density are small, i.e., \(|\rho_{1}/\rho_{0}|\ll 1\) where \(\rho=\rho_{0}+\rho_{1}\). The popularity of the anelastic approximation arises from two important properties. When the continuity equation is replaced by the anelastic condition, Equation (1), sound waves are removed as a permissible solution to the fluid equations and the mass flux \(\rho_{0}\mathbf{u}\) can be written using stream functions. Brown et al. (2012) and Vasil et al. (2013) both remarked that when the anelastic form of the continuity equation is employed, the fluid equations are no longer energy conserving without modifications to the momentum equation. To enforce conservation of energy, an otherwise unmotivated change to the buoyancy force is required. For an inviscid fluid, the vertical momentum equation can be written in the following form, \[\rho_{0}\frac{Dw}{Dt}=-\rho_{0}\frac{d}{dz}\left(\frac{P_{1}}{\rho_{0}}\right) +\frac{g\rho_{0}}{c_{p}}s_{1}+\frac{N^{2}}{g}P_{1}\;, \tag{3}\] with the pressure \(P\) and specific entropy density \(s\) decomposed into a steady hydrostatic background and a fluctuation about that background, \(P=P_{0}+P_{1}\) and \(s=s_{0}+s_{1}\). The vertical velocity is \(w\), \(c_{p}\) is the specific heat capacity at constant pressure, and \(z\) is the height within the atmosphere with concomitant unit vector \(\mathbf{\hat{z}}\) anti-aligned with gravity, \(\mathbf{g}=-g\mathbf{\hat{z}}\). Further, the quantity \(N^{2}=gc_{p}^{-1}ds_{0}/dz\) is the square of the atmosphere's buoyancy or Brunt-Vaisala frequency. In Equation (3), we have ignored the density fluctuation in the inertial term on the left-hand side, subtracted the steady hydrostatic component from the force balance, and used the ideal gas law to rewrite the density fluctuation in terms of the pressure and entropy fluctuations. To ensure energy conservation, the term involving the buoyancy frequency must be discarded or be physically subdominant. In a convection zone, where efficient heat transport drives the atmosphere towards an adiabatic gradient with \(N^{2}\approx 0\), this approximation is completely justified and has been coined the Lantz-Braginsky-Roberts (LBR) formulation of the anelastic approximation (Lantz, 1992; Braginsky & Roberts, 1995). 
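The regrouping that produces Equation (3) can be verified symbolically. The following SymPy sketch (an illustrative check, not part of the original derivation) uses the linearized ideal-gas law for \(\rho_{1}\) together with the standard ideal-gas relation \(N^{2}=g(1/H-g/c^{2})\), where \(H\) is the density scale height; it should print 0.

```python
import sympy as sp

# Check: -dP1/dz - g*rho1 equals the right-hand side of Eq. (3).
z = sp.symbols('z')
g, gamma, c_p = sp.symbols('g gamma c_p', positive=True)
rho0 = sp.Function('rho0', positive=True)(z)
P0 = sp.Function('P0', positive=True)(z)
P1, s1 = sp.Function('P1')(z), sp.Function('s1')(z)

c2 = gamma * P0 / rho0                       # squared sound speed for an ideal gas
H = -rho0 / sp.diff(rho0, z)                 # density scale height
N2 = g * (1/H - g/c2)                        # squared buoyancy frequency

rho1 = rho0 * (P1/(gamma*P0) - s1/c_p)       # linearized ideal-gas law
force = -sp.diff(P1, z) - g*rho1             # buoyancy + pressure-gradient force
rhs = -rho0*sp.diff(P1/rho0, z) + g*rho0*s1/c_p + N2*P1/g   # Eq. (3) right-hand side
print(sp.simplify(force - rhs))              # expected: 0
```

The identity holds for any stratification; the question raised by the LBR formulation is therefore purely whether the final term of Equation (3) may be dropped.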
Conversely, in a stably stratified region, the term is not small and cannot generally be self-consistently ignored. We will examine two distinct formulations of the anelastic approximation. Both replace the continuity equation with the anelastic condition (1). One of these approximations--which we will dub the "fiducial" anelastic approximation--will make no further assumptions, leaving the momentum equation unmodified. The other formulation will be the LBR anelastic approximation as discussed above, which ensures energy conservation by excising a specific term from the momentum equation. ### The Pseudo-Incompressible Approximation The pseudo-incompressible approximation as proposed by Durran (1989) modifies the continuity equation under the assumption that Eulerian fluctuations of the gas pressure can be ignored. Following Durran (2008), we start by defining the potential density \(\rho_{*}\) for an ideal gas, \[\rho_{*}\equiv\rho\,e^{s/c_{p}}\;. \tag{4}\] If we take the convective derivative of the potential density and utilize the continuity equation (2) and the thermal energy equation, \[\rho T\frac{Ds}{Dt}=Q\;, \tag{5}\] we obtain a prognostic equation for the potential density \[\frac{1}{\rho_{*}}\left(\frac{\partial\rho_{*}}{\partial t}+\mathbf{u}\cdot\mathbf{ \nabla}\rho_{*}\right)=-\mathbf{\nabla}\cdot\mathbf{u}+\frac{Q}{c_{p}\rho T}\;, \tag{6}\] where \(T\) is the temperature and \(Q\) represents all irreversible thermodynamic processes, such as thermal diffusion, viscous heating, radiative transfer, etc. Finally, by invoking Equation (4) and the equation of state for an ideal gas, \[\frac{1}{\rho_{*}}\frac{\partial\rho_{*}}{\partial t}=\frac{1}{\rho}\frac{ \partial\rho}{\partial t}+\frac{1}{c_{p}}\frac{\partial s}{\partial t}=\frac{ 1}{\gamma P}\frac{\partial P}{\partial t}\;, \tag{7}\] we replace the time derivative of the potential density with the time derivative of the gas pressure, \[\mathbf{\nabla}\cdot(\rho_{*}\mathbf{u})=\frac{\rho_{*}}{\rho}\left(\frac{Q}{c_{p}T}- \frac{1}{c^{2}}\frac{\partial P}{\partial t}\right)\;. \tag{8}\] In the preceding equations, \(\gamma\) is the gas's adiabatic exponent and \(c\) is the sound speed given by \(c^{2}=\gamma P/\rho\). Equation (8) is an exact form of the continuity equation for which no approximation has been made other than the gas being ideal. The pseudo-incompressible approximation is achieved by assuming that the term involving the time derivative of the gas pressure is negligible, \[\mathbf{\nabla}\cdot(\rho_{*}\mathbf{u})=\frac{\rho_{*}}{\rho}\frac{Q}{c_{p}T}\;. \tag{9}\] Such an approximation is valid in the limit of infinite sound speed and is consistent with slow motions of low Mach number for which a displaced parcel of fluid rapidly reaches pressure equilibration with its new surroundings. Most importantly, making this approximation removes sound waves from the fluid equations in the same way that anelasticity does. Durran's form of the pseudo-incompressible approximation (Durran, 2008) involves replacing the continuity equation by the preceding equation, but otherwise leaving the other fluid equations unmodified--specifically, the momentum equation remains the same. For isentropic motion, the pseudo-incompressible condition reduces to a form that is reminiscent of the anelastic relation \[\mathbf{\nabla}\cdot(\rho_{*}\mathbf{u})=0\;, \tag{10}\] with the mass density replaced by the potential density. 
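The key step above is the identity (7) relating the potential density to the pressure. A short SymPy sketch (our own illustrative check) verifies it for an ideal gas, assuming \(s=c_{v}\ln(P/\rho^{\gamma})\) up to an additive constant; it should print 0.

```python
import sympy as sp

# Check of Eq. (7): d ln(rho_*)/dt = (1/(gamma*P)) dP/dt for an ideal gas.
t = sp.symbols('t')
c_v, gamma = sp.symbols('c_v gamma', positive=True)
c_p = gamma * c_v
rho = sp.Function('rho', positive=True)(t)
P = sp.Function('P', positive=True)(t)

s = c_v * sp.log(P / rho**gamma)     # ideal-gas entropy, up to an additive constant
rho_star = rho * sp.exp(s / c_p)     # potential density, Eq. (4)

lhs = sp.diff(sp.log(rho_star), t)
rhs = sp.diff(P, t) / (gamma * P)
print(sp.simplify(lhs - rhs))        # expected: 0
```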
However, for flows with low Mach number, thermodynamic fluctuations are small and we can safely linearize Equation (10), replacing the potential density by the potential density of the hydrostatic background atmosphere (denoted by '0' subscripts),

\[\rho_{*0}\approx\rho_{0}e^{s_{0}/c_{p}}=\left(\frac{\hat{\rho}}{\hat{P}^{1/\gamma}}\right)\,P_{0}^{1/\gamma}\;. \tag{11}\]

The last equivalency in Equation (11) arises by noting that the potential density is the density that a fluid parcel would possess if displaced adiabatically to a fiducial height in the atmosphere where \(P_{0}=\hat{P}\), \(\rho_{0}=\hat{\rho}\), and \(s_{0}=0\). Like the anelastic approximation, the flow field can be expressed using streamfunctions when Equations (10) and (11) are valid,

\[\mathbf{\nabla}\cdot\left(P_{0}^{1/\gamma}\mathbf{u}\right)=0\;. \tag{12}\]

We remind the reader that these two equations were derived using two assumptions: 1) the advective time scales are fast compared to diffusion times, i.e., the motion is isentropic, and 2) thermodynamic fluctuations are small compared to the background atmosphere.

## 3 Internal Gravity Waves in a General Stratification

Consider a plane-parallel atmosphere with a gas pressure \(P_{0}\) and mass density \(\rho_{0}\) related through hydrostatic balance, \(dP_{0}/dz=-g\rho_{0}\). Further, let the thermal structure of the atmosphere be general and specified by the vertical variation of the specific entropy density, \(s_{0}\). We start with the linearized fluid equations for a fully compressible ideal gas,

\[\rho_{0}\frac{\partial\mathbf{u}}{\partial t}=-\nabla P_{1}+\mathbf{g}\rho_{1}\;, \tag{13}\]
\[\frac{\partial s_{1}}{\partial t}=-\mathbf{u}\cdot\nabla s_{0}\;, \tag{14}\]
\[\frac{\partial\rho_{1}}{\partial t}=-\nabla\cdot\left(\rho_{0}\mathbf{u}\right)\;, \tag{15}\]
\[\frac{\rho_{1}}{\rho_{0}}=\frac{P_{1}}{\gamma P_{0}}-\frac{s_{1}}{c_{p}}\;. \tag{16}\]

We have ignored rotation, magnetism, and all dissipative mechanisms, including viscosity, thermal conduction, and radiative transfer. The thermodynamic variables \(s_{1}\), \(\rho_{1}\), and \(P_{1}\) are the Eulerian fluctuations of the specific entropy density, the mass density, and the gas pressure, respectively. Since gravity provides the only preferred direction, internal gravity waves can be treated as a 2D phenomenon that propagates vertically and in a single horizontal direction. Let \(\mathbf{\hat{z}}\) be the unit vector that is antiparallel to the constant gravitational acceleration, \(\mathbf{g}=-g\mathbf{\hat{z}}\). Further, let \(\mathbf{\hat{x}}\) be the horizontal unit vector that is aligned with the wave's horizontal direction of propagation. Finally, seek plane-wave solutions with the form

\[\sim f(z)\,e^{ik_{h}x}\,e^{-i\omega t}\;, \tag{17}\]

where \(k_{h}\) is the horizontal wavenumber, \(\omega\) is the temporal frequency, and \(f(z)\) is a vertical wave function. The transformed set of equations can be manipulated to express the velocity and its divergence solely in terms of the Lagrangian pressure fluctuation, \(\delta P\).
The resulting equations are a coupled system of ODEs,

\[\rho_{0}u=-\frac{\omega gk_{h}}{g^{2}k_{h}^{2}-\omega^{4}}\left(\frac{d}{dz}+\frac{\omega^{2}}{g}\right)\delta P\;, \tag{18}\]
\[\rho_{0}w=\frac{i\omega^{3}}{g^{2}k_{h}^{2}-\omega^{4}}\left(\frac{d}{dz}+\frac{gk_{h}^{2}}{\omega^{2}}\right)\delta P\;, \tag{19}\]
\[\nabla\cdot\mathbf{u}=\frac{i\omega}{\rho_{0}c^{2}}\delta P\;, \tag{20}\]

with the vertical coordinate \(z\) as the independent variable and \(u\) and \(w\) being the horizontal and vertical velocity components, \(\mathbf{u}=u\mathbf{\hat{x}}+w\mathbf{\hat{z}}\). The Lagrangian pressure fluctuation is related to the Eulerian pressure fluctuation and the vertical velocity,

\[\frac{\partial}{\partial t}\delta P\equiv\frac{\partial P_{1}}{\partial t}+\mathbf{u}\cdot\nabla P_{0}\;,\qquad\delta P=P_{1}+\frac{g\rho_{0}w}{i\omega}\;. \tag{21}\]

The denominator of Equations (18) and (19) is spatially constant and will appear later. Therefore, for convenience, we make the definition

\[\alpha\equiv g^{2}k_{h}^{2}-\omega^{4}\;. \tag{22}\]

Equations (18)-(20) can be combined to produce a single stand-alone ODE with \(\delta P\) as the dependent variable,

\[\left\{\frac{d^{2}}{dz^{2}}+\frac{1}{H}\frac{d}{dz}+\frac{\omega^{2}}{c^{2}}-k_{h}^{2}\left(1-\frac{N^{2}}{\omega^{2}}\right)\right\}\delta P=0\;, \tag{23}\]

where \(N\) is the buoyancy frequency and \(H\) is the density scale height,

\[N^{2}(z)\equiv g\left(\frac{1}{H}-\frac{g}{c^{2}}\right)=\frac{g}{c_{p}}\frac{ds_{0}}{dz}\;, \tag{24}\]
\[\frac{1}{H(z)}\equiv-\frac{1}{\rho_{0}}\frac{d\rho_{0}}{dz}\;. \tag{25}\]

In Equation (23), the term that involves the sound speed is responsible for the propagation of high-frequency acoustic waves and the term with the buoyancy frequency leads to internal gravity waves. As we will see in the following subsection, the first-derivative term ensures energy conservation for both varieties of wave. Once one has solved for the Lagrangian pressure fluctuation by applying boundary conditions to Equation (23), the velocity components, \(u\) and \(w\), can be found directly through the use of Equations (18) and (19). Subsequently, all of the thermodynamic fluctuations can then be derived through Equations (14), (16), and (21),

\[\begin{split} P_{1}&=\frac{\omega}{k_{h}}\rho_{0}u\;,\quad s_{1}=\frac{c_{p}N^{2}}{i\omega g}w\;,\\ \rho_{1}&=\frac{\omega}{k_{h}c^{2}}\rho_{0}u-\frac{N^{2}}{i\omega g}\rho_{0}w\;.\end{split} \tag{26}\]

All of the thermodynamic fluctuations appear as linear combinations of the two velocity components.

### Energy Conservation and the First Derivative

Here we demonstrate that any viable sound-proofing technique must produce an appropriate coefficient for the first-derivative term that appears in Equation (23). This term is crucial for energy conservation. To see this, consider the vertical energy flux for an acoustic-gravity wave, \(F(z)=\langle w\,P_{1}\rangle\), where the angular brackets \(\langle\,\rangle\) indicate a temporal average over a wave period. Since the second term on the right-hand side of Equation (21) is 90 degrees out of phase with the vertical velocity, its contribution vanishes in a time average, and the energy flux can be written just in terms of the Lagrangian pressure fluctuation,

\[F(z)=\langle w\,\delta P\rangle=\frac{1}{4}\left(w\,\delta P^{*}+w^{*}\,\delta P\right)\;, \tag{27}\]

where the superscript asterisks denote complex conjugation.
By employing Equation (19), one can demonstrate that this flux is inversely proportional to the mass density and proportional to the Wronskian of the Lagrangian pressure fluctuation and its complex conjugate, \[F(z)=-\frac{i\omega^{3}}{4\alpha\rho_{0}}\left(\delta P\frac{d\,\delta P^{*}}{ dz}-\delta P^{*}\frac{d\,\delta P}{dz}\right)\;. \tag{28}\] Abel's Identity tells us that to within an unknown multiplicative constant, \(C\), the Wronskian depends only on the coefficient of the first derivative term in the ODE. For the ODE here, the necessary integration is trivial to perform, \[\mathcal{W}\left\{\delta P,\delta P^{*}\right\}(z)=C\ \exp\left(-\int\frac{dz}{H} \right)=C\ \rho_{0}\;. \tag{29}\] Hence, the energy flux is constant with height even though the coefficients of the ODE are vertically variable, \[F(z)=-\frac{i\omega^{3}C}{4\alpha}=\text{constant}\;. \tag{30}\] The constancy of the energy flux with height in the atmosphere is one way to characterize the conservation of energy by acoustic-gravity waves. From this analysis, we can deduce that any approximation that incorrectly reproduces the first derivative term, may produce wave solutions with energy fluxes that vary with height. Consequently, such approximations will fail to conserve energy. For example, if the first derivative term is artificially set to zero, the flux will be inversely proportional to the mass density and \(F(z)\) will spuriously increase with height. This is the fundamental reason why Brown et al. (2012) and Vasil et al. (2013) found a lack of energy conservation when applying a variety of anelastic approximations to an isothermal atmosphere. Those approximations failed to correctly reproduce the first-derivative term of the ODE. Here we show that it is a general property for any stratification, not just an isothermal one. ### Local Dispersion Relation For a general stratification, the coefficients of the ODE (23) are functions of height and the solutions will not be sinusoidal. However, by making a change of variable that converts the ODE into standard form (i.e., a Helmholtz equation that lacks a first-derivative term), a local dispersion relation can be generated which is appropriate in a WKB framework (e.g., Bender and Orszag, 1999). The required change of variable involves the square root of the mass density, \(\delta P=\left(\alpha\rho_{0}\right)^{1/2}\psi\). We include the constant \(\alpha\) inside the square root purely for the sake of symmetry in later sections when we explore various sound-proofing techniques. Here, its inclusion is unnecessary and only introduces a multiplicative constant which factors out of the resulting ODE, \[\frac{d^{2}\psi}{dz^{2}}+k_{z}^{2}\psi=0\;, \tag{31}\] \[k_{z}^{2}(z)=\frac{\omega^{2}-\omega_{c}^{2}}{c^{2}}-k_{h}^{2} \left(1-\frac{N^{2}}{\omega^{2}}\right)\;. \tag{32}\] In the preceding equations, \(k_{z}(z)\) is a local vertical wavenumber and \(\omega_{c}(z)\) is the acoustic-cutoff frequency which depends on the stratification through the density scale height \(H\), \[\frac{\omega_{c}^{2}}{c^{2}}\equiv\frac{1-2H^{\prime}}{4H^{2}}\;. \tag{33}\] We denote vertical derivatives of atmospheric quantities using a superscript prime, i.e., the vertical derivative of the density scale height is given by \(H^{\prime}\equiv dH/dz\). From the preceding analysis, we see that acoustic-gravity waves vary over two relevant vertical spatial scales: a local vertical wavelength and an envelope scale. 
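The algebra leading to Equations (31)-(33) can be checked symbolically. The following SymPy sketch (our own verification script) substitutes \(\delta P=\rho_{0}^{1/2}\psi\) into Equation (23), confirming that the first-derivative term cancels and that the remaining coefficient of \(\psi\) is the \(k_{z}^{2}\) of Equations (32) and (33); the constant factor \(\alpha^{1/2}\) is omitted since it drops out of the ODE. The final line should print 0.

```python
import sympy as sp

z = sp.symbols('z')
w, kh, g = sp.symbols('omega k_h g', positive=True)
rho0 = sp.Function('rho0', positive=True)(z)
c2 = sp.Function('c2', positive=True)(z)        # squared sound speed c^2(z)
psi = sp.Function('psi')(z)

H = -rho0 / sp.diff(rho0, z)                    # density scale height, Eq. (25)
N2 = g * (1/H - g/c2)                           # squared buoyancy frequency, Eq. (24)
dP = sp.sqrt(rho0) * psi                        # change of variables

# Left-hand side of Eq. (23) applied to dP, divided by the envelope factor:
expr = sp.diff(dP, z, 2) + sp.diff(dP, z)/H + (w**2/c2 - kh**2*(1 - N2/w**2))*dP
expr = sp.expand(sp.simplify(expr / sp.sqrt(rho0)))

wc2 = c2 * (1 - 2*sp.diff(H, z)) / (4*H**2)     # acoustic cutoff frequency, Eq. (33)
kz2 = (w**2 - wc2)/c2 - kh**2*(1 - N2/w**2)     # local dispersion relation, Eq. (32)
print(sp.simplify(expr - (sp.diff(psi, z, 2) + kz2*psi)))   # expected: 0
```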
The wavelength is given by the local dispersion relation (32) and hence depends on the wave frequency as well as the characteristic frequencies of the atmosphere--i.e., the buoyancy frequency \(N\), the acoustic cut-off frequency \(\omega_{c}\), and the Lamb frequency \(k_{h}c\). The envelope scale is associated with vertical variation of the envelope function \(\left(\alpha\rho_{0}\right)^{1/2}\) that appears in the change of variable above. This function provides a local amplitude of the wave function (in a WKB sense). Since the envelope function only depends on the mass density, the envelope scale is solely determined by the atmospheric stratification through the density scale height \(H\). For later convenience, we choose to define the envelope scale \(\Lambda\) as twice the scale length associated with the envelope function such that \(\Lambda=H\), \[\Lambda^{-1}\equiv-\frac{2}{\left(\alpha\rho_{0}\right)^{1/2}}\frac{d\left(\alpha\rho_{0}\right)^{1/2}}{dz}=H^{-1}\;. \tag{34}\] ## 4 Internal Gravity Waves in the Low-frequency Limit Our primary goal is to see how each wave variable scales with frequency and therefore to determine which terms are important in the fluid equations in the low-frequency limit. We start by non-dimensionalizing, using the reciprocal of the horizontal wavenumber \(k_{h}^{-1}\) and the frequency of surface gravity waves \(\sqrt{gk_{h}}\) for the characteristic length and frequency. We choose \(c_{p}\) and \(\hat{\rho}\) to be typical values of the entropy and mass density, respectively. Of particular importance is the non-dimensional wave frequency, \[\varepsilon\equiv\frac{\omega}{\sqrt{gk_{h}}} \tag{35}\] which will serve as a small parameter in our low-frequency expansions. Thus, when we speak of low frequencies, we are considering frequencies that are small compared to those of surface gravity waves, \(\omega^{2}\ll gk_{h}\) or equivalently \(\varepsilon\ll 1\). This assumption will ensure that the acoustic waves and the internal gravity waves decouple cleanly. In combination, Equations (31) and (32) indicate that the vertical wavelength of an internal gravity wave becomes very short as the frequency vanishes. To leading order in the frequency, the vertical wavenumber is determined by the ratio of the buoyancy frequency to the wave frequency, \[k_{z}^{2}\approx k_{h}^{2}\frac{N^{2}}{\omega^{2}}\;. \tag{36}\] Hence, in the low-frequency limit the vertical wavelength becomes a short spatial scale, whereas the envelope or atmospheric scale remains long. This scale separation dictates that we must define a non-dimensional height \(\zeta\) that appropriately rescales the vertical derivatives in the fluid equations to respect the short scale, \[\frac{d}{dz}\equiv\frac{k_{h}}{\varepsilon}\frac{d}{d\zeta}\;. \tag{37}\] If we denote the non-dimensional forms of the wave variables and atmospheric profiles using a tilde, the wave equation (23) becomes, \[\left\{\frac{d^{2}}{d\zeta^{2}}+\frac{\varepsilon}{\tilde{H}}\frac{d}{d\zeta}+\left[\tilde{N}^{2}-\varepsilon^{2}+\frac{\varepsilon^{4}}{\tilde{c}^{2}}\right]\right\}\delta\tilde{P}=0\;, \tag{38}\] where the non-dimensional atmospheric profiles are given by \[\tilde{H}=k_{h}H,\qquad\tilde{c}^{2}=\left(\frac{k_{h}}{g}\right)c^{2},\qquad\tilde{N}^{2}=\frac{N^{2}}{gk_{h}} \tag{39}\] and the non-dimensional Lagrangian pressure fluctuation is defined as follows: \[\delta\tilde{P}=\frac{k_{h}}{g\hat{\rho}}\delta P\;.
\tag{40}\] Similarly, the non-dimensional form for the local dispersion relation is given by \[\frac{k_{z}^{2}(z)}{k_{h}^{2}}=\varepsilon^{-2}\tilde{N}^{2}-\left(1+\tilde{k}_{c}^{2}\right)+\frac{\varepsilon^{2}}{\tilde{c}^{2}}\;, \tag{41}\] where \(\tilde{k}_{c}\) is a nondimensional wavenumber that is the ratio of the acoustic cutoff frequency to the Lamb frequency, \[\tilde{k}_{c}^{2}\equiv\frac{\omega_{c}^{2}}{k_{h}^{2}c^{2}}=\frac{1-2H^{\prime}}{4\tilde{H}^{2}}\;. \tag{42}\] As expected, the leading order behavior of the local vertical wavenumber in Equation (41) demonstrates that the vertical wavelength becomes very short in the low-frequency limit, \(k_{z}^{2}/k_{h}^{2}\sim\varepsilon^{-2}\tilde{N}^{2}\). Modifications to the vertical wavenumber arising from a finite frequency first appear at order unity, \(\mathcal{O}(1)\), whereas the term in the dispersion relation responsible for the propagation of high-frequency acoustic waves appears at \(\mathcal{O}(\varepsilon^{2})\). ### Frequency Dependence of the Other Wave Variables The non-dimensional forms of the other fluid variables can be generated through Equations (18), (19), and (26) and are related to the Lagrangian pressure fluctuation through differential operators, \[\tilde{u}=\left(\frac{k_{h}}{g}\right)^{1/2}u=-\frac{\tilde{\rho}_{0}^{-1}}{1-\varepsilon^{4}}\left(\frac{d}{d\zeta}+\varepsilon^{3}\right)\delta\tilde{P} \sim\mathcal{O}(1)\;, \tag{43}\] \[\tilde{w}=\left(\frac{k_{h}}{g}\right)^{1/2}w=\frac{i\varepsilon\ \tilde{\rho}_{0}^{-1}}{1-\varepsilon^{4}}\left(1+\varepsilon\frac{d}{d\zeta}\right)\delta\tilde{P} \sim\mathcal{O}(\varepsilon)\;,\] (44) \[\tilde{P}_{1}=\left(\frac{k_{h}}{g\hat{\rho}}\right)P_{1}=-\frac{\varepsilon}{1-\varepsilon^{4}}\left(\frac{d}{d\zeta}+\varepsilon^{3}\right)\delta\tilde{P} \sim\mathcal{O}(\varepsilon)\;,\] (45) \[\tilde{s}_{1}=\frac{s_{1}}{c_{p}}=\frac{\tilde{N}^{2}\tilde{\rho}_{0}^{-1}}{1-\varepsilon^{4}}\left(1+\varepsilon\frac{d}{d\zeta}\right)\delta\tilde{P} \sim\mathcal{O}(1)\;,\] (46) \[\tilde{\rho}_{1}=\frac{\rho_{1}}{\hat{\rho}}=-\frac{1}{1-\varepsilon^{4}}\left(\tilde{N}^{2}+\frac{\varepsilon}{\tilde{H}}\frac{d}{d\zeta}+\frac{\varepsilon^{4}}{\tilde{c}^{2}}\right)\delta\tilde{P} \sim\mathcal{O}(1)\;, \tag{47}\] where \(\tilde{\rho}_{0}=\rho_{0}/\hat{\rho}\) is the non-dimensional atmospheric density. We can immediately see that internal gravity waves possess motions that are nearly horizontal for low frequencies. The vertical velocity component \(w\) is small by a factor of \(\varepsilon\). Furthermore, while the Lagrangian pressure fluctuation remains order unity in size, \(\delta P\sim\mathcal{O}(1)\), the Eulerian pressure fluctuation becomes small, \(P_{1}\sim\mathcal{O}(\varepsilon)\). Both the entropy and density fluctuations remain order unity. The fact that the Eulerian pressure fluctuation vanishes in the limit of low frequency is consistent with the pseudo-incompressible approximation and ensures that the internal gravity waves and acoustic waves decouple in that limit. However, since the mass density fluctuation does not vanish, these limits further suggest that this decoupling is **not** accomplished through the anelastic limit. We explore this result fully in the next subsection. In order to make obvious the relative magnitude of terms in subsequent equations, we define alternate dimensionless variables for the vertical velocity and Eulerian pressure fluctuation, \[\tilde{w}\equiv\varepsilon\tilde{W}\;,\qquad\tilde{P}_{1}\equiv\varepsilon\tilde{\Theta}\;.
\tag{48}\] Both \(\tilde{W}\) and \(\tilde{\Theta}\) are order unity because the prefactors in their definitions absorb the leading-order behavior as the frequency becomes small. ### Low-Frequency Limit of the Continuity Equation Consider the dimensional form of the continuity equation (15), where the equation of state (16) is used to replace the density fluctuation, \[i\omega\frac{\rho_{1}}{\rho_{0}}=i\omega\left(\frac{P_{1}}{\rho_{0}c^{2}}-\frac{s_{1}}{c_{p}}\right)=ik_{h}u+\frac{dw}{dz}-\frac{w}{H}\;. \tag{49}\] In order to sound-proof the equation set, we need to eliminate the term involving the Eulerian pressure fluctuation. This term is responsible for producing the pressure fluctuations that generate the restoring force for acoustic oscillations. The anelastic approximation does indeed eliminate this pressure term, but it is overkill and removes the entire left-hand side of the continuity equation above. In particular, the term involving the entropy fluctuation is also thrown away. For low-frequency internal gravity waves, this is inconsistent. If the continuity equation is non-dimensionalized, it becomes obvious that the entropy term is the same order as other terms that are retained by the anelastic approximation, \[i\varepsilon^{2}\frac{\tilde{\Theta}}{\tilde{\rho}_{0}\tilde{c}^{2}}-i\varepsilon\tilde{s}_{1}=\left[i\tilde{u}+\frac{d\tilde{W}}{d\zeta}\right]-\varepsilon\frac{\tilde{W}}{\tilde{H}}\;. \tag{50}\] The leading-order behavior consists of the two order-unity terms that appear in square brackets on the right-hand side of Equation (50). The first correction for nonzero frequency is comprised of the two first-order terms, \(\mathcal{O}(\varepsilon)\); one of these is the aforementioned entropy term. The term involving the Eulerian pressure fluctuation is second order, \(\mathcal{O}(\varepsilon^{2})\). The lowest-order self-consistent approximation that one could make would be to keep just the leading-order terms, resulting in an assumption of incompressibility, \(\nabla\cdot\mathbf{u}\approx 0\). The next self-consistent approximation would be the retention of all zero-order and first-order terms. As we will show next, this approximation is equivalent to the pseudo-incompressible condition. We demonstrate pseudo-incompressibility by using the energy equation (14) to replace the entropy fluctuation in Equation (49) with the vertical velocity and then combining the two first-order terms using the definition of the buoyancy frequency, \(N^{2}=g/H-g^{2}/c^{2}\), \[i\omega\frac{P_{1}}{\rho_{0}c^{2}}=\left[ik_{h}u+\frac{dw}{dz}\right]-\frac{gw}{c^{2}}\;. \tag{51}\] The last term on the right-hand side is equal to the vertical velocity divided by the scale height for the potential density, i.e., the density scale height for an adiabatic density stratification, \[\frac{1}{H_{*}}\equiv-\frac{1}{\rho_{*0}}\frac{d\rho_{*0}}{dz}=\frac{g}{c^{2}}\;. \tag{52}\] Hence, the terms on the right-hand side of Equation (51) can be cleanly combined, \[\nabla\cdot(\rho_{*0}\mathbf{u})=i\omega\frac{\rho_{*0}P_{1}}{\rho_{0}c^{2}}\sim\mathcal{O}(\varepsilon^{2})\;. \tag{53}\] A self-consistent low-frequency approximation is to discard all second-order terms, leading to the pseudo-incompressible approximation, \(\mathbf{\nabla}\cdot(\rho_{*0}\mathbf{u})=0\).
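The algebraic step that combines the two first-order terms can be checked symbolically; the small sketch below (a verification aid, not part of the derivation) confirms that \(-w/H+N^{2}w/g\) collapses to the potential-density term \(-gw/c^{2}\):

```python
import sympy as sp

# Symbolic check of the step from Eq. (49) to Eq. (51): after the entropy
# fluctuation is eliminated, the two first-order terms -w/H and N^2 w/g
# combine, via N^2 = g/H - g^2/c^2 (Eq. 24), into -g w/c^2.
w, H, g, c = sp.symbols('w H g c', positive=True)
N2 = g/H - g**2/c**2
print(sp.simplify(-w/H + N2*w/g))   # -> -g*w/c**2
```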
### Low-Frequency Limit of the Momentum Equation When transformed into the spectral representation, the vertical component of the momentum equation (13) is given by \[-i\omega\rho_{0}w=-\frac{dP_{1}}{dz}-g\rho_{1}\;, \tag{54}\] and non-dimensionalization of this equation yields, \[-i\varepsilon^{2}\tilde{\rho}_{0}\tilde{W}=-\frac{d\tilde{\Theta}}{d\zeta}-\tilde{\rho}_{1}\;. \tag{55}\] It is now obvious from the preceding equation that the inertial term on the left-hand side becomes the smallest term in the low-frequency limit; it is a second-order correction. The right-hand side consists solely of terms that are zero order in the dimensionless frequency. Hence, to first order, the balance is simply the hydrostatic relation between the perturbed pressure and the perturbed density, \[-\frac{dP_{1}}{dz}-g\rho_{1}\approx 0\;. \tag{56}\] The pseudo-incompressible and fiducial anelastic approximations both leave the vertical momentum equation unmodified. But the LBR formulation of the anelastic approximation drops a term whose removal is formally valid only in an adiabatic (or near-adiabatic) stratification. The vertical momentum equation (54) can be rewritten in the following manner \[-i\omega w=-\frac{d}{dz}\left(\frac{P_{1}}{\rho_{0}}\right)+\frac{gs_{1}}{c_{p}}+\frac{N^{2}}{g}\frac{P_{1}}{\rho_{0}}\;, \tag{57}\] either by linearizing Equation (3) or by dividing the vertical momentum equation (54) by the mass density and pulling the density into the gradient operator that appears in the pressure force by use of the chain rule. The LBR formulation of the anelastic approximation removes the term involving the buoyancy frequency, even in stable stratifications where the buoyancy frequency is not small. We demonstrate that this approximation is inconsistent with low-frequency gravity waves by considering the nondimensional form of Equation (57), \[-i\varepsilon^{2}\tilde{W}=-\frac{d}{d\zeta}\left(\frac{\tilde{\Theta}}{\tilde{\rho}_{0}}\right)+\tilde{s}_{1}+\varepsilon\tilde{N}^{2}\frac{\tilde{\Theta}}{\tilde{\rho}_{0}}\;. \tag{58}\] The LBR approximation inconsistently ignores the first-order term while retaining the inertial term (which is second-order). ## 5 The Integrity of Three Sound-Proofing Treatments In this section we examine the success or failure of a variety of sound-proofing methods in reproducing the appropriate behavior of low-frequency internal gravity waves. We have already discussed how all anelastic formulations inconsistently reject terms in the continuity equation and how the LBR anelastic formulation is further inconsistent with its treatment of the vertical momentum equation. Here we will examine how these inconsistencies propagate and produce errors in the dispersion relation and wave functions. To ease comparison, here we provide the local dispersion relation for a fully compressible fluid in both its dimensional and nondimensional forms--i.e., Equations (32) and (41), \[k_{z}^{2}(z) = k_{h}^{2}\left(\frac{N^{2}}{\omega^{2}}-1\right)-\frac{\omega_{c}^{2}}{c^{2}}+\frac{\omega^{2}}{c^{2}}\;, \tag{59}\] \[\frac{k_{z}^{2}(z)}{k_{h}^{2}} = \varepsilon^{-2}\tilde{N}^{2}-\left(1+\tilde{k}_{c}^{2}\right)+\frac{\varepsilon^{2}}{\tilde{c}^{2}}\;. \tag{60}\] Further, in Table 1, we summarize the function \(\alpha(z)\), the local wavenumber \(k_{z}\), and the envelope scale \(\Lambda\) in the low-frequency limit for a fully compressible fluid and for all three sound-proofing treatments. We retain terms only up to first-order in the dimensionless frequency \(\varepsilon\).
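As a baseline for the comparisons that follow, the fully compressible dispersion relation (59) is easy to encode; the helper below is a sketch whose function and argument names are our own choices, evaluated with assumed isothermal numbers:

```python
import numpy as np

def kz2_fully_compressible(om, kh, c, N2, wc2):
    """Square of the local vertical wavenumber, Eq. (59).

    c, N2, and wc2 are the sound speed, squared buoyancy frequency, and
    squared acoustic-cutoff frequency; scalars or arrays over height.
    """
    return kh**2*(N2/om**2 - 1.0) - wc2/c**2 + om**2/c**2

# Example with assumed isothermal values:
g, c, H = 274.0, 7.0e3, 140.0e3
N2, wc2 = g*(1.0/H - g/c**2), c**2/(4*H**2)
print(kz2_fully_compressible(5.0e-4, 2.0e-5, c, N2, wc2))
```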
### Pseudo-incompressible Approximation Since the pseudo-incompressible approximation is self-consistent in its treatment of the continuity equation and correct to first order in the frequency, we expect that this approximation should produce low-frequency internal gravity waves that are correct to first order in the dispersion relation and in the wave functions. To demonstrate that this expectation is true we rederive the wave equation for internal gravity waves but with the continuity equation (15) replaced by the pseudo-incompressible condition, \(\mathbf{\nabla}\cdot(\rho_{*0}\mathbf{u})=0\). \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Equation Set** & \(\alpha(z)\) & **Square of the Vertical Wavenumber** \(k_{z}^{2}\) & **Envelope Scale** \(\Lambda\) \\ \hline \hline Fully-Compressible & \(g^{2}k_{h}^{2}-\omega^{4}\) & \(k_{h}^{2}\left(\dfrac{N^{2}}{\omega^{2}}-1\right)-\dfrac{\omega_{c}^{2}}{c^{2}}+\mathrm{O}(\varepsilon^{2})\) & \(H\) \\ \hline Pseudo-incompressible & \(g^{2}k_{h}^{2}-\omega^{4}+\omega^{2}\dfrac{g}{H_{*}}\) & \(k_{h}^{2}\left(\dfrac{N^{2}}{\omega^{2}}-1\right)-\dfrac{\omega_{c}^{2}}{c^{2}}+\dfrac{N^{2}}{c^{2}}+\mathrm{O}(\varepsilon^{2})\) & \(H+\mathrm{O}(\varepsilon^{2})\) \\ \hline Fiducial Anelastic & \(g^{2}k_{h}^{2}-\omega^{4}+\omega^{2}\dfrac{g}{H}\) & \(k_{h}^{2}\left(\dfrac{N^{2}}{\omega^{2}}-1\right)-\dfrac{\omega_{c}^{2}}{c^{2}}+\dfrac{N^{2}}{4g}\dfrac{H+H_{*}}{HH_{*}}+\dfrac{1}{2g}\dfrac{dN^{2}}{dz}+\mathrm{O}(\varepsilon^{2})\) & \(H_{*}+\mathrm{O}(\varepsilon^{2})\) \\ \hline LBR Anelastic & \(g^{2}k_{h}^{2}-\omega^{4}+\omega^{2}\left(N^{2}+\dfrac{g}{H}\right)\) & \(k_{h}^{2}\left(\dfrac{N^{2}}{\omega^{2}}-1\right)-\dfrac{\omega_{c}^{2}}{c^{2}}-\dfrac{1}{g}\dfrac{dN^{2}}{dz}+\mathrm{O}(\varepsilon^{2})\) & \(H+\mathrm{O}(\varepsilon^{2})\) \\ \hline \end{tabular} \end{table} Table 1: Wave properties achieved under various sound-proofing approximations as indicated in the first column. The second column indicates the function \(\alpha(z)\). The third and fourth columns provide the square of the local vertical wavenumber \(k_{z}^{2}\), and the scale length \(\Lambda\) of the amplitude envelope for internal gravity waves in the low-frequency limit. The wave frequency and horizontal wavenumber are indicated by \(\omega\) and \(k_{h}\), respectively. The atmosphere is characterized by the vertical profiles of the sound speed \(c\), the density scale height \(H\), the scale height for an adiabatic stratification (i.e., the scale height for the potential density) \(H_{*}=c^{2}/g\), the buoyancy frequency \(N\), and the acoustic cutoff frequency \(\omega_{c}\). For the vertical wavenumber and envelope scale, all terms with a magnitude \(\mathrm{O}(\varepsilon^{2})\) or smaller have been neglected. Since the leading-order terms in the vertical wavenumber are \(\mathrm{O}(\varepsilon^{-2})\), the neglected terms are small by a factor of \(\varepsilon^{4}\), i.e., they are fourth order. We simply present the result,
\[\begin{split}\left\{\frac{d^{2}}{dz^{2}}+\left(\frac{1}{H}+\frac{\omega^{2}}{g}\theta_{\rm PI}\right)\frac{d}{dz}-k_{h}^{2}\left(1-\frac{N^{2}}{\omega^{2}}\right)\\ +\left[\frac{N^{2}}{c^{2}}+\frac{\omega^{4}}{g^{2}}\theta_{\rm PI}\right]\right\}\delta P=0\;.\end{split} \tag{61}\] In this expression, \(\theta_{\rm PI}(z)\) is a dimensionless function that depends on the temporal frequency \(\omega\), the horizontal wavenumber \(k_{h}\), and the potential density \(\rho_{*0}\) through the following definitions, \[\alpha_{\rm PI}(z) \equiv g^{2}k_{h}^{2}-\omega^{4}+\omega^{2}\frac{g}{H_{*}}\;, \tag{62}\] \[\theta_{\rm PI}(z) \equiv -\frac{g}{\omega^{2}}\frac{\alpha_{\rm PI}^{\prime}}{\alpha_{\rm PI}}=\frac{g^{2}}{\alpha_{\rm PI}}\,\frac{H_{*}^{\prime}}{H_{*}^{2}}\;. \tag{63}\] Compared to the fully-compressible equations, the quantity \(\alpha_{\rm PI}\) has been augmented by \(\omega^{2}g/H_{*}\), and is therefore no longer a constant function of height. A direct comparison of Equation (61) with the wave equation for a fully compressible fluid (23) reveals that there are three spurious terms: both of the terms involving \(\theta_{\rm PI}\), as well as the term \((N^{2}/c^{2})\delta P\). To demonstrate that all of these spurious terms are small in magnitude and can be safely ignored in the low-frequency limit, we nondimensionalize Equation (61), \[\begin{split}\left\{\frac{d^{2}}{d\zeta^{2}}+\left(\frac{\varepsilon}{\tilde{H}}+\varepsilon^{3}\theta_{\rm PI}\right)\frac{d}{d\zeta}+\tilde{N}^{2}-\varepsilon^{2}\\ +\left[\varepsilon^{2}\frac{\tilde{N}^{2}}{\tilde{c}^{2}}+\varepsilon^{6}\theta_{\rm PI}\right]\right\}\delta\tilde{P}=0\;,\end{split} \tag{64}\] and we recognize that the function \(\theta_{\rm PI}\) is an order-unity quantity for low frequencies, \[\theta_{\rm PI}=\frac{1}{1-\varepsilon^{4}+\varepsilon^{2}\tilde{H}_{*}^{-1}}\,\frac{H_{*}^{\prime}}{\tilde{H}_{*}^{2}}\quad\sim\mathcal{O}(1)\;. \tag{65}\] Thus, all of the spurious terms are second-order or higher in the dimensionless frequency \(\varepsilon\) and the Lagrangian pressure fluctuation that is generated by Equation (61) is correct to first order. Based on this result, we should expect the local dispersion relation also to be correct to first order, and this is indeed the case. The transformation that converts the ODE into a Helmholtz equation has the same form as we found for the fully-compressible equations, \[\delta P=\left(\alpha_{\rm PI}\,\rho_{0}\right)^{1/2}\psi\;, \tag{66}\] but now the function \(\alpha=\alpha_{\rm PI}(z)\) varies with height.
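A short dimensionless sketch (with arbitrary, assumed order-unity atmospheric numbers) makes the ordering of Equations (64)-(65) concrete: the spurious pseudo-incompressible terms fall off as \(\varepsilon^{2}\) relative to the genuine terms they accompany:

```python
# Order-of-magnitude check of Eqs. (64)-(65) with assumed O(1) inputs.
Hts, dHs = 1.2, 0.3        # nondimensional H_* and its derivative H_*'
Ht, N2t, c2t = 1.0, 0.5, 2.0
for eps in (0.3, 0.1, 0.03):
    theta = (dHs/Hts**2)/(1.0 - eps**4 + eps**2/Hts)        # Eq. (65)
    print(eps,
          eps**3*theta/(eps/Ht),                 # spurious/genuine, 1st-derivative
          (eps**2*N2t/c2t + eps**6*theta)/N2t)   # spurious/genuine, zeroth order
```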
The change of variable (66) leads to the following local dispersion relation, \[\begin{split} k_{z}^{2}(z)=k_{h}^{2}\left(\frac{N^{2}}{\omega^{2}}-1\right)-\frac{\omega_{\rm c}^{2}}{c^{2}}\\ +\left[\frac{N^{2}}{c^{2}}-\frac{\omega^{2}}{2g}\left(\theta_{\rm PI}^{\prime}+\frac{\theta_{\rm PI}}{H}\right)+\frac{\omega^{4}}{g^{2}}\left(\theta_{\rm PI}-\frac{\theta_{\rm PI}^{2}}{4}\right)\right]\,,\end{split} \tag{67}\] with a nondimensional form given by \[\begin{split}&\frac{k_{z}^{2}(z)}{k_{h}^{2}}=\varepsilon^{-2}\tilde{N}^{2}-\left(1+\tilde{k}_{c}^{2}\right)\\ &+\left[\frac{\tilde{N}^{2}}{\tilde{c}^{2}}-\frac{\varepsilon^{2}}{2}\left(\frac{\theta_{\rm PI}^{\prime}}{k_{h}}+\frac{\theta_{\rm PI}}{\tilde{H}}\right)+\varepsilon^{4}\left(\theta_{\rm PI}-\frac{\theta_{\rm PI}^{2}}{4}\right)\right]\;.\end{split} \tag{68}\] All of the terms contained in the square brackets in the preceding equations are spurious and do not appear in the local dispersion relation for a fully compressible fluid. However, all spurious terms appear as a correction that is smaller than the leading order behavior by a factor of \(\varepsilon^{2}\) or smaller. Hence, the pseudo-incompressible approximation leads to a local dispersion relation that is correct to first order. Finally, the envelope scale can be read directly from the coefficient of the first-derivative term in the ODE, \(\Lambda^{-1}=H^{-1}+\omega^{2}\theta_{\rm PI}/g\). To first order in the frequency, the envelope scale is simply the density scale height. ### Fiducial Anelastic For the fiducial anelastic approximation, where the only modification to the fully-compressible fluid equations is made to the continuity equation, the resulting ODE for the Lagrangian pressure fluctuation is as follows, \[\begin{split}\left\{\frac{d^{2}}{dz^{2}}+\left(\frac{1}{H_{*}}+\frac{\omega^{2}}{g}\theta_{\rm FA}\right)\frac{d}{dz}-k_{h}^{2}\left(1-\frac{N^{2}}{\omega^{2}}\right)\right.\\ \left.+\left[\left(\frac{\omega^{2}}{c^{2}}+k_{h}^{2}\right)\theta_{\rm FA}-\frac{H_{*}^{\prime}}{H_{*}^{2}}\right]\right\}\delta P=0\;,\end{split} \tag{69}\] where the \(\alpha\) and \(\theta\) functions take on subtly but crucially different forms, \[\alpha_{\rm FA}(z) \equiv g^{2}k_{h}^{2}-\omega^{4}+\omega^{2}\frac{g}{H}\;, \tag{70}\] \[\theta_{\rm FA}(z) \equiv -\frac{g}{\omega^{2}}\frac{\alpha_{\rm FA}^{\prime}}{\alpha_{\rm FA}}=\frac{g^{2}}{\alpha_{\rm FA}}\,\frac{H^{\prime}}{H^{2}}\;. \tag{71}\] Here, \(\alpha_{\rm FA}\) and \(\theta_{\rm FA}\) differ from the pseudo-incompressible case, Equations (62) and (63), by the appearance of \(H\) instead of \(H_{*}\). A direct comparison of Equation (69) with the ODE (23) appropriate for a fully compressible fluid illustrates that fiducial anelastic generates a variety of spurious and incorrect terms. Specifically, the terms in the square brackets are spurious and the entire coefficient of the first-derivative term is incorrect. To ascertain the magnitude of these mistakes, we nondimensionalize, \[\begin{split}\left\{\frac{d^{2}}{d\zeta^{2}}+\left(\frac{\varepsilon}{\tilde{H}_{*}}+\varepsilon^{3}\theta_{\text{FA}}\right)\frac{d}{d\zeta}+\tilde{N}^{2}-\varepsilon^{2}\\ +\ \varepsilon^{2}\left(\theta_{\text{FA}}-\frac{H_{*}^{\prime}}{\tilde{H}_{*}^{2}}\right)+\varepsilon^{4}\frac{\theta_{\text{FA}}}{\tilde{c}^{2}}\right\}\delta\tilde{P}=0\;.\end{split} \tag{72}\] Fiducial anelastic performs rather poorly in reproducing the behavior of low-frequency internal gravity waves.
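To make the first-derivative inconsistency tangible, the sketch below (using the polytropic profiles given later in Equations (82)-(84), with assumed \(m=3\) and \(\gamma=5/3\)) compares the leading coefficients \(\varepsilon/\tilde{H}\) and \(\varepsilon/\tilde{H}_{*}\); the two scale heights differ at order unity, so the fiducial anelastic coefficient is already wrong at first order in \(\varepsilon\):

```python
# Compare the leading first-derivative coefficients of Eqs. (61) and (69)
# on a polytrope (Eqs. 82-84; m = 3, gamma = 5/3 assumed).
m, gam = 3.0, 5.0/3.0
x = 2.0                      # assumed sample depth -k_h z
H = x/m                      # nondimensional density scale height
Hs = gam*x/(m + 1.0)         # nondimensional potential-density scale height
for eps in (0.3, 0.1):
    print(eps, eps/H, eps/Hs, eps/H - eps/Hs)   # mismatch is O(eps), not O(eps^2)
```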
The ODE is correct only to leading order in \(\varepsilon\) with inconsistencies appearing at first-order in the coefficient of the first derivative. The first term in this coefficient contains the reciprocal of the scale height of the potential density, where it should instead possess the reciprocal of the density scale height--see Equation (38). Interestingly, conversion of the ODE to standard form--via the change of variable \(\delta P=\left(\alpha_{\text{FA}}\,\rho_{*0}\right)^{1/2}\psi\)--results in a local dispersion relation that is correct to first order, \[\begin{split} k_{z}^{2}(z)&=k_{h}^{2}\left(\frac{N^{2}}{\omega^{2}}-1\right)-\frac{1+2H_{*}^{\prime}}{4H_{*}^{2}}\\ &+\left[k_{h}^{2}\theta_{\text{FA}}-\frac{\omega^{2}}{2g}\left(\theta_{\text{FA}}^{\prime}-\frac{\theta_{\text{FA}}}{H_{*}}\right)-\frac{\omega^{4}}{4g^{2}}\theta_{\text{FA}}^{2}\right]\;,\end{split} \tag{73}\] or \[\begin{split}\frac{k_{z}^{2}(z)}{k_{h}^{2}}&=\varepsilon^{-2}\tilde{N}^{2}-\left(1+\frac{1+2H_{*}^{\prime}}{4\tilde{H}_{*}^{2}}\right)\\ &+\left[\theta_{\text{FA}}+\frac{\varepsilon^{2}}{2}\left(k_{h}^{-1}\theta_{\text{FA}}^{\prime}-\frac{\theta_{\text{FA}}}{\tilde{H}_{*}}\right)-\frac{\varepsilon^{4}}{4}\theta_{\text{FA}}^{2}\right]\;.\end{split} \tag{74}\] In addition to all of the spurious terms that appear in the square brackets, the acoustic cut-off frequency is incorrect, \[\frac{1+2H_{*}^{\prime}}{4\tilde{H}_{*}^{2}}\neq\tilde{k}_{c}^{2}=\frac{1-2H^{\prime}}{4\tilde{H}^{2}}\;. \tag{75}\] For ease of comparison, in Table 1 we have reworked the right-hand side of Equation (73) to extract the correct form of the acoustic cutoff frequency. Despite these issues, the errors all appear at second order or higher in the dimensionless frequency \(\varepsilon\), meaning that the erroneous terms divided by the leading order behavior are small by a factor of \(\varepsilon^{2}\). The fact that the ODE itself is incorrect at first order manifests in the envelope function, \(\left(\alpha_{\text{FA}}\,\rho_{*0}\right)^{1/2}\), which is wrong already at leading order. As we will see in a subsequent section, this results in first-order errors in the wave functions even though the dispersion relation is correct to first order. ### LBR Anelastic In the framework of the LBR anelastic approximation, in addition to the anelastic treatment of the continuity equation, i.e., \(\mathbf{\nabla}\cdot\left(\rho_{0}\mathbf{u}\right)\approx 0\), a term in the vertical momentum equation is removed. When these two modifications to the fluid equations are adopted, the resulting ODE that describes internal gravity waves becomes, \[\begin{split}\left\{\frac{d^{2}}{dz^{2}}&+\left(\frac{1}{H}+\frac{\omega^{2}}{g}\theta_{\text{LBR}}\right)\frac{d}{dz}-k_{h}^{2}\left(1-\frac{N^{2}}{\omega^{2}}\right)\\ &+\left[\left(k_{h}^{2}+\frac{\omega^{2}}{gH}\right)\theta_{\text{LBR}}-\frac{H^{\prime}}{H^{2}}\right]\right\}\delta P=0\;,\end{split} \tag{76}\] where \(\alpha\) and \(\theta\) are now \[\alpha_{\text{LBR}}(z) \equiv g^{2}k_{h}^{2}-\omega^{4}+\omega^{2}\left(N^{2}+\frac{g}{H}\right)\;, \tag{77}\] \[\theta_{\text{LBR}}(z) \equiv\frac{g^{2}}{\alpha_{\text{LBR}}}\left(\frac{H^{\prime}}{H^{2}}-\frac{1}{g}\frac{dN^{2}}{dz}\right)\;.
\tag{78}\] The non-dimensional form of the ODE becomes \[\begin{split}\left\{\frac{d^{2}}{d\zeta^{2}}&+\left(\frac{\varepsilon}{\tilde{H}}+\varepsilon^{3}\theta_{\text{LBR}}\right)\frac{d}{d\zeta}+\tilde{N}^{2}-\varepsilon^{2}\\ &+\left[\varepsilon^{2}\left(\theta_{\text{LBR}}-\frac{H^{\prime}}{\tilde{H}^{2}}\right)+\varepsilon^{4}\frac{\theta_{\text{LBR}}}{\tilde{H}}\right]\right\}\delta\tilde{P}=0\;.\end{split} \tag{79}\] Despite the inconsistent treatment of the vertical momentum equation, the LBR form of the anelastic approximation generates an ODE that is correct to first order in \(\varepsilon\). The spurious terms that appear in the square brackets are second order or higher and the coefficient of the first derivative is correct to first order. As expected, the local dispersion relation--once again achieved by the change of variable \(\delta P=\left(\alpha_{\text{LBR}}\,\rho_{0}\right)^{1/2}\psi\)--is correct to first order, \[\begin{split} k_{z}^{2}(z)&=k_{h}^{2}\left(\frac{N^{2}}{\omega^{2}}-1\right)-\frac{\omega_{c}^{2}}{c^{2}}-\frac{H^{\prime}}{H^{2}}\\ &+k_{h}^{2}\theta_{\text{LBR}}-\frac{\omega^{2}}{2g}\left(\theta_{\text{LBR}}^{\prime}-\frac{\theta_{\text{LBR}}}{H}\right)-\frac{\omega^{4}}{4g^{2}}\theta_{\text{LBR}}^{2}\;,\end{split} \tag{80}\] and \[\begin{split}\frac{k_{z}^{2}(z)}{k_{h}^{2}}&=\varepsilon^{-2}\tilde{N}^{2}-\left(1+\tilde{k}_{c}^{2}+\frac{H^{\prime}}{\tilde{H}^{2}}\right)\\ &+\theta_{\text{LBR}}-\frac{\varepsilon^{2}}{2}\left(k_{h}^{-1}\theta_{\text{LBR}}^{\prime}-\frac{\theta_{\text{LBR}}}{\tilde{H}}\right)-\frac{\varepsilon^{4}}{4}\theta_{\text{LBR}}^{2}\;.\end{split} \tag{81}\] ### Comparison of the Vertical Wavelengths In the previous subsections we demonstrated that the three approximations generate errors to the vertical wavelength of internal gravity waves that are second order in the dimensionless frequency \(\varepsilon\). Hence, if the only test of fidelity was to reproduce the local dispersion relation, all of the sound-proofing treatments would fare equally well. This is borne out by a comparison of the vertical wavenumber that is achieved in an isothermal atmosphere by each treatment. This type of atmosphere is one of the most lenient of all potential atmospheres because all of the characteristic frequencies, i.e., \(N\), \(\omega_{c}\), and \(k_{h}c\), become constant functions of height, as do the scale heights \(H\) and \(H_{*}\). As a consequence, the vertical wavenumber \(k_{z}\) becomes a constant and the quantity \(\theta\) vanishes identically for all approximations. When \(\theta\) is zero, many of the spurious terms disappear from the local dispersion relations. Figure 1 shows the performance of the three approximations in an isothermal atmosphere. The leftmost panel illustrates the isocontours of the vertical wavenumber achieved in a fully-compressible fluid as a function of horizontal wavenumber \(k_{h}\) and temporal frequency \(\omega\). The remaining three panels provide the same isocontours for the sound-proofing treatment indicated at the top of the panel. The solid black contours in each panel are for the fully-compressible fluid, while the dashed red curves show the same contours under the relevant approximation. The value of each contour is marked in the left-most panel. In each panel, four isocontours of the nondimensional frequency \(\varepsilon=\omega/\sqrt{gk_{h}}\) are overlaid for reference and appear as dotted orange curves.
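The comparison can be reproduced numerically; this sketch evaluates the low-frequency entries of Table 1 in an assumed isothermal atmosphere (\(\theta=0\), \(H^{\prime}=dN^{2}/dz=0\), illustrative parameter values) and prints the relative deviation of each treatment from the fully compressible \(k_{z}^{2}\), which shrinks like \(\varepsilon^{2}\):

```python
import numpy as np

# Relative error of k_z^2 for each treatment (Table 1), assumed isothermal case.
g, c, H = 274.0, 7.0e3, 140.0e3
Hs = c**2/g                         # Eq. (52)
N2 = g*(1.0/H - 1.0/Hs)
wc2_over_c2 = 1.0/(4*H**2)          # H' = 0
kh = 2.0e-5
for eps in (0.05, 0.2):
    om2 = eps**2*g*kh
    base = kh**2*(N2/om2 - 1.0) - wc2_over_c2
    full = base + om2/c**2          # fully compressible, Eq. (59)
    approx = {'pseudo-incompressible': base + N2/c**2,
              'fiducial anelastic': base + (N2/(4*g))*(H + Hs)/(H*Hs),
              'LBR anelastic': base}
    print(eps, {k: v/full - 1.0 for k, v in approx.items()})
```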
To see how well an approximation reproduces the correct behavior, one should compare the red and black curves within a panel of Figure 1. We would expect that the differences should be small for low values of the nondimensional frequency, i.e., in the lower-right portion of the diagram, and large for high values of \(\varepsilon\) (upper left). From the four panels, it is clear that all three approximations reproduce the vertical wavenumber well as long as the dimensionless frequency is small, i.e., \(\varepsilon\lesssim 0.3\). ### Comparison of Wave Cavity Boundaries Since an isothermal atmosphere is so special (because many of the spurious terms in the dispersion relation vanish), it is wise to examine the behavior of the local dispersion relation in a more complicated atmosphere. We have chosen to examine the vertical wavenumber in a polytropically stratified atmosphere. Such atmospheres have thermodynamic profiles that are power laws in the depth, \[\rho_{0}(z) = A(-z)^{m}\,\quad P_{0}(z)=\frac{Ag}{m+1}(-z)^{m+1}\, \tag{82}\] \[H(z) = \frac{(-z)}{m}\,\quad N^{2}(z)=\frac{m(\gamma-1)-1}{\gamma}\frac{g}{(-z)}\,\] (83) \[H_{*}(z) = \frac{\gamma}{m+1}(-z)\,\quad c^{2}(z)=\frac{\gamma g}{m+1}(-z). \tag{84}\] In the expressions above, \(A\) is an arbitrary constant and \(m\) is the polytropic index. Polytropes can be stably or unstably stratified depending on the values of the adiabatic index \(\gamma\) and the polytropic index \(m\); if \(m>(\gamma-1)^{-1}\), the atmosphere is stable to convective overturning. A convenient feature of a polytropic atmosphere is that it is self-similar, lacking an intrinsic spatial scale (see Hindman & Jain, 2022). Therefore, the local dispersion relation becomes independent of the horizontal wavenumber if we express the frequency in terms of our nondimensional frequency \(\varepsilon=\omega/\sqrt{gk_{h}}\) and we write all of the atmospheric profiles using a nondimensional depth \(-k_{h}z\). Because of this property, we can generate a single dispersion diagram that illustrates the vertical wavenumber as a function of dimensionless depth and frequency that is valid for all horizontal wavenumbers. Figure 2 presents the resulting dispersion diagram for each treatment of the fluid equations for a polytropic atmosphere with a polytropic index of \(m=3\) (which is stably stratified for an adiabatic index of \(\gamma=5/3\)). The left-most panel is for a fully-compressible fluid and the right three panels are for the three sound-proofing formalisms. The blue region in each diagram corresponds to those depths in the atmosphere where a wave of the given frequency is vertically evanescent and the black and red contours have the same meaning as in Figure 1. The upper panels show a range of dimensionless frequency that is wide enough to contain both the branch of low-frequency internal gravity waves and the branch of high-frequency acoustic waves (if present). The lower panels show a zoom-in view at low-frequencies that focuses on the gravity waves. Note that, at a given frequency, there are two turning points where the local vertical wavenumber vanishes. Hence, the internal gravity waves are vertically trapped in a wave cavity for \(g\) modes. The turning points are indicated by the thick curves. Similarly, in the fully-compressible fluid, the acoustic waves are trapped in a \(p\) mode cavity. The pseudo-incompressible condition does minimal damage to the \(g\)-mode cavity (see Figure 2b). The boundaries move only slightly even for the highest-frequency waves.
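As an illustration of the turning-point structure (a sketch, not from the text), one can locate the zeros of the fully compressible \(k_{z}^{2}\) in the self-similar polytrope variables; \(m=3\) and \(\gamma=5/3\) are assumed, as in Figure 2:

```python
import numpy as np
from scipy.optimize import brentq

# Turning points (k_z = 0) of the g-mode cavity for a fully compressible
# polytrope, in the variables x = -k_h z and eps = omega/sqrt(g k_h).
m, gam = 3.0, 5.0/3.0

def kz2_over_kh2(x, eps):
    N2t = (m*(gam - 1.0) - 1.0)/(gam*x)      # N^2/(g k_h), from Eq. (83)
    cutoff = m*(m + 2.0)/(4.0*x**2)          # omega_c^2/(k_h^2 c^2), H' = -1/m
    return N2t/eps**2 - 1.0 - cutoff + eps**2*(m + 1.0)/(gam*x)

for eps in (0.05, 0.1, 0.2):
    shallow = brentq(kz2_over_kh2, 1e-3, 1.0, args=(eps,))
    deep = brentq(kz2_over_kh2, 1.0, 1e4, args=(eps,))
    print(eps, shallow, deep)    # the cavity widens as the frequency decreases
```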
Further, the vertical wavenumbers within the cavity are weakly affected even for frequencies that are large enough that we might suspect that the low-frequency limit is invalid. The anelastic models fare poorly, however. The fiducial anelastic approximation does a horrendous job of reproducing the wave cavity boundaries. In fact, there appears to be a residual of the acoustic cavity that is highly distorted and appears at frequencies halfway between the acoustic and gravity wave branches. While the LBR form of the anelastic approximation does not have spurious wave cavities at high frequency, it fails to reproduce the boundaries of the \(g\)-mode cavity with fidelity. The highest-frequency gravity waves that are vertically propagating have frequencies that are too high by about one-third. Further, errors in the vertical wavenumber become noticeably large for relatively low values of the dimensionless frequency, \(\varepsilon>0.1\). ### Errors in the Wave Functions In sections 5.1-5.3, we found that the pseudo-incompressible approximation and the LBR formulation of the anelastic approximation both introduced errors in the Lagrangian pressure fluctuation that appeared at second order. The fiducial anelastic approximation produced errors at first-order. So at first glance the LBR approximation seems to fare well. However, as we shall soon see, when we consider other wave variables, such as the fluid velocity components, the pseudo-incompressible approximation becomes the clear winner. In the same manner that one derives equations (18) and (19) for a fully compressible fluid, similar equations can be derived for each of the approximations. When pseudo-incompressibility is adopted, one obtains the following: \[\rho_{0}u = -\frac{\omega gk_{h}}{\alpha_{\rm PI}}\left(\frac{d}{dz}+\frac{\omega^{2}}{g}\right)\delta P\;, \tag{85}\] \[\rho_{0}w = \frac{i\omega^{3}}{\alpha_{\rm PI}}\left(\frac{d}{dz}+\frac{gk_{h}^{2}}{\omega^{2}}+\frac{1}{H_{*}}\right)\delta P\;. \tag{86}\] To see the magnitude of the spurious terms, we nondimensionalize, \[\tilde{u} = -\frac{\tilde{\rho}_{0}^{-1}}{1-\varepsilon^{4}+\varepsilon^{2}\tilde{H}_{*}^{-1}}\left(\frac{d}{d\zeta}+\varepsilon^{3}\right)\delta\tilde{P}\;, \tag{87}\] \[\tilde{w} = \frac{i\varepsilon\ \tilde{\rho}_{0}^{-1}}{1-\varepsilon^{4}+\varepsilon^{2}\tilde{H}_{*}^{-1}}\left(1+\varepsilon\frac{d}{d\zeta}+\frac{\varepsilon^{2}}{\tilde{H}_{*}}\right)\delta\tilde{P}\;. \tag{88}\] If one compares these expressions with Equations (43) and (44), it is clear that all spurious terms appear at second order in the dimensionless frequency. Since all of the other fluid variables (i.e., \(\rho_{1}\), \(P_{1}\), and \(s_{1}\)) are linear combinations of the two velocity components (see Equation (26)), the wave functions for all of the fluid variables are correct to first order when the pseudo-incompressible approximation is utilized.

Figure 1: Propagation diagrams for an isothermal atmosphere for four treatments of the fluid equations: (a) a fully compressible fluid—i.e., no approximation, (b) the pseudo-incompressible condition, (c) the fiducial anelastic approximation, and (d) the LBR formulation of the anelastic approximation (see Table 1 for a summary). In each panel, the solid black curves correspond to the isocontours of the square of the dimensionless vertical wavenumber \((k_{z}H)^{2}\) for a fully compressible atmosphere (where the density scale height \(H\) is a constant function of height for an isothermal atmosphere). The value of each contour is indicated by a black label in panel a.
Further, the thick black contour corresponds to the zero contour that separates domains of vertical wave propagation \((k_{z}^{2}>0)\) and evanescence \((k_{z}^{2}<0)\). In panels \(b\)–\(d\), the dashed red curves indicate the same contours but for the approximation indicated at the top of the panel. In each panel, the domain of evanescent waves is indicated by the blue shading, while the region of vertical propagation is unshaded. The dotted curves in each panel are isocontours of the dimensionless frequency. Since the dimensionless frequency is a function of wavenumber, \(\varepsilon=\omega/\sqrt{gk_{h}}\), isocontours are curved lines with low values in the lower-right portion of the diagram and high values in the upper left. All approximations reproduce the correct vertical wavenumber when the dimensionless frequency \(\varepsilon\) is small. Differences between the approximations begin to appear for moderate to large values of the dimensionless frequency \(\varepsilon>0.3\).

Both of the anelastic approximations falter. For the fiducial anelastic approximation the nondimensional forms for the two velocity components are \[\tilde{u} = -\frac{\tilde{\rho}_{0}^{-1}}{1-\varepsilon^{4}+\varepsilon^{2}\tilde{H}^{-1}}\left(\frac{d}{d\zeta}-\varepsilon\tilde{N}^{2}+\varepsilon^{3}\right)\delta\tilde{P}\, \tag{89}\] \[\tilde{w} = \frac{i\varepsilon\ \tilde{\rho}_{0}^{-1}}{1-\varepsilon^{4}+\varepsilon^{2}\tilde{H}^{-1}}\left(1+\varepsilon\frac{d}{d\zeta}+\frac{\varepsilon^{2}}{\tilde{H}_{*}}\right)\delta\tilde{P}\, \tag{90}\] and for the LBR anelastic formulation we obtain \[\tilde{u} = -\tilde{\rho}_{0}^{-1}\tilde{\alpha}_{\rm LBR}^{-1}\left(\frac{d}{d\zeta}-\varepsilon\tilde{N}^{2}+\varepsilon^{3}\right)\delta\tilde{P}\, \tag{91}\] \[\tilde{w} = i\varepsilon\ \tilde{\rho}_{0}^{-1}\tilde{\alpha}_{\rm LBR}^{-1}\left(1+\varepsilon\frac{d}{d\zeta}+\frac{\varepsilon^{2}}{\tilde{H}}\right)\delta\tilde{P}\, \tag{92}\] with \(\tilde{\alpha}_{\rm LBR}\equiv 1-\varepsilon^{4}+\varepsilon^{2}\tilde{N}^{2}+\varepsilon^{2}\tilde{H}^{-1}\). Both have errors in the horizontal velocity that appear at first order (i.e., the term involving \(\varepsilon\tilde{N}^{2}\)). The fiducial anelastic approximation has the added shame that the Lagrangian pressure fluctuation itself is only correct to zero order and hence all fluid variables suffer from the same deficiency. For the LBR approximation, the first-order error in the horizontal velocity \(u\) propagates to errors of similar size in the fluctuations of the Eulerian pressure \(P_{1}\) and density \(\rho_{1}\). ## 6 Discussion We have demonstrated that internal gravity waves within a fully-compressible fluid become pseudo-incompressible in the low-frequency limit. Discrepancies from the solutions for a fully compressible fluid appear at second order in the non-dimensional frequency, i.e., the relative errors are \(\mathcal{O}(\omega^{2}/gk_{h})\). Conversely, the two anelastic approximations that we consider are inconsistent in the terms they neglect or retain in the continuity equation and vertical momentum equation.

Figure 2: Propagation diagrams for a polytropic atmosphere under different approximations to the fluid equations. In each panel, the solid black curves correspond to the isocontours of the square of the dimensionless vertical wavenumber \((k_{z}/k_{h})^{2}\) for a fully compressible atmosphere. These contours are plotted versus a non-dimensional depth, \(-k_{h}z\), and the dimensionless frequency, \(\varepsilon=\omega/\sqrt{gk_{h}}\). The thick black contour corresponds to the zero contour that separates domains of vertical propagation (\(k_{z}^{2}>0\)) and evanescence (\(k_{z}^{2}<0\)). The dashed red curves indicate the same contours but for the approximation indicated at the top of the column. The background colors have the same meaning as in Figure 1. The upper panels illustrate a larger range of frequency and capture the high-frequency acoustic branch. The pseudo-incompressible and LBR anelastic approximations eliminate all such acoustic waves. The fiducial anelastic approximation leaves a highly distorted residual domain of propagating acoustic waves. In general, all three approximations do well in reproducing the correct vertical wavenumber when the dimensionless frequency is small, \(\varepsilon\lesssim 0.1\). However, the pseudo-incompressible approximation has the least distortion to the spatial extent of the wave cavity even for frequencies as large as \(\varepsilon\approx 0.3\).
This inconsistency leads to errors in the wave functions that appear at first order, \(\mathcal{O}(\omega/\sqrt{gk_{h}})\). A summary of the fractional errors in the vertical wavenumber, envelope scale length, and in the eigenfunctions appears in Table 2. These errors in the eigenfunctions arise from errors in either the local vertical wavenumber (the short spatial scale) or the amplitude envelope of the oscillations (the long spatial scale)--see Tables 1 and 2. Many of the errors in the local dispersion relation explicitly require vertical variation in the atmospheric profiles of the density scale height and buoyancy frequency. Both Brown et al. (2012) and Vasil et al. (2013) explicitly considered isothermal atmospheres for which the scale heights and the characteristic frequencies are constants. Consequently, many of the errors identified here failed to materialize in those previous studies. Brown et al. (2012) examined the behavior of internal gravity waves under the influence of three distinct anelastic treatments (including the LBR and fiducial anelastic formulations), and found that the LBR formulation suffered from the least deviation from the fully compressible result. Here we have demonstrated that the apparent success of the LBR approximation is only in reproducing the local dispersion relation. If one considers the wave functions directly, the LBR anelastic approximation fails at first order, just like fiducial anelastic. ### Conservation of Energy We can explore conservation of energy under each approximation by computing the vertical energy flux \(F(z)\). Using Abel's Identity, as we did for a fully-compressible fluid in section 3.1, we find a general expression for the energy flux that is valid for all three sound-proofing treatments, \[F(z)=-\frac{i\omega^{3}}{4\rho_{0}\alpha}\mathcal{W}\left\{\delta P,\delta P^{*}\right\}(z)\;. \tag{93}\] Each approximation generates a distinct form for \(\alpha\) and has a different Wronskian because the coefficients of the first-derivative term in the respective ODEs differ.
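A brief numerical sketch (an illustration of this Wronskian argument, with an assumed polytropic background, \(m=3\), \(\gamma=5/3\), and \(g=k_{h}=1\)) shows that integrating the first-derivative coefficient of each ODE yields a flux that is constant for the pseudo-incompressible system but varies with depth for fiducial anelastic:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Flux ~ exp(-int p dz)/(rho0*alpha), with p the first-derivative coefficient.
m, gam, g, kh, om = 3.0, 5.0/3.0, 1.0, 1.0, 0.1
x = np.linspace(5.0, 50.0, 4000)       # dimensionless depth -z
z = -x
rho0 = x**m                            # Eq. (82)
H, Hs = x/m, gam*x/(m + 1.0)

def flux(alpha, p):
    W = np.exp(-cumulative_trapezoid(p, z, initial=0.0))   # Abel's Identity
    return W/(rho0*alpha)

a_pi = g**2*kh**2 - om**4 + om**2*g/Hs                     # Eq. (62)
p_pi = 1.0/H - np.gradient(a_pi, z)/a_pi                   # 1/H + (om^2/g)theta_PI
a_fa = g**2*kh**2 - om**4 + om**2*g/H                      # Eq. (70)
p_fa = 1.0/Hs - np.gradient(a_fa, z)/a_fa
for name, F in (('PI', flux(a_pi, p_pi)), ('FA', flux(a_fa, p_fa))):
    print(name, F[-1]/F[0])   # PI -> 1 (conserved); FA drifts like rho_*0/rho_0
```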
For the pseudo-incompressible equations, using Equations (62) and (63), we find that the vertical energy flux is a constant function of height, \[\mathcal{W}\left\{\delta P,\delta P^{*}\right\}(z) = C\exp\left\{\int\frac{1}{\alpha_{\text{PI}}\rho_{0}}\frac{d\left(\alpha_{\text{PI}}\rho_{0}\right)}{dz}dz\right\} \tag{94}\] \[= C\alpha_{\text{PI}}\rho_{0}\;,\] \[F_{\text{PI}}(z) = -\frac{i\omega^{3}\,C}{4}=\text{constant}\;. \tag{95}\] Hence, energy is conserved. It is interesting to note that we have not utilized the small parameter in this derivation of the energy flux. So, energy is conserved even when the low-frequency expansions have questionable validity because the dimensionless frequency is not small. Performing the same calculations for the two anelastic treatments reveals that the LBR formulation conserves energy (for the same reasons that the pseudo-incompressible equations do) and the fiducial anelastic equations lack energy conservation, \[F_{\text{FA}}(z) = -\frac{i\omega^{3}\,C}{4}\frac{\rho_{*0}}{\rho_{0}}=-\frac{i\omega^{3}\,C}{4}e^{s_{0}(z)/c_{p}}\;. \tag{96}\] \[F_{\text{LBR}}(z) = -\frac{i\omega^{3}\,C}{4}=\text{constant}\;. \tag{97}\] The vertical energy flux \(F_{\text{FA}}\) derived from the fiducial anelastic equations depends on the atmosphere's specific entropy density and, thus, in an atmosphere that is not adiabatically stratified the wave will deposit or extract energy as it travels. ### Applicability in Numerical Simulations In numerical simulations, it is hard to overstate the utility of converting the continuity equation from a prognostic evolution equation to an elliptic PDE constraint, as is accomplished by both the anelastic and pseudo-incompressible approximations, \[\frac{\partial\rho}{\partial t}=-\mathbf{\nabla}\cdot(\rho\mathbf{u}) \longrightarrow\left\{\begin{array}{c}\text{anelasticity}\\ \mathbf{\nabla}\cdot(\rho_{0}\mathbf{u})=0\;,\\ \text{pseudo-incompressibility}\\ \mathbf{\nabla}\cdot\left(P_{0}^{1/\gamma}\mathbf{u}\right)=0\;.\end{array}\right.\] In addition to removing sound waves and hence unthrottling the simulation's timestep, the imposition of constraints of this form allows the fluid velocity to be expressed using stream functions. Of course, this reduces the number of variables that must be evolved from one time step to the next. However, this comes at the expense of increasing the spatial order of the reformulated momentum equations, which in stream-function form are devoid of any elliptic constraints. This may demand auxiliary boundary conditions on the stream functions that are not readily available. Moreover, if linear coupling in the system is treated as explicit in numerical time-stepping algorithms, it is known, specifically for spectral schemes, that the numerical accuracy of the scheme can be degraded at high resolutions.
Fortunately, recent advances have shown that this degradation is avoided if linear couplings remain implicit, at the expense of using fully coupled implicit time-stepping schemes (Julien and Watson, 2009; Marti et al., 2016; Burns et al., 2020; Miquel, 2021). \begin{table} \begin{tabular}{|l|c|c|c|} \hline & **Errors in the** & **Errors in the** & **Errors in the** \\ **Equation Set** & **Vertical Wavenumber \(k_{z}\)** & **Envelope Scale \(\Lambda\)** & **Wave Functions \(\delta P\), \(u\)** \\ \hline \hline Pseudo-incompressible & O(\(\varepsilon^{2}\)) & O(\(\varepsilon^{2}\)) & O(\(\varepsilon^{2}\)), O(\(\varepsilon^{2}\)) \\ \hline Fiducial Anelastic & O(\(\varepsilon^{2}\)) & O(1) & O(\(\varepsilon\)), O(\(\varepsilon\)) \\ \hline LBR Anelastic & O(\(\varepsilon^{2}\)) & O(\(\varepsilon^{2}\)) & O(\(\varepsilon^{2}\)), O(\(\varepsilon\)) \\ \hline \end{tabular} \end{table} Table 2: Magnitude of the fractional errors that are introduced in internal gravity waves by three different sound-proofing techniques. Each column lists the size of the error divided by the leading order behavior for the wave property indicated at the top of the column. The size of each error is presented in terms of the dimensionless frequency \(\varepsilon=\omega/\sqrt{gk_{h}}\). The pseudo-incompressible approximation evinces the smallest errors, all appearing at second order. Both of the anelastic approximations have errors that appear at first order or larger. In the derivation of the pseudo-incompressible condition above, two related assumptions are made. First, the Mach number, Ma, of the flows is small such that the advection timescale is much longer than a sound-crossing time for a typical flow structure. This ensures that fluid motions are in a constant state of pressure equilibration--i.e., the Eulerian pressure fluctuation is small. Second, we have assumed that fluctuations in the potential density are small compared to that of the background state. This latter assumption is self-consistent with low-Mach-number flows. Notably, unlike the anelastic approximation discussed below, it does not restrict density fluctuations to be small compared to that of the background state.
Finally, since we have ignored diffusive effects in the derivation of the pseudo-incompressible constraint, i.e., we have ignored \(Q\) in Equation (8), we have made the further assumption that the Peclet number is large, \(\mathrm{Pe}\gg 1\), such that the thermal diffusion timescale is long compared to the advective timescale. To summarize, for the pseudo-incompressible constraint to be valid, we must have the following ordering of timescales, \[\tau_{\mathrm{sound}}\ll\tau_{\mathrm{adv}}\ll\tau_{\mathrm{diff}}\;, \tag{98}\] or equivalently in terms of nondimensional numbers \[\mathrm{Ma} \equiv \frac{\tau_{\mathrm{sound}}}{\tau_{\mathrm{adv}}}=\frac{U}{c}\ll 1\;, \tag{99}\] \[\mathrm{Pe} \equiv \frac{\tau_{\mathrm{diff}}}{\tau_{\mathrm{adv}}}=\frac{UL}{\kappa}\gg 1\;, \tag{100}\] where \(U\) is a typical flow speed, \(L\) is a typical length scale, and \(\kappa\) is the thermal diffusivity. The validity of the anelastic constraint requires the same assumption of low Mach number, \(\mathrm{Ma}\ll 1\), but places a different stricture on the effectiveness of thermal diffusion. Since we must ignore Eulerian fluctuations of the mass density in the continuity equation, the equation of state dictates that, in addition to small pressure fluctuations, we must have small entropy or temperature fluctuations. In the convection zone of a star or planet, where the stratification is essentially adiabatic, entropy fluctuations are naturally small; anelasticity holds; and the anelastic and pseudo-incompressible conditions are equivalent. However, in a region of stable stratification, the only way that the entropy or temperature fluctuations can remain small is if temperature homogeneity is diffusively maintained across flow structures (see Bannon, 1996). This requires that the thermal diffusion time is short compared to the advective timescale. Summarizing, anelasticity requires \[\tau_{\mathrm{sound}},\tau_{\mathrm{diff}}\ll\tau_{\mathrm{adv}}\;, \tag{101}\] or equivalently \[\mathrm{Ma}\ll 1\;,\qquad\mathrm{Pe}\ll 1\;. \tag{102}\] The limitation of low Mach number is easily met in many astrophysical and geophysical applications. Convection is sedate in the Jovian planets, in the Earth's interior, and in the deep layers of low-mass stars. Wave motions and circulations in the stably stratified regions of stars and planets are similarly often low Mach number. The requirements on the Peclet number are usually the more restrictive of the two assumptions. For example, the thermal diffusion time in the Sun is typically millions of years; using the solar radius as the length scale, \(L=R_{\odot}\approx 700\) Mm, and a thermal diffusivity appropriate for photon diffusion, \(\kappa\sim 10^{7}\) cm\({}^{2}\) s\({}^{-1}\), we obtain \(\tau_{\mathrm{diff}}\sim 16\) Myr. If we consider the meridional circulation at the base of the Sun's convection zone and adopt a typical flow speed of 1 m s\({}^{-1}\), we obtain an advective timescale of 20 years, leading to a Peclet number of \(\mathrm{Pe}\sim 10^{6}\). Clearly, these motions are not anelastic; thermal diffusion cannot act rapidly enough to eliminate the temperature fluctuations generated by advection. However, the motions do satisfy both of the requirements for pseudo-incompressibility, \(\mathrm{Ma}\ll 1\) and \(\mathrm{Pe}\gg 1\). Although large Peclet numbers are common from an astrophysical perspective, numerical simulations are often performed in regimes where \(\mathrm{Pe}\sim\mathcal{O}(1)\).
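These solar estimates are easy to reproduce; the back-of-envelope sketch below simply restates the same assumed numbers:

```python
# Back-of-envelope check of the solar timescale estimates quoted above.
R_sun = 7.0e10       # cm, solar radius (length scale L)
kappa = 1.0e7        # cm^2/s, radiative thermal diffusivity
U = 1.0e2            # cm/s, meridional-flow speed
year = 3.156e7       # s

tau_diff = R_sun**2/kappa            # ~ 1.6e7 yr
tau_adv = R_sun/U                    # ~ 22 yr
print(tau_diff/year, tau_adv/year, tau_diff/tau_adv)   # Pe ~ 1e6
```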
The anelastic approximation offers no resolution to this mismatch between astrophysical and numerically achievable Peclet numbers, but the pseudo-incompressible equations do. The restriction on the Peclet number \(\mathrm{Pe}\) can be relaxed if the irreversible thermodynamic terms are retained, \[\mathbf{\nabla}\cdot(\rho_{*0}\mathbf{u})=\frac{\rho_{*0}}{\rho_{0}}\frac{Q}{c_{p}T_{0}}\;, \tag{103}\] where \(\rho_{*0}\) is the potential density of the background state. Of course, the retention of \(Q\) will usually make it impossible to formulate the problem with stream functions alone, free of any elliptic constraint. Finally, we wish to note a final advantage of the pseudo-incompressible approximation over anelasticity. While both sound-proofing schemes are well justified in a convection zone where the stratification is nearly adiabatic, if one wishes to simulate both stable and unstable regions in the same computational domain, the pseudo-incompressible approximation allows one to do so smoothly with a uniform treatment. The anelastic approximation, in contrast, will result in flows that violate the underlying assumptions of the approximation in the stably stratified regions. We thank Lydia Korre and Rafael Fuentes for enlightening conversations about the pseudo-incompressible approximation. This work was supported by NASA through grants 80NSSC18K1125, 80NSSC19K0267 and 80NSSC20K0193 (BWH) and by NSF by grant DMS 2308338 (KJ).
2309.08636
ChatGPT v Bard v Bing v Claude 2 v Aria v human-expert. How good are AI chatbots at scientific writing?
Historical emphasis on writing mastery has shifted with advances in generative AI, especially in scientific writing. This study analysed six AI chatbots for scholarly writing in humanities and archaeology. Using methods that assessed factual correctness and scientific contribution, ChatGPT-4 showed the highest quantitative accuracy, closely followed by ChatGPT-3.5, Bing, and Bard. However, Claude 2 and Aria scored considerably lower. Qualitatively, all AIs exhibited proficiency in merging existing knowledge, but none produced original scientific content. Interestingly, our findings suggest ChatGPT-4 might represent a plateau in large language model size. This research emphasizes the unique, intricate nature of human research, suggesting that AI's emulation of human originality in scientific writing is challenging. As of 2023, while AI has transformed content generation, it struggles with original contributions in humanities. This may change as AI chatbots continue to evolve into LLM-powered software.
Edisa Lozić, Benjamin Štular
2023-09-14T14:04:03Z
http://arxiv.org/abs/2309.08636v3
Preprint (Please cite the published version at [https://doi.org/10.3390/ffi15100336](https://doi.org/10.3390/ffi15100336)) ###### Abstract Historical emphasis on writing mastery has shifted with advances in generative AI, especially in scientific writing. This study analysed six AI chatbots for scholarly writing in humanities and archaeology. Using methods that assessed factual correctness and scientific contribution, ChatGPT-4 showed the highest quantitative accuracy, closely followed by ChatGPT-3.5, Bing, and Bard. However, Claude 2 and Aria scored considerably lower. Qualitatively, all AIs exhibited proficiency in merging existing knowledge, but none produced original scientific content. Interestingly, our findings suggest ChatGPT-4 might represent a plateau in large language model size. This research emphasizes the unique, intricate nature of human research, suggesting that AI's emulation of human originality in scientific writing is challenging. As of 2023, while AI has transformed content generation, it struggles with original contributions in humanities. This may change as AI chatbots continue to evolve into LLM-powered software. Keywords: generative AI, large language model (LLM), ChatGPT, Bard, Bing, scientific writing, digital humanities, archaeology ## Highlights * The article evaluates the scientific writing skills of six AI chatbots in the humanities and archaeology. * The AI chatbots compared are: ChatGPT-3.5, ChatGPT-4, Bard, Bing Chatbot, Aria, and Claude 2. We also tested two ChatGPT-4 plugins: Bing and Scholar. * ChatGPT-4 outperforms the other chatbots in quantitative accuracy, but is unable to "pass an undergraduate exam" in humanities. * The study demonstrates the _limited potential of AI in generating original scientific contributions_, underscoring the unique value of human researchers. * The growth in the size of large language models appears to have reached a plateau. * As the size of language models like ChatGPT stabilises, it is important to understand their capabilities and limitations in the academic environment. ## Graphical abstract The race for parameters: LLMs grow exponentially, but after the GPT-3 "jump" the content is only marginally improved. ## 1 Introduction: AI's great inflection point In recent human history, the ability to write well was considered essential to human progress and professionalism. Creative expression was traditionally seen as a defining characteristic of humanity and the pinnacle of human achievement. This view is still reflected in the way universities cultivate their students' writing skills. Until recently, novel artifacts such as literary works, scientific texts, art, and music were difficult to create and only attainable by talented experts [1; 2; 3]. We must reckon with that changing! The current generation of openly available generative AI has rightly been called AI's great inflection point. Generative AI is shaping up to become a general purpose technology [4], a "fundamental, horizontal technology that will touch everything in our lives" (Tim Cook, Apple CEO, speaking at Universita Degli Studi di Napoli Federico II in Naples, Italy, 29.9.2022). The most recent and disruptive advance in generative AI has been a leap-frog development in the field of large language models (hereafter LLMs).
LLMs are based on deep neural networks and self-supervised learning, which have been around for decades, but the amount of data with which the current models were trained led to an unprecedented and, to some extent, unexpected performance leap. Current LLMs belong to foundation models that are pre-trained on large datasets using self-supervision at scale and then adapted to a wide range of downstream tasks. This centralisation is crucial for harnessing the enormous computing power required to create them, but it also replicates all potential problems such as security risks and biases [3; 5].

Currently, the most powerful LLMs are generative pretrained transformers (hereafter GPTs), which are based on the Transformer, a type of neural network architecture. The Transformer uses a mechanism called attention to weigh the influence of different input words on each output word. As a result, instead of processing words in a sentence sequentially, it constructs relationships between all words in a sentence at once [6] (a minimal code sketch of this computation is given below). An additional key advantage of GPTs over earlier models is that the learning process can be parallelised and the models can be trained on an unprecedented scale.

The scale of an LLM depends on the size of the ingested datasets, the amount of training compute, and the number of parameters it can support [7; 8]. Parameters are numerical values that determine how a neural network processes and generates natural language. The more parameters a model has, the more data it can learn from and the more complex tasks it can perform. GPT-3 from 2020, for example, supports 175 billion parameters and has been trained on 45 TB of text data, including almost the entire public web [9]. PaLM, the 2022 LLM from Google Research, is a 540-billion-parameter GPT model trained with the Pathways system [10], and GPT-4, launched in 2023, supports an estimated 1.8 trillion parameters [11]. That is 1,800,000,000,000 parameters with which the model interacts to generate each individual token (a word or a part of a word). Multiplied by ChatGPT's 100,000,000 monthly users, each processing just one prompt with 100 tokens daily, this brings us to a staggering 18,000,000,000,000,000,000,000 or 18 × 10^21 computations, which explains the daily costs of running ChatGPT at $700,000 [12]. One can only imagine the environmental costs of the operation.

When given an input or prompt, GPT LLMs are able to predict the next word in the context of all previous content and can thus generate creative outputs such as complete sentences and answers or even essays and poems. In essence, they generate a pattern of words based on the word patterns on which they have been trained, by applying attention to context and a controlled amount of randomness. But because they have been trained on such a large amount of text, the quality of the text is such that GPT-4, for example, has been able to pass or even ace some standardised academic and professional tests [13].

Figure 1: A high level overview of an AI chatbot and an LLM-powered software (constructed based on information from [14; 15]).

Users interact with an LLM via AI conversational agents or AI chatbots. An AI chatbot is a software application that uses AI techniques to simulate human conversation. In the past, AI chatbots did not use LLMs, and in the future, AI chatbots are likely to use other resources besides LLMs, such as curated structured data, computational algorithms, and real-time data.
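To make the mechanism concrete, the following is a minimal sketch of the scaled dot-product attention described above, written in plain NumPy. The function name, array shapes, and toy values are illustrative assumptions for this sketch, not code from any production LLM.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Relate every token to every other token at once and mix the values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)        # pairwise token-to-token affinities
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ values                       # attention-weighted mix of values

# Toy "sentence" of 4 tokens, each embedded as an 8-dimensional vector.
rng = np.random.default_rng(seed=0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): every output token is influenced by all four inputs
```

In a real GPT model this computation is repeated across many attention heads and layers, with learned projection matrices producing the queries, keys, and values; the sketch only illustrates why all words in a sentence can be related at once rather than processed sequentially.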
Currently, however, AI chatbots are mostly used as conduits to GPT LLMs, and thus the terms AI chatbot and LLM are sometimes used interchangeably (Fig. 1). The most talked about AI chatbot in history was ChatGPT, built on the GPT LLM called GPT-3.5 [16] ([https://chat.openai.com](https://chat.openai.com)). It was released to the public in November 2022 and had 1 million active users in its first week ([https://twitter.com/sama/status/1599668808285028353](https://twitter.com/sama/status/1599668808285028353)) and over 100 million after two months [17], comfortably beating the likes of Facebook, TikTok, Instagram and Uber to that milestone. Since then, Google has launched Bard [18] ([https://bard.google.com](https://bard.google.com)), Microsoft the Bing chatbot [19] ([https://www.bing.com](https://www.bing.com)), OpenAI ChatGPT Plus [20] ([https://chat.openai.com](https://chat.openai.com)), Opera Aria [21] ([https://www.opera.com/features/browser-ai](https://www.opera.com/features/browser-ai)), Anthropic Claude 2 [22; 23] ([https://claude.ai](https://claude.ai)), and more.

AI chatbots have implications for a range of practices and disciplines, and for many facets of our daily lives. They can be used to compose emails, essays, computer code, and speeches, or to translate text into another language or simply give it a different tone. They can be empowering because they lower the barrier to entry, seamlessly complement human work, and make us more productive and creative [3; 24; 25]. Many (though mostly proponents of entities with a stake in AI chatbots) tout that AI chatbots will take the drudgery out of everyday office work by automating various tasks and ultimately increase productivity across the economy, e.g., [26; 27].

But AI chatbots can also be terrifying and are seen by many as potentially threatening and having far-reaching negative consequences. They could reinforce the biases we already experience, undermine our trust in information, and take away our ability to determine what is real and what is not. Equally important, they are likely to upend the creative and knowledge industries and many types of work that have been primarily done by well-compensated professionals, such as artists, writers, executives, and programmers [3; 24; 25]. A recent study of the potential impact on the labour market found that LLMs and LLM-powered software will affect between 47% and 56% of all work-related tasks, with higher-income jobs being more affected. Of particular interest to our article is that roles that rely heavily on science and critical thinking show a negative correlation with exposure to LLMs, yet there is some exposure [4].

Thus, AI experts, journalists, policymakers, and the public are increasingly discussing a wide range of important and urgent risks of AI, such as an AI race, organizational risks, rogue AIs, the reinforcement of social inequalities, the remaking of labour and expertise, and the exacerbation of environmental injustices [28; 29; 30; 31; 32; 33]. Questions are being asked over safety [34; 35], capabilities [36], massive workforce redundancy [37], and legality [38; 39]. This has led to calls for a pause in AI development [40], although there are doubts that such attempts would have any impact [41]. In short, the opportunities that generative AI presents for our lives, our communities and our society are as great as the risks it poses [42]. There are those who believe that generative AI will change our work and our lives in general for the better.
And there are others who believe that generative AI will disastrously encroach on areas best navigated by sentient beings. Regardless, all agree on an urgent need for safeguards [43; 44; 45; 46], and the first steps have already been taken [47]. All this is even more true for the planned next step, artificial general intelligence (hereafter AGI). The term AGI describes AI systems that will generally be more intelligent than humans and are planned to become a reality within the next 10 years. After the introduction of the first AGI, the world could be extremely different from how it is today [48].

And how do academia and the way we create and write research and scholarly articles fit in? AI chatbots have the potential to revolutionise academia and scholarly publishing [49]. In fact, it seems that academia will be among the first industries to go through this process, since academics and students represented two of the top three occupational groups among the early adopters of ChatGPT [50]. AI chatbots (most of the attention to date was directed to ChatGPT) have already been recognised as a powerful tool for scientific writing that can help organise material, proofread, draft, and generate summaries [51; 52; 53]. The scientific community is also actively testing their ability to generate entire papers with minimal human input. The consensus is that AI chatbots are able to create scientific essays and reports of scientific experiments that appear credible but are a combination of true and entirely fabricated information [49; 51; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63].

Unfortunately, there is a public perception that ChatGPT is already capable of generating academic papers that get peer-reviewed and published, e.g., [64; 65], which may add to public scepticism about science. This is not the case. For example, the arXiv repository of pre-prints ([https://arxiv.org](https://arxiv.org)), the AI community's most popular publication forum, shows no results for ChatGPT (in any variation) as a (co-)author (tested on 16 August 2023). We are aware of a single such attempt that attracted a lot of public attention but did not get peer reviewed and has no notable scientific value [55]. Regardless, there is a clear consensus among researchers that AI chatbots will be widely adopted in scientific writing in the near future, and it is thus crucial to reach an accord on how to regulate their use [63; 66].

However, to successfully discuss regulation, a better understanding of AI chatbots is needed. This requires more systematic testing of their capabilities, which will provide a more robust understanding of the strengths and weaknesses of the technology. This process has been likened to the approval process that drugs go through. Assessment of AI systems could allow them to be deemed safe for certain applications and explain to users where they might fail [66]. To this end, there is a growing body of testing and benchmarking of generative AI models, e.g., [13; 67; 68]. The standard methodology in machine learning is to evaluate the system against a set of standard benchmark datasets, ensuring that these are independent of the training data and span a range of tasks and domains. This strategy aims to distinguish real learning from mere memorisation. However, this approach is not ideally suited to our needs and to the study of LLM-based AI chatbots, for three reasons. First, only the creators of proprietary LLMs have access to all the training details needed for detailed benchmark results.
Second, one of the key aspects of the intelligence of LLMs is their generality and ability to perform tasks that go beyond the typical scope of narrow AI systems. Evaluation metrics for benchmarks designed for such generative or interactive tasks remain a challenge. The third and perhaps most important reason is that in this article we are interested in how well AI chatbots perform in human tasks. To evaluate this, methods closer to traditional psychology, leveraging human creativity and curiosity, are needed [69]. Such an approach has already been taken, and there are several evaluations of the performance of AI chatbots in scientific writing, but most of them focus on medicine and similar fields [49; 51; 56; 57; 58; 59; 60; 61; 62; 63; 67; 68; 69; 70; 71; 72]. We are not aware of any such test designed specifically for the humanities. Therefore, more tests and, we believe, more types of tests on the performance of AI chatbots in scientific writing are urgently needed.

With this in mind, the aim of this article was to design and conduct a test of AI chatbots' abilities in scientific writing in the humanities. First, we were interested in their ability to generate correct answers to complex scientific questions. Second, we tested their capacity to generate original scientific contributions in humanities research. Since AI chatbots are developing at a staggering pace, our results apply to the state of affairs in the third quarter of 2023 (23Q3). To achieve this, we used an interdisciplinary case study combining archaeology, historiography, linguistics and genetic history. We created two one-shot prompts to ask complex scientific questions, and we fed them to each of the six AI chatbots tested: ChatGPT-3.5, ChatGPT-4, Bard, Bing Chatbot, Aria, and Claude 2. We also tested two ChatGPT-4 plugins: Bing and ScholarAI. The generated contents were evaluated against each other and against the human-generated content. The main reason for the comparison between the AI chatbots was to create a baseline for the staggering speed at which they evolve. Comparison with human-generated responses served as a starting point for the discussion on how upending, transformative, and disruptive generative AI models will be in the humanities in the future.

## 2 Materials and methods

### AI Chatbots

This section describes the AI chatbots that were tested. As they are all proprietary commercial products, there is often not much detail available, let alone in the form of peer-reviewed articles. Our descriptions are therefore based on various sources such as blogs, social media and help pages. We do not go into the specifics of the underlying LLMs, as this is a specialised topic on which there is an extensive literature, e.g., [73; 74].

First, the criteria for selecting the six AI chatbots should be elucidated. As mentioned earlier, most of the previous studies have only analysed ChatGPT-3.5. Its inclusion, as well as that of its successor ChatGPT-4, was therefore a given. The Bing chatbot was included because it was arguably the most advanced freely available AI chatbot at the time of the test. Bard was included because it is seen by many as the only challenger to ChatGPT's hegemony. We also wanted to include two chatbots that use an application programming interface (hereafter API) to access an LLM. APIs are the only available means for "smaller" developers, i.e., anyone other than OpenAI/Microsoft or Google, to access state-of-the-art LLMs. We chose Aria and Claude 2, which use APIs from OpenAI and Google, respectively.
If Aria and Claude 2 performed on par with ChatGPT and Bard, it would signal that generative AI technology is indeed being developed openly "for all humanity", and vice versa. The two plugins, ChatGPT with Bing and ScholarAI, were chosen from the rapidly growing selection as the two most relevant to the task of scientific writing. Baidu's ERNIE bot ([https://yiyan.baidu.com](https://yiyan.baidu.com)), on the other hand, was not considered because at the time it was only available with a Chinese interface and required a Baidu login and the Baidu app (also only available in Chinese) to use.

_ChatGPT-3.5_, sometimes called ChatGPT or GPT-3.5, is an AI chatbot offered as a free service by OpenAI ([https://chat.openai.com](https://chat.openai.com); accessed on 11 October 2023). It was fine-tuned from a model in the GPT-3.5 series, more specifically gpt-3.5-turbo, in 2022. This autoregressive language model has the same number of parameters as the largest model from the 2020 GPT-3 series, namely 175 billion. The model was trained using the same methods as InstructGPT-3, but with slight differences in the data collection setup and by using supervised fine-tuning. To predict the next token in the text, it was pre-trained on approximately half a trillion words and improved by task-specific fine-tuning datasets with thousands or tens of thousands of examples that primarily used reinforcement learning from human feedback [75]. ChatGPT-3.5 achieved strong performance on many NLP datasets, including translation and question answering, as well as on several tasks requiring on-the-fly reasoning or domain adaptation. Its breakthrough results paved the way for a new generation of LLMs by demonstrating that scaling up language models greatly increases performance. GPT-3.5 is also available as an API [9]. It may come as a surprise that the core team that developed ChatGPT initially consisted of only around 100 experts, although crowdworkers were also involved as a so-called Short term alignment team [76].

_ChatGPT-4_, also known as ChatGPT Plus or GPT-4, is an evolution of ChatGPT-3.5. It is based on a new model with 10 times the number of parameters, estimated at 1.8 trillion [11]. The chatbot is available as the paid service ChatGPT Plus ([https://chat.openai.com](https://chat.openai.com)) and is based on a large-scale multimodal model that can accept both image and text input and produce text output; at the time of our test (August 2023), however, only text input was possible. Another core component compared to ChatGPT-3.5 was the development of an infrastructure and optimisation methods that behave predictably across a wide range of scales. Reinforcement learning from human feedback is also an ongoing process. ChatGPT-4 demonstrates human-level performance on various standardised professional and academic benchmarks, including passing a simulated bar exam with a score near the top 10% of test takers [13].

The ChatGPT service has opened up to third-party plugins [77], and we tested the two relevant plugins that existed in July 2023. The plugins are available within the ChatGPT Plus service, but the whole service is in beta testing. For the ChatGPT-4 Bing plugin (henceforth _ChatGPT w/ Bing_), no information is available on how it processes its prompts. Furthermore, the plugin was only briefly available as an early beta in late June; it was temporarily unavailable in July and was missing from the plugin store in August.
The results must therefore be considered only as an indicator of the final capabilities.

The ChatGPT-4 ScholarAI plugin (henceforth ScholarAI) was also under active development during our test. No documentation was available, but the plugin provided metadata about how it processed the prompt (Appendix A: L. 588, footnote 24; Fig. 2), and ScholarAI provided additional information (personal communication with Lakshya Bakshi, CTO and Co-founder of ScholarAI). First, it extracted keywords from the prompt, which were then recombined into a query. Based on this query, it returned the top results from the ScholarAI database of "40M+ peer-reviewed papers". The user then either confirmed the selection or requested a further search. When the user was satisfied with the selection of articles, ScholarAI fed the ChatGPT-4 LLM with the content. ScholarAI ensures that the LLM receives the source data and tries to get ChatGPT to discuss only the content provided to it, but this is a work in progress.

Figure 2: Screenshot of the ScholarAI plugin for ChatGPT Plus demonstrating its inner workings.

The _Bing Chatbot_ is available either as a Bing Chat or a Bing Compose service and can be used free of charge in the Microsoft Edge browser and in the Bing app ([https://www.bing.com/new](https://www.bing.com/new)). The Bing Chatbot is based on proprietary technology called Prometheus, an AI model that combines Bing's search engine with ChatGPT-4. When prompted by a user, it iteratively generates a series of internal queries through a component called Bing Orchestrator. By selecting the relevant internal queries and leveraging the corresponding Bing search results, the model receives up-to-date information so that it can answer topical questions and reduce inaccuracies. For each search, it ingests about 128,000 words from Bing results before generating a response for the user. In the final step, Prometheus adds relevant Bing search responses and is also able to integrate citations into the generated content. This is how Prometheus grounds ChatGPT-4. However, for prompts that the system considers to be simple, it generates responses using Microsoft's Turing language models, which consume less computing power ([https://www.linkedin.com/pulse/building-new-bing-jordi-ribas](https://www.linkedin.com/pulse/building-new-bing-jordi-ribas); Fig. 3). The most obvious difference between Chat and Compose is that the former is limited to about 200 words and the latter offers more setting options for the generated content. In our tests, however, the two generated significantly different content (Appendix A).

_Aria_ is a browser AI available for free in the Opera browser ([https://www.opera.com](https://www.opera.com)). It was "built on Opera's AI engine, based on several Large Language Models (e.g. Generative Pre-trained Transformer built by OpenAI)." It was primarily designed to support and enhance the browsing experience by compiling information from various online sources. In addition, it can be used as an AI chatbot to assist in generating texts on any topic in any style, for which it can ingest web content. One of Aria's most important features is its ability to convert certain keywords in its responses into web links [21; 78].

_Bard_ is an AI chatbot available as a free service from Google to a (rapidly growing) limited number of users [18]. It is based on a lightweight and optimised version of Language Models for Dialogue Applications (hereafter LaMDA). LaMDA is a family of GPT LLMs [6] designed specifically for dialogues.
First announced during the 2021 Google I/O keynote, it has up to 137 billion parameters and has been pre-trained with 1.56 trillion words from public dialogue data and web text. In addition, LaMDA was fine-tuned for safety and factual grounding. The former was achieved by filtering candidate responses using a LaMDA classifier and data annotated by crowdworkers, the latter by enabling the model to consult external sources, such as an information retrieval system, a language translator, and a calculator. Therefore, Bard interacts with an external information retrieval system to improve the accuracy of the facts provided to the user [79; 80; 81].

_Claude 2_ is an AI chatbot released by Anthropic, a public benefit corporation, and is currently freely available in the US and UK via a website (claude.ai) and as an API. The underlying LLM was developed on Google's hardware, but few details are available about the model's pre-training. Claude 2 stands out for its fine-tuning. In addition to reinforcement learning with human feedback, so-called constitutional AI was used. This method was developed by Anthropic researchers and requires the model to respond to a large number of questions. It is then instructed to make its answers less harmful by adhering to the "constitution". Finally, the model is adjusted to generate such less harmful content in response to the initial prompt. Claude's constitution, a document created by the developers, is based on the United Nations Declaration of Human Rights and other principles that reflect non-Western perspectives. Another unique feature of the Claude 2 AI chatbot is the size of its context window of 100,000 tokens, or about 75,000 words, which allows users to paste or upload large documents or entire books and prompt questions based on their content [82; 83; 22].

Figure 3: Prometheus, a high level overview of the workflow (adapted after [https://www.linkedin.com/pulse/building-new-bing-jordi-ribas](https://www.linkedin.com/pulse/building-new-bing-jordi-ribas)).

In summary, in this article we compare three different types of AI tools: AI chatbots (ChatGPT, Bard, ChatGPT w/ Bing, Claude 2), a new generation of search engines (Bing Chat, Aria), and tools powered by ChatGPT (Bing Compose, ScholarAI). Despite these differences, all except Bing Chat were able to comply with our prompts and were therefore suitable for our test. It must be emphasized that the tested AI chatbots were designed as general purpose AI chatbots capable of human-like conversation, and not to "do science". Such downstream applications can be expected in the near future.

### Domain-specific scientific background

The case study chosen for testing AI chatbots was the migration of the South Slavs, with a follow-up prompt on the Alpine Slavs, a subgroup of the South Slavs. The authors' thematic explanation of the case study can be found in the appendix (Appendix A: L. 383-439 and 799-855). The migration of the Slavs, including the South Slavs, has been a research topic for almost a century. Notwithstanding this, the rapid spread of the Slavic language in the second half of the first millennium CE remains a controversial topic [84; 85; 86; 87; 88; 89; 90; 91; 92]. It is part of the grand narrative of the "dawn of European civilisation". The Europe we live in today emerged in the centuries after the decline of the Roman Empire and was importantly shaped by the ancient Slavs, among others.
The current scientific debate on this issue revolves around the gene pool landscape on the one hand and the so-called ethnic landscape on the other. Until the 1950s, migration was assumed to be the main process of change, e.g., [93], and peoples and tribes were understood as caroming around the continent like culture-bearing billiard balls [94]. It was during this period that the term Migration period was coined. Since the 1960s, the understanding of ethnic identity has shifted to the concept of dispersed identities, which states that people fluidly adopt different identities as changing social circumstances dictate, e.g., [95]. Today, most assume that hardly any physical migration took place, but rather that ideas and knowledge were passed on, e.g., [96]. However, recent research in the field of DNA, ancient DNA, and deep data analysis supported by machine learning is providing increasingly compelling evidence that, at least in the case of the South Slavs, physical migrations of people and peoples took place [92].

### Text prompts

Our experiment was based on asking generative AI models two specific scientific questions. We designed two text prompts that were precise enough to produce the desired result without follow-up prompts. The selected case study spans several academic fields, one of which can be considered a natural science (DNA analysis), one a humanities discipline (historiography), and two humanities disciplines with links to natural science (archaeology, linguistics). In the USA, archaeology is also considered a social science in certain contexts. The two text prompts were:

* Q1: What is scientific explanation for migration of South Slavs in Early Middle Ages? Write 500 words using formal language and provide references where possible.
* Q2: What is scientific explanation for the settlement of Alpine Slavs in Early Middle Ages? Write 500 words using formal language and provide references where possible.

_Q1_. The first prompt is a complex scientific question on the subject of Early Medieval studies. Discussing it requires knowledge of archaeology, historiography, linguistics, and DNA studies. However, the topic is relatively broad. Spatially, it covers an entire European region, the Balkans. Its scientific background, the migration of the Slavs, is relevant to over 200 million modern Europeans. In short, although it is not one of the foremost topics in the humanities or even for Early Medieval scholars, there are numerous researchers working on this topic, and dozens of relevant scientific papers are published every year.

_Q2_. At first glance, the second prompt is almost exactly the same as the first, except for the target group, the Alpine Slavs. However, the added complexity of this prompt comes from the fact that it addresses a very narrow and specific topic. In fact, the only scholarly content on this topic is either more than half a century old, e.g., [97], and not available online, or it comes from a 2022 paper [92], which is too recent to be included in the datasets used for training the ChatGPTs. However, the key term "Alpine Slavs" is very specific. In response to the search term "settlement of the Alpine Slavs", the search engines Bing, Google and DuckDuckGo, as well as Google Scholar, return the mentioned article as the top hit after Wikipedia or the Encyclopaedia Britannica. We therefore expected ChatGPT to respond well to Q1 but to have more problems with Q2.
On the other hand, AI chatbots with access to current online content (Bing Chatbot, ChatGPT w/ Bing) were expected to generate high-quality content for Q2 by sourcing it directly from the relevant article.

Our scientific questions are therefore so-called one-shot prompts, where the user provides the AI chatbot with a single example of the desired task and then asks it to perform a similar task. It is well known that GPT LLMs are "few-shot learners" [9], i.e. they are much better at generating content when given more examples of the expected content. When using one-shot prompts, multiple refinement prompts are expected to improve results, e.g., [98]. However, few-shot prompts were not suitable for our testing purpose because they did not mimic a scientific question, and a series of prompts would reduce the value of a direct comparison between different AI chatbots and introduce subjectivity. Therefore, our one-shot prompts were optimised for comparison rather than for generating the best possible content.

### Tagging and analysis

There are several existing studies similar to ours, but they refer to other fields of science. Regardless, a brief overview of the methods used is in order. Altmae and colleagues [51], for example, provided prompts and content generated by ChatGPT and then discussed the quality of the content. Petiska [56] focused only on references and analysed factors such as the number of citations, the date of publication, and the journal in which the paper was published. Majovsky and colleagues [57] posed questions and prompts to the model and refined them iteratively to produce a complete article, which was then reviewed by relevant experts for accuracy and coherence. Buholayka and colleagues [59] tasked ChatGPT with writing a case report based on a draft report and evaluated its performance by comparing it to a case report written by human experts.

Our approach is similar to that of Majovsky and colleagues [57], but there are three significant differences. First, we did not use iterative refinement, to ensure comparability between different AI chatbots. Second, we did not generate a complete paper. Third, our review was both qualitative and quantitative, not just qualitative. This was achieved by tagging the content. The aim was to provide what is, to our knowledge, the first quantitative comparison of different AI chatbots on the subject of scientific writing. The content generated by each of the tested AI chatbots was tagged for quantitative accuracy and qualitative precision.

_Quantitative accuracy_ describes how accurate the content is in the opinion of the human experts (the authors). It was gradated into five classes:

* Correct: Factually correct and on par with the content created by human experts.
* Inadequate: Factually correct but falls short of the content created by human experts.
* Unverifiable: The statement cannot be verified or there is no expert consensus.
* w/ Errors: Mostly factually correct, but with important errors that change the meaning.
* Incorrect: Factually incorrect.

Quantitative accuracy is the measurement most commonly applied to AI-generated content, providing a quantifiable index of how trustworthy the tested AI chatbot is for the task at hand. From the perspective of academia, it can be understood as similar to the grading of a student's work. In this case study, the answers were graded as would be the work of a senior undergraduate student attending a class on Early Medieval Archaeology.
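As a minimal sketch of how such tagged content can be quantified (the class names follow the five-point scale above, while the individual tags and their counts are invented purely for illustration):

```python
from collections import Counter

CLASSES = ["Correct", "Inadequate", "Unverifiable", "w/ Errors", "Incorrect"]

def class_shares(tags):
    """Percentage of tagged statements falling into each accuracy class."""
    counts = Counter(tags)
    return {c: 100 * counts[c] / len(tags) for c in CLASSES}

# Ten hypothetical tags for one AI-generated answer.
tags = ["Correct"] * 5 + ["Inadequate"] * 2 + ["Unverifiable"] + ["Incorrect"] * 2
shares = class_shares(tags)

# Accuracy score, as defined later in this section: Correct% - (2 x Incorrect%).
score = shares["Correct"] - 2 * shares["Incorrect"]
print(shares["Correct"], shares["Incorrect"], score)  # 50.0 20.0 10.0
```

On this invented tagging, the answer would receive a barely positive, i.e. passing, accuracy score.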
_Qualitative precision_ describes how "good" the content is in the opinion of a human expert; in other words, how it compares to a human-generated response in the context of scientific writing. It was gradated into four classes:

* Original scientific contribution.
* Derivative scientific contribution.
* Generic content not directly related to the question.
* Incorrect: Factually incorrect, containing errors that change the meaning, or disputed (the last three classes of the above quantitative tagging combined).

Qualitative precision, as we have defined it, is specific to testing AI-generated content for the purpose of scientific writing and, to our knowledge, has not yet been used. The reason for the insufficient development of such indices is mainly that the current generation of AI chatbots is not expected to generate original scientific content. However, in the near future AGI will be expected to produce original scientific content. The qualitative precision index was therefore developed with an eye to the future, as a measure of how close the tested AI chatbots are to AGI. From an academic perspective, qualitative precision can be understood in a similar way to peer review of a scientific paper. As with any peer review, e.g., [99], it is a combination of objective and subjective evaluation.

It should be mentioned in passing that in the humanities an article usually consists of both derivative and original content. Typically, the introduction and method are predominantly derivative, while the results, discussion, and conclusion are predominantly original. The ratio of derivative to original content varies widely, depending on the discipline, topic, type of article, etc. Thus, the expected "perfect score" is not 100%, but in the order of 50+% original scientific contribution. To establish the baseline for our case study, we tagged the responses generated by human experts (see the Results section).

Both quantitative accuracy and qualitative precision tagging were performed by the two co-authors. To mimic the standard process used in grading students or reviewing scholarly articles, we each made our own assessment and then consulted to arrive at a unanimous decision. The results are shown in the appendices (quantitative accuracy: Appendix A; qualitative precision: Appendix B). Both co-authors have experience with both tasks in a professional capacity, and both are experts on the topic, e.g., [92; 100; 101].

The tagged content was quantified, and the results are discussed in the next section. Given the small amount of data generated by tagging, the observational method of analysis was sufficient. To sort the different AI chatbots according to the result, we calculated the accuracy score using the following formula: Correct% - (2 × Incorrect%). The higher the score, the better. Students would need a positive score for a passing grade. As the amount of tagging data increases in future projects, more sophisticated statistical methods will be used.

### Limitations of the method

There are two limitations to our method. First, the case study is limited to a single (interdisciplinary) field in the humanities and cannot address the differences between, for example, philosophy and geography. Second, only a limited number of human experts were involved. For better results, we plan to extend this study to a series of field-specific studies and involve more human experts. However, in the current structure of public science, it takes years to accomplish such an organisational feat.
Therefore, this article can be understood as an interim measure taken in response to the incredible speed at which AI chatbots are developing.

## 3 Results

### Quantitative accuracy: AI chatbots taking an undergraduate test

The quantitative accuracy tagging was intended to objectively determine how correct the answers generated by the AI chatbots were (Fig. 4; Appendix A). The highest accuracy score was achieved by ChatGPT-4, which also generated the highest percentage of correct content. On average, about half of the content provided was correct, and about one fifth was incorrect, contained errors, or was unverifiable. However, as expected (see section 2.3), all of the incorrect content belonged to Q2, for which it could not source the relevant content from the 2022 article. Considering the complexity of the questions, the results were impressive, but far below what would be expected of, for example, a senior undergraduate student.

Figure 4: Quantitative test results, generalized accuracy score above and detailed quantitative accuracy below.

The performance of ChatGPT-3.5 was in line with expectations. On average, it produced approximately the same amount of correct content as ChatGPT-4, but almost 10% more incorrect content. The latter is the expected result of continuous reinforcement learning from human feedback [75], but the former is somewhat underwhelming given the 10-fold increase in the number of parameters in the model.

The Bing Chatbot generated only 28% and 13% incorrect or unverifiable content for Q1 and Q2 respectively. However, the percentage of correct content was also one of the lowest, at 22% and 18%. The reason for this is that the content was almost exclusively derived from the corresponding Wikipedia pages. The Wikipedia source pages were correctly listed under references, but were systematically misquoted in the text. All references except Wikipedia were secondary, i.e. the generated content copied a selection of references listed on the Wikipedia pages without actually consulting them (many of them are copyrighted material not available on the public web). The article directly referring to Q2 [92] was neither used as a source for the content nor referenced, although, as mentioned above, it is returned by the Bing search engine as the highest-ranking non-Wikipedia result. The content generated by the Bing Chatbot thus corresponded well to the chosen settings of a professional (i.e., closely following sources) blog (i.e., formatted as free-flowing text), but fell short of simulating an essay by, for example, a senior undergraduate student.

ChatGPT-4 w/ Bing performed significantly worse than the Bing Chatbot and also worse than ChatGPT-4. In addition, it was unable to source the content for Q2 from the relevant scientific article. At the time of the test, this plugin was not yet working as intended.

Bard was far less stable and reliable than the ChatGPTs. It generated a comparable percentage of correct content, but almost twice as much incorrect content.

ScholarAI, Claude 2, and Aria performed at a notably lower level. ScholarAI was obviously not yet ready for production use. It systematically generated incorrect references and was unable to source relevant content. It should be mentioned, though, that online research databases often perform better in scientific fields other than the humanities, such as medicine, chemistry and the natural sciences in general.
There are several reasons for this, including the fact that the most prestigious form of dissemination in the humanities is books, which are not indexed in (as many) scientific databases and are often not available in open access format.

The poor result was surprising for Aria, because it uses a GPT LLM from OpenAI. We can only assume that it uses a model older than GPT-3.5. Also, unlike the Bing Chatbot and ChatGPT w/ Bing, it did not consult online resources to generate the content, but rather linked the generated content to online resources. Claude 2's results are comparable to Aria's. We can only assume that it uses a scaled-down LaMDA LLM tuned not to outperform Bard.

In summary, we can say that GPT-4, the Bing Chatbot, and ChatGPT w/ Bing were close to the level of, for example, an undergraduate student's initial research steps for a term paper, which start with looking up general sources like Wikipedia. GPT-3.5 and Bard were a good substitute for searching the internet with general search engines. They were at a level that could be described as a layman looking into a new topic. ScholarAI, Aria, and Claude 2 were not yet up to the task of answering complex humanities questions.

### Qualitative precision: AI chatbots' take on original scientific contribution

The main focus of our article was on whether the tested AI chatbots are able to generate an original scientific contribution. The short and expected answer is no. A more detailed answer can be found below (Fig. 5; Appendix B).

As mentioned earlier, human-generated scientific articles in the humanities are typically a combination of derivative and original scientific contributions. In our case study, the human-generated content included 1/2 original scientific contribution for Q1 and 3/4 for Q2. The AI chatbots did not come anywhere near this level. The only discernible original scientific contribution, at 11%, was generated by ChatGPT. ChatGPT-4 aptly inferred in Q1 that the migration of the South Slavs was not a singular event (Appendix B: L. 91-93), and its introductory paragraph in Q2 was extremely apt, profound, and on the cutting edge of science (Appendix B: L. 478-483). Similarly, ChatGPT-3.5 summarised the settlement of the Alpine Slavs very astutely, if repetitively (Appendix B: L. 458-461 and 646-467).

Claude 2 correctly pointed out that the fact that Christian missionaries had to preach in Slavic languages proves the demographic dominance of the Slavs. This is an established historical fact, but one not commonly referred to in the context of migration, and it was therefore tagged as an original scientific contribution.

ScholarAI generated what at first sight appeared to be very exciting original scientific content. It established a direct link between the process of settlement of the Alpine Slavs and their cultural practices and beliefs (Appendix B: L. 578-580 and 585-588). The discussion of beliefs in the context of the migrations is far from the norm and, to our knowledge, has never been brought forward for the migrations of the Alpine Slavs. However, ScholarAI's argumentation was flawed, because it was based on irrelevant knowledge pertaining to the Baltic Slavs [102], dwelling about 1000 km northeast of the Alpine Slavs. Interestingly, the same hypothesis could have been argued with another freely available scientific text [103], but this is a book rather than an article and is therefore not in the ScholarAI database.
The other AI chatbots did not generate original scientific contributions. In conclusion, ChatGPT-4 was once again the best among the AI chatbots tested, but not on the same scale as the human-generated content (Fig. 5).

Figure 5: Qualitative test results, original scientific contribution above and detailed precision below (* false argumentation).

### Reasoning errors, hallucinations, biases

The most commonly cited shortcomings of AI chatbots are reasoning errors, hallucinations, and biases, e.g., [9; 13]. The terms themselves are not the best choice, because they inappropriately anthropomorphize AI chatbots. However, they are widely used and we have used them for clarity. In the quantitative analysis above, these shortcomings were interchangeably tagged as 'incorrect', 'with errors', or 'unverifiable' (Appendix A). Here we address them qualitatively, on a case-by-case basis.

Reasoning errors, also termed lack of on-the-fly reasoning or lack of critical thinking, are the kind of incorrect content in causal statements where cause and effect do not match. Critical thinking is one of the most important qualities for humanities scholars and knowledge workers in general. However, AI chatbots based on LLMs are not designed for this task. The most obvious example of a reasoning error in our case study was the content generated by ChatGPT-4 and Bard, which causally linked the migration of the Slavs into the Alps to a period of climate cooling (Appendix A: L. 512-515 and 703-704). Similarly, ChatGPT-3.5 linked the settlement of the Alpine areas to "fertile lands" (Appendix A: L. 456-459). For most Europeans and most people with formal education worldwide, the Alps are synonymous with mountains and hence with a cold climate and harsh agricultural conditions. Most people would therefore reason that a cooling climate and the search for fertile land would not expedite the migration into the Alps, but rather impede it.

Another example of a reasoning error was that almost all tested AI chatbots listed the decline of the (Western) Roman Empire as one of the attractors for the migration of the South Slavs to the Balkans (Western Roman Empire: Appendix A: L. 116-117, 135-136, 144-145, 278-281, 322-323, 454-455, 683-684, 736-738; Roman Empire: Appendix A: L. 124-125, 506-507, 516-517, 548-550, 584-585, 593-595). However, we learn in high school history classes that the fall of the (Western) Roman Empire preceded the migration of the South Slavs by at least a century. In fact, the Byzantine Empire was the declining superpower that created the power vacuum for the immigration of the South Slavs to the Balkans. The fact that both LaMDA (Bard) and GPT-4 (ChatGPT-4) generated almost identical incorrect content suggests that such behaviour is inherent in the current generation of GPT LLMs.

The underlying issue regarding the lack of critical thinking was that none of the tested AI chatbots made any attempt to critically compare different sources. For example, the most important component of a human-generated response to Q1 was: "Currently, there are three main hypotheses..." (Appendix A: L. 394), which continued by comparing several different sources. No such attempt was detected in the content generated by the AI chatbots. Anecdotally, the majority of randomly selected human users were able to distinguish the critical thinking of the human expert from the content generated by ChatGPT based solely on the 24-character snippet "There are 3 hypotheses..."
without further context (Fig. 6). Critical comparison of different sources is typical and vital not just in any kind of scientific reasoning, but also in everyday life. The one-sided approach of the tested AI chatbots amplifies "the loudest voice" (the highest-ranking search engine result), which is not only bad science but also a grave danger for balanced news reporting, democracy, minority rights, etc.

Figure 6: Twitter (now X) poll asking human users to differentiate between ChatGPT and human-generated content with almost no context. Most respondents answered correctly.

Hallucinations or confabulations of AI chatbots are confident responses by an AI that are not justified by its training data. This is not typical of AI systems in general, but is relatively common in LLMs, as the pre-training is unsupervised [104]. The most obvious hallucinations in our case study were invented references (ChatGPT-4, Appendix C: L. 36; ChatGPT-3.5, Appendix A: L. 495-498; ScholarAI, Appendix A: L. 189-197). Similarly, attempts at inline citations by Bing (Appendix A: L. 257) and ChatGPT-4 w/ Bing (Appendix A: L. 575-581) were largely confabulations. It would appear that the inability to provide correct inline citations is a known problem, as ChatGPT-4 w/ Bing generated a warning to this effect (Appendix A: L. 156-157). Another very clear example of hallucination was the Late Antique Little Ice Age phenomenon. ChatGPT-4 dated it "between 300 and 700 AD" (Appendix A: L. 512). The correct dates are 536 to about 660 AD, which is clearly intelligible to any human by consulting the title of the reference that ChatGPT-4 correctly provides: "... Late Antique Little Ice Age from 536 to around 660 AD" (Appendix A: L. 532-535). The underlying issue, and a key challenge for current technology, is that "AI chatbots do not know what they do not know" and may very confidently invent facts [46] rather than formulate the sentence "I don't know".

_Biases_ are often exhibited by AI chatbots. They are based on the training data, but according to recent research, they can be amplified beyond existing perceptions in society. Biases generated by AI chatbots can therefore be informative about the underlying data, but they can also be misleading if the AI-generated content is used uncritically. The most researched biases to date are those related to gender, race, ethnicity, and disability status, e.g., [29; 49; 104; 105; 106]. In our test we detected three different types of biases: language bias, neo-colonial bias, and citation bias.

First, language bias. Although there is far more relevant scholarly content written in Balkan languages than in English, 92% of the references generated by the AI chatbots in our test referred to English and none to Balkan-language publications (Fig. 7). This can only be partially explained by the fact that the prompts were in English. Namely, three (8%) German references prove that English was not the only criterion for selection. When question Q2 was asked with a prompt in Slovenian, two references were again in English and the third, in Slovenian, was a hallucination (Appendix C: L. 36). The detected language bias is most likely due in large part to the language bias of online search engine ranking algorithms, which favour English publications [107]. This bias seems to be a wasted opportunity, because all tested AI chatbots "understand" many languages, e.g., [13].

Second, neo-colonial bias. 88% of the references are by authors from the global West.
Only a minority (12%) belong to English translations of texts originally written in Slavic languages [102; 108; 109], although there are several other (more) relevant translations, e.g., [88; 110; 111; 112; 113; 114; 115; 116; 117]. This reflects a scholarly hierarchy created by colonialism (until the 1910s, the Balkans were largely divided between the Austro-Hungarian and Ottoman Empires), sometimes referred to as a neo-colonial pattern in the global network of science, in which the intellectual dominance of the global West is growing [118; 119]. To our knowledge, the neo-colonial bias in the study of the medieval Slavs has not yet been explicitly analysed or even discovered, as it has never been revealed as clearly as through the use of AI chatbots in this case study.

Third, citation bias. 75% of the references are from before 2005, and the oldest was originally published in 1895 [108]. This shows a very clear bias against new and up-to-date publications. For example, by far the most referenced publication in our case study is Curta [85]. While this is still a seminal work, it is outdated and has often been criticised, e.g., [91], and the critiques have been responded to [86]. Therefore, in a modern scientific text created by a human expert, the reference to Curta is always followed by either its critique or an up-to-date response to that critique. In AI-generated content, however, Curta is always referenced as the primary source. This bias is in line with the growing trend to cite old documents, caused at least in part by the "first page results syndrome" combined with the fact that all search engines favour the most cited documents [120; 121]. The ScholarAI plugin, for example, transparently discloses that it ranks references solely by the number of citations (Appendix A: L. 622, footnote 25). These findings are consistent with a recent study looking at references generated by ChatGPT-3.5. It revealed that ChatGPT-3.5 tends to reference highly cited publications, shows a preference for older publications, and refers predominantly to reputable journals. The study concluded that ChatGPT-3.5 appears to rely exclusively on Google Scholar as a source of scholarly references [56].

All three of the listed biases are abundant in the training material, which is of course mostly the public web. In other words, uncritical online research by a human using one of the major search engines, Google Scholar, or Wikipedia pages would yield comparable results, with language bias, citation bias, and neo-colonial bias. However, proper research by a human expert(s) would avoid citation bias and at least reduce language and neo-colonial bias.

Figure 7: Language of references generated by AI chatbots to an English-language prompt.

### Race for parameters

It was not the intention of this article to gain insights into the efficiency of LLMs, but as a side note, the current trend of upsizing LLMs, sometimes referred to as an AI Arms Race, can be addressed. Currently, numerous well-funded startups, including Anthropic, AI21, Cohere, and Character.AI, are putting enormous resources into developing ever larger algorithms, and the number of LLM parameters exceeds the exponential growth trend line (Fig. 8). However, it is expected that the returns on scaling up the model size will diminish [13]. In our data, we observed the impact of the exponential growth of LLMs on the content generated. ChatGPT-4 is approximately 10 times larger than ChatGPT-3.5.
LaMDA has a similar number of parameters to ChatGPT-3.5, 137 and 175 billion respectively, but the AI chatbot tested, Bard, only uses the "lightweight" version of LaMDA. Assuming that the "lightweight" version uses only half the parameters, the size ratio between LaMDA (Bard), ChatGPT-3.5, and ChatGPT-4 is roughly 1:2:20. The up to 10% improvement in the generated content of ChatGPT-4 compared to ChatGPT-3.5 is not negligible, but for many use cases it might not be noticeable. Given the tenfold increase in the number of parameters, this is not particularly impressive, especially since the 10% improvement is not solely due to the new model, but also to continuous reinforcement learning from human feedback. Nevertheless, the improvement of ChatGPT-4 over Bard in reducing incorrect content is significant. One can surmise that the twenty-fold increase in parameters from Bard to ChatGPT-4 is significant enough to be noticeable in daily use, while the ten-fold increase from ChatGPT-3.5 to ChatGPT-4 is mostly only observable in a controlled test environment.

Given that improvements are diminishing and that continuous exponential growth of LLMs is physically impossible with current technology, we can only conclude that the growth of LLMs has reached a plateau. This observation is consistent with the sparse information coming out of the industry: "...we're at the end of the era where it's going to be these, like, giant, giant models... We'll make them better in other ways." (Sam Altman, CEO of OpenAI, at an event at MIT on 14 April 2023).

Figure 8: The race for parameters. The increase in the number of parameters (red dotted line) exceeds the exponential trend (orange line); the in-context learning as detected by our test (blue columns; see section 3.1) only improves with a linear trend (blue line; adapted after [122] and updated with sources cited in section 2.1).

## 4 Discussion: Data to Knowledge

The results of our analysis show that ChatGPT, while currently the most advanced AI chatbot, is not capable of producing original scientific content. This is a zero-shot learning problem, but it is also much more than that. Namely, LLM-based AI chatbots are not designed to generate content in the same way that human researchers produce new knowledge.

A typical process for producing an original scientific contribution, which was also used in our case study, is an intricate one that is perhaps best explained using Ackoff's Data-Information-Knowledge-Wisdom (DIKW) hierarchy. According to this hierarchy, data is the raw input, simple facts without context; information is data that has been given context and is meaningful in some way; knowledge is the understanding and interpretation of that information gained through experience or learning; and wisdom, the highest level of the pyramid, is the ability to apply knowledge judiciously and appropriately in different contexts, understand the broader implications, and make insightful decisions based on accumulated knowledge and experience [123]. In our case study, the data was the documentation of the excavations of 1106 archaeological sites. The information was the summaries of the excavation reports and scientific publications on these 1106 sites, curated in the structured database Zbiva, available on the public web since 2016 [124]. The knowledge was the scholarly articles discussing and/or analysing the migration of the Alpine Slavs, e.g., [97; 114; 115].
Wisdom is what happens (rarely) after the publication of the scientific articles and after the generation of AI chatbot content, and does not concern us here. Human researchers approached Q2 by first obtaining and refining the data, which was followed by analysing the information using appropriate software, formulating knowledge based on the results, and finally disseminating the knowledge in the form of an original scientific article. In the real world, archaeological practice (and most humanities research) is messy, and this process is therefore fuzzy and recursive. It takes the form of a hermeneutic spiral [125], which from the perspective of computational algorithms consists of loops between data, information and knowledge. These loops involve solving computationally irreducible processes that cannot be handled by LLM-based AI chatbots alone (Fig. 1: I). In other words, generating original scientific content requires not only access to curated data/information, but also the ability to analyse it.

LLMs, on the other hand, are pre-trained on existing knowledge (texts, books) and are only able to recombine it in new permutations. Occasionally this can lead to modest original scientific content. For instance, an AI chatbot could be used to summarise what a historical text says verbatim about a certain subject, but not to interpret it. Therefore, this is a limited and finite avenue to original scientific contributions. Regardless of the future improvement of LLMs, LLM-based AI chatbots will not be able to replicate the workflow of human researchers, as they are only trained on existing knowledge. Purpose-built LLM-based software, on the other hand, could handle such a workflow: searching for data/information, performing the relevant analysis, generating textual and graphical information, and summarising it into new knowledge (Fig. 1: K, L, M; G, J). Such LLM-based software would have several qualities of an AGI and is in fact already feasible with existing technology, for example by combining Prometheus, relevant cloud-based software connected through a ChatGPT API, and ChatGPT-4. In a nutshell, LLM-based AI chatbots are not, and probably never will be, able to generate new knowledge from data in the same way as human researchers (in the humanities), but appropriate LLM-powered software might be.

## 5 Conclusion: fluent but not factual

Most commentators on generative AI, including the authors of this article, agree that the current generation of AI chatbots represents AI's inflection point. It is very likely that historiography will record ChatGPT-3 as the eureka moment that ushered in a new era for humanity. What this new era will bring is currently unknown, but it seems that it will, for better or worse and probably both, change the world. To maximize the positive and to mitigate the negative as much as possible, making AI safe and fair is necessary. And a significant part of making AI safe is testing.

The aim of this article was to test the current AI chatbots from a human(ities) perspective, specifically their scientific writing abilities. We compared six AI chatbots: ChatGPT-3.5, ChatGPT-4, Bard, Bing Chatbot, Aria, and Claude 2. In accordance with expectations, ChatGPT-4 was the most capable among the AI chatbots tested. In our quantitative test, we used a method similar to grading undergraduate students. The Bing Chatbot and ChatGPT-4 were nearing the passing grade, and ChatGPT-3.5 and Bard were not far behind. Claude 2 and Aria produced much weaker results. The ChatGPT-4 plugins were not yet up to the task.
In our qualitative test, we used a method similar to peer reviewing a scientific article. ChatGPT-4 was again the best performer, but it did not generate any notable original scientific content. Additional shortcomings of the AI-generated content that we found in our test include reasoning errors, hallucinations, and biases. Reasoning errors refer to the chatbots' inability to critically evaluate and link cause-and-effect relationships, as evidenced by several historical inaccuracies regarding the migration patterns and historical periods of the Slavs. Hallucinations denote confident but unsubstantiated claims by the AI, such as invented references and inaccurate dates. Our test also reveals significant biases in the content generated by the AI. These biases manifest as language bias, favouring English sources over other relevant languages; neo-colonial bias, displaying a preference for Western authors; and citation bias, skewing towards older and more highly cited publications. These findings highlight that, despite their technological prowess, AI chatbots remain reliant on their training data, echoing or even amplifying existing biases and knowledge gaps. Because they veer towards past data, they are likely to be too conservative in their recommendations. Since the listed deficiencies are relatively inconspicuous compared to, for example, gender or racial biases, it is unlikely that they will be remedied in the foreseeable future by resource-intensive reinforcement learning from human feedback. These biases are among the key concerns with the use of AI chatbots in scientific writing, as they are less likely to be highlighted in the review processes.

Our results also point to possible future trends in the development of AI chatbots. The large discrepancy between almost passing an undergraduate exam and not producing any notable scientific contribution may seem surprising at first glance. On closer inspection, however, this was to be expected. "Doing science", i.e. making an original scientific contribution, is much more complex than just doing very well in exams. It is based on proactively acquiring and analysing data and information to generate new knowledge, whereas "passing an exam" is based on accumulating existing knowledge and passing it on upon request. "Passing an exam" will further improve when AI chatbots are given access to curated data. An AI chatbot with access to selected datasets would be a typical downstream task developed around an existing LLM. However, without access to external tools, LLM-based AI chatbots will never be suitable for "doing science". Therefore, in the near future an evolution of current LLM-based AI chatbots towards LLM-powered software capable of, among other things, "doing science" seems likely. This assertion is in line both with the fact that the growth of LLMs seems to have plateaued and with the industry's turn to other solutions.

In conclusion, we agree with previous commentators that AI chatbots are already widely used for various tasks in scientific writing due to their broad applicability. However, AI chatbots are not capable of generating a full scientific article that would make a notable scientific contribution to the humanities or, we suspect, to science in general. If left unsupervised, AI chatbots generate content that is fluent but not factual, meaning that the errors are not only many but also easily overlooked, _cf._[126].
Therefore, peer review processes need to be rapidly adapted to compensate for this, and the academic community needs to establish clear rules for the use of AI-generated content in scientific writing. The discussion about what is acceptable and what is not must be based on objective data. Articles like this one are necessary to support those decisions, and we suspect that many more will follow. As for the future beyond the immediately foreseeable, when LLM-powered software and/or AGI will be able to generate original scientific contributions, we agree that questions about explainable AI are likely to come to the fore. Understanding our world, a fundamental aspiration of the humanities, will only be partially achieved through the use of black-box AI. Since the humanities, just like justice, for example, are as much about process as outcome, humanities scholars are unlikely to settle for uninterpretable AI-generated predictions. We will want human-interpretable understanding, which is likely to be the remaining task of human researchers in the humanities in the future, _cf._[127].

Appendix A, Appendix B, and Appendix C are available on the open access repository Zenodo under the CC BY 4.0 license: [https://doi.org/10.5281/zenodo.8345088](https://doi.org/10.5281/zenodo.8345088).

**Author Contributions:** Both authors, E.L. and B.S., contributed equally to the article. Conceptualization, E.L. and B.S.; methodology, E.L. and B.S.; validation, E.L. and B.S.; formal analysis, E.L. and B.S.; writing - original draft preparation, E.L. and B.S.; writing - review and editing, E.L. and B.S.; visualization, E.L. and B.S.; project administration, E.L. and B.S.; funding acquisition, E.L. and B.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was part of the AI4Europe project that has received funding from the European Union's Horizon Europe research and innovation programme under Grant Agreement No. 101070000.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** All data used in and produced by the research are available in the appendices.

**Acknowledgments:** The authors thank Dr. Zoran Cuckovic for introducing them to ChatGPT in December 2022.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
2309.11367
Generalized van der Waerden Game on an Infinite Board
Consider the following Maker-Breaker game. Fix a finite subset $S\subset\mathbb{N}$ of the naturals. The players Maker and Breaker take turns choosing previously unclaimed natural numbers. Maker wins by eventually building a copy $aS+b$ of $S$, where $a\in\mathbb{N}\setminus\{0\}$ and $b\in\mathbb{Z}$. This is a generalization of a game analyzed by Beck. We show that Maker can win in $|S|$ moves if $|S|\leq 3$. When $|S|=4$, we show that Maker can always win in at most $5$ moves, and describe all $S$ such that Maker can win in $4$ moves. If $|S|\geq 5$, Maker has no winning strategy in $|S|$ moves.
Hannah Alpert, Liam Barham, Brian Freidin, Ian Tan, Gabriel Weiner
2023-09-20T14:50:17Z
http://arxiv.org/abs/2309.11367v1
# Generalized van der Waerden game on an infinite board

###### Abstract.

Consider the following Maker-Breaker game. Fix a finite subset \(S\subset\mathbb{N}\) of the naturals. The players Maker and Breaker take turns choosing previously unclaimed natural numbers. Maker wins by eventually building a copy \(aS+b\) of \(S\), where \(a\in\mathbb{N}\setminus\{0\}\) and \(b\in\mathbb{Z}\). This is a generalization of a game analyzed by Beck [1]. We show that Maker can win in \(|S|\) moves if \(|S|\leq 3\). When \(|S|=4\), we show that Maker can always win in at most \(5\) moves, and describe all \(S\) such that Maker can win in \(4\) moves. If \(|S|\geq 5\), Maker has no winning strategy in \(|S|\) moves.

**Keywords:** combinatorial games, van der Waerden, Maker-Breaker, optimal strategies

## 1. Introduction

Let \(S\subset\mathbb{N}\) be a fixed finite set, and consider the following game between players Maker and Breaker. The players alternate picking previously unselected natural numbers, with Maker making the first selection. Maker wins when, for some \(a\in\mathbb{N}\setminus\{0\}\) and \(b\in\mathbb{Z}\), the set of choices of Maker contains \(aS+b\). By a well-known theorem of van der Waerden [7], for any given \(S\), Maker always wins in a finite number of moves (see Theorem B in [2]). This is a generalization of a game considered by Beck in [1], where the players pick elements of the finite set \([m]\) and Maker wins by selecting all elements of an \(n\)-term arithmetic progression. While Beck was concerned with finding bounds on the minimum \(m\) such that Maker can win, we focus instead on the minimum number of selections needed for Maker to win. We note also a generalization of van der Waerden's theorem for subsets of size \(3\) in [3]. The authors provide an upper bound for the minimum \(m\) such that any \(2\)-coloring of \([m]\) contains \(aS+b\), restricting to the case \(|S|=3\). The problem of finding fast winning strategies for various Maker-Breaker games was previously explored in [6, 4].

In Section 2, we explain a method for finding winning strategies by constructing trees with certain properties, concluding with a construction that yields a win for Maker in \(|S|\) moves if \(|S|\leq 3\). Then in Section 3, we introduce the notion of "symmetric" sets, which we use to show that Maker has no winning strategy in \(|S|\) moves if \(|S|\geq 5\). Section 4 solves the case where \(|S|=4\); we show that Maker can win in \(5\) moves or fewer, and characterize \(S\) such that Maker can win in \(4\) moves. We end the paper with a brief discussion of some further questions.

## 2. Winning strategies

We say that Maker has an _\(N\)-move strategy_ if there exists a strategy such that Maker always wins with \(N\) selections or fewer. Although we always assume that \(S\subset\mathbb{N}\), it will be convenient to talk about subsets of \(\mathbb{Q}\). Given \(R,R^{\prime}\subset\mathbb{Q}\), we say that \(R^{\prime}\) is a _copy_ of \(R\) if there exist \(a\in\mathbb{Q}^{+}\) and \(b\in\mathbb{Q}\) such that \(R^{\prime}=aR+b\). In other words, there exists an increasing affine function \(f:\mathbb{Q}\to\mathbb{Q}\) such that \(R^{\prime}=f(R)\).

**Proposition 1**.: _Suppose that there exists a directed tree \(T=(V,E)\), in which each leaf is at most \(N-1\) edges from the root, with the following properties:_

1. _The vertices are distinct rational numbers._
2. _The set of rationals along any branch contains a copy of_ \(S\)_._
3.
_Every vertex which is neither a leaf nor the root has out-degree at least 2._

_Then Maker has an \(N\)-move strategy._

Proof.: Suppose such a tree exists, and choose one so that the root is 0. There exists some constant \(c>0\) such that \(V\subseteq[-c,c]\). We can pick \(k\in\mathbb{Q}\) large enough so that \(k(V\setminus\{0\})\cap[-c,c]=\emptyset\). By appending \(k(V\setminus\{0\})\) to 0, we get a tree satisfying (1) and (2) where the root now has out-degree 2. Thus it suffices to prove the case where every non-leaf vertex has out-degree at least 2. We can further assume that the vertices are distinct natural numbers, since there exists some increasing affine function \(f:\mathbb{Q}\to\mathbb{Q}\) such that \(f(V\cup kV)\subset\mathbb{N}\). Now Maker can adopt the following strategy. Maker's first selection is the root, and each subsequent selection is an out-neighbor of the previous one such that Breaker has not selected any vertices on the subtree starting from Maker's selection. Since all the vertices are distinct, on each move Breaker can pick an element from at most one subtree starting from an out-neighbor of Maker's previous move.

From now on, we will look for trees as described in the proposition, as they correspond to winning strategies for Maker.

**Example 2**.: The following tree describes a 4-move strategy when \(S=\{1,2,3,4\}\).

**Example 3**.: The following tree describes a 5-move strategy when \(S=\{0,2,3,6\}\). The subsets \(\{0,16,24,48\},\ \{-32,0,16,64\},\ \{16,48,64,112\},\ \{-6,0,3,12\},\ \{-24,0,12,48\},\ \{-6,12,21,48\}\) are copies of \(S\).

**Corollary 4**.: _When \(|S|\leq 3\), Maker has a \(|S|\)-move strategy._

Proof.: If \(|S|\) is \(1\) or \(2\) the statement is trivial. Suppose \(|S|=3\) and write \(S=\{a,a+b,a+b+c\}\). Consider the following tree.

## 3. A weak lower bound

For this section, we use the convention that \(|S|=n\), and we will list \(S=\{s_{1},\ldots,s_{n}\}\) in increasing order. The best hope for Maker is to win in \(n\) moves. A necessary condition on \(S\) for this to be possible is that after \(n-1\) moves, there are at least two different ways to complete a copy of \(S\). Otherwise, Breaker would simply claim the unique number completing a copy. This motivates the following definition.

**Definition 1**.: \(S\) is called _symmetric_ if there exists \(1\leq i<j\leq n\) such that \(S\setminus\{s_{i}\}\) is a copy of \(S\setminus\{s_{j}\}\). In this case we say that \(S\) has symmetry type \((i,j)\).

**Example 5**.: The arithmetic sequence \(\{1,2,\ldots,n\}\) is symmetric of type \((1,n)\). The set \(\{1,2,3,5\}\) is symmetric of type \((2,4)\) since \(\{1,3,5\}=2\{1,2,3\}-1\).

**Lemma 6**.: _Maker can only have an \(n\)-move strategy if \(S\) is symmetric._

Proof.: Suppose that \(S\) is not symmetric. We will describe a strategy for Breaker. Assume first that after \(n-1\) moves, Maker has constructed \(f(S\setminus\{s_{i}\})\) for some affine \(f:\mathbb{Q}\to\mathbb{Q}\). By assumption, for any affine \(g:\mathbb{Q}\to\mathbb{Q}\) and \(j\neq i\), we have \(f(S\setminus\{s_{i}\})\neq g(S\setminus\{s_{j}\})\). Thus, by playing \(f(s_{i})\) on move \(n-1\), Breaker can prevent Maker from winning on the next move. If Maker has not created a copy of \(S\setminus\{s_{i}\}\) for some \(i\leq n\), then Breaker can play their favorite legal number.

**Lemma 7**.: _A symmetric set can only have symmetry type \((1,n)\), \((2,n)\) or \((1,n-1)\)._

Proof.: Suppose that \(S\) has symmetry type \((i,j)\). If \(i>2\) (resp.
last) two elements of \(S\) are also the first (resp. last) two elements of \(S\setminus\{s_{i}\}\) and \(S\setminus\{s_{j}\}\). This is a contradiction, since an affine \(f:\mathbb{Q}\to\mathbb{Q}\) does not fix two distinct points unless \(f\) is the identity function. The only remaining case to rule out is \((i,j)=(2,n-1)\). However, in this case the first and last elements are the same in \(S\setminus\{s_{i}\}\) and \(S\setminus\{s_{j}\}\), so the same argument applies.

We later show that a set \(S\) cannot have two different symmetry types at once when \(n>3\) (if \(n=3\), then every \(S\) is symmetric of every type). First, we shall characterize these symmetric sets. For those \((1,n)\)-symmetric sets which are not arithmetic sequences, we have the following. Lemma 8 justifies the description "representative copy."

**Definition 2**.: For \(k>1\), choose the set \[C_{1}(n,k)=\{1,k,k^{2},\ldots,k^{n-1}\}\] to be a representative copy for the symmetry type \((1,n)\).

**Lemma 8**.: _Suppose \(S\) has symmetry type \((1,n)\) and is not an arithmetic sequence. Then there exists \(k>1\) such that \(C_{1}(n,k)\) is a copy of \(S\)._

Proof.: Let \(d_{i}=s_{i+1}-s_{i}\) for \(i=1,\ldots,n-1\) denote the differences. There exist \(k,l\) such that \(S\setminus\{s_{1}\}=k(S\setminus\{s_{n}\})+l\), hence \[d_{i+1}=s_{i+2}-s_{i+1}=(ks_{i+1}+l)-(ks_{i}+l)=kd_{i}.\] It follows that \(d_{i}=k^{i-1}d_{1}\) for all \(i\leq n-1\). After translating \(s_{1}\) to \(0\) and scaling \(d_{1}\) to \(1\), this shows that \[S^{\prime}=\{0,1,1+k,1+k+k^{2},\ldots,1+k+k^{2}+\cdots+k^{n-2}\}\] is a copy of \(S\). If \(S\) is not an arithmetic sequence, then \(k\neq 1\). Using the geometric series formula \(\sum_{i=0}^{m}k^{i}=\frac{k^{m+1}-1}{k-1}\) we have \[(k-1)\cdot S^{\prime}+1=\{1,k,k^{2},\ldots,k^{n-1}\}.\] If \(k>1\), then \(C_{1}(n,k)\) is a copy of \(S\). If \(k<1\), then multiply through by \(k^{-n+1}\) to show that \(C_{1}(n,\frac{1}{k})\) is a copy of \(S\).

Given \(R,R^{\prime}\subset\mathbb{Q}\), we say that \(R^{\prime}\) is a _reflection_ of \(R\) if there exist \(a\in\mathbb{Q}^{-}\) and \(b\in\mathbb{Q}\) such that \(R^{\prime}=aR+b\). This definition is similar to that of "copy," but with \(a\) being negative. Note the following facts. If we have a tree describing a winning strategy for \(S\), then by switching the sign of every vertex we have a tree describing a winning strategy for a reflection of \(S\). If \(S\) has symmetry type \((2,n)\), then a reflection of \(S\) has symmetry type \((1,n-1)\). The following definition and lemma are analogous to those for \(C_{1}(n,k)\).

**Definition 3**.: For \(k>1\) and \(n\geq 2\) define \(C_{2}(n,k)=\{0,1,k,k^{2},\ldots,k^{n-2}\}\).

**Lemma 9**.: _Suppose \(S\) has symmetry type \((2,n)\) (resp. \((1,n-1)\)). Then there exists some \(k>1\) such that \(C_{2}(n,k)\) is a copy (resp. reflection) of \(S\)._

Lemmas 8 and 9 reveal a peculiar property of symmetric sets. If \(S\) is symmetric of type \((i,j)\), then \(S\setminus\{s_{i}\}\) and \(S\setminus\{s_{j}\}\) are also symmetric. This can be checked directly from the representative copies. For example, a symmetric set \(S\) with symmetry type \((1,n)\) is a copy of \(C_{1}(n,k)=\{1,k,\ldots,k^{n-1}\}\). Then \(S\setminus\{s_{n}\}\) is a copy of \(\{1,k,\ldots,k^{n-1}\}\setminus\{k^{n-1}\}=C_{1}(n-1,k)\) and, by the definition of a symmetric set, so is \(S\setminus\{s_{1}\}\).

**Lemma 10**.: _If \(n\geq 4\) then \(S\) cannot be simultaneously symmetric of two different types._

Proof.: Suppose \(S\) is simultaneously symmetric of types \((1,n)\) and \((2,n)\).
Then \(S\setminus\{s_{1}\}\) is a copy of \(S\setminus\{s_{n}\}\) and \(S\setminus\{s_{2}\}\) is a copy of \(S\setminus\{s_{n}\}\). It follows that \(S\setminus\{s_{1}\}\) is a copy of \(S\setminus\{s_{2}\}\), i.e. \(S\) also has symmetry type \((1,2)\). But this is impossible by Lemma 7. Similarly, \(S\) cannot be simultaneously symmetric of types \((1,n)\) and \((1,n-1)\). This leaves one case to check. Suppose \(S\) is simultaneously symmetric of types \((2,n)\) and \((1,n-1)\). By Lemma 9, there exist increasing affine functions \(f,g:\mathbb{Q}\to\mathbb{Q}\) and constants \(k,l>1\) such that \[f(\{0,1,k,\ldots,k^{n-3},k^{n-2}\})=S=g(\{-l^{n-2},-l^{n-3},\ldots,-l,-1,0\}).\] Let \(h=g^{-1}\circ f\). Since increasing functions preserve order, \(h(0)=-l^{n-2}\), \(h(1)=-l^{n-3}\), \(h(k^{n-3})=-1\) and \(h(k^{n-2})=0\). If \(h(x)=ax+b\), the first equation implies \(b=-l^{n-2}\). From the other three we get the following system. \[a-l^{n-2}=-l^{n-3}\] \[ak^{n-3}-l^{n-2}=-1\] \[ak^{n-2}-l^{n-2}=0\] Solving for \(a\) and \(k\) in terms of \(l\), we obtain \(a=l^{n-2}-l^{n-3}\) and \(k=\frac{l^{n-2}}{l^{n-2}-1}\). Then we plug these values into \(ak^{n-2}-l^{n-2}=0\) and clear the denominator to get \[(l^{n-2}-l^{n-3})l^{(n-2)^{2}}-l^{n-2}(l^{n-2}-1)^{n-2}=0.\] By the rational root theorem, the only possible rational solutions are \(l=\pm 1\). But this contradicts \(l>1\).

It is interesting to note that if \(S\) is allowed to contain any real number, \(S=\{0,1,\varphi,\varphi^{2}\}\) is simultaneously symmetric of types \((2,4)\) and \((1,3)\), where \(\varphi=\frac{1+\sqrt{5}}{2}\) is the golden ratio.

We end this section with one of the main claims of this paper (Theorem 12). Its proof uses the following lemma.

**Lemma 11**.: _Suppose Maker has an \(n\)-move strategy. Then \(S\) is symmetric, i.e. a copy or a reflection of \(C_{s}(n,k)\), where \(s\in\{1,2\}\). In any game using this winning strategy, on the \((n-r)\)th move Maker builds either_

* _a copy of_ \(C_{1}(n-r,k)\)_, if_ \(S\) _has symmetry_ \((1,n)\)_,_
* _a copy of_ \(C_{2}(n-r,k)\)_, if_ \(S\) _has symmetry_ \((2,n)\)_, or_
* _a reflection of_ \(C_{2}(n-r,k)\)_, if_ \(S\) _has symmetry_ \((1,n-1)\)_._

_Hence, Maker has an \((n-r)\)-move strategy for copies of \(C_{s}(n-r,k)\)._

Proof.: The fact that \(S\) is symmetric is Lemma 6. It suffices to prove the case \(r=1\); the general statement follows by induction. Suppose Maker builds \(R\) on their \((n-1)\)th move. Then Breaker would pick \(x\) such that \(R\cup\{x\}\) is a copy of \(S\). Maker then plays \(y\neq x\) such that \(R\cup\{y\}\) is also a copy of \(S\). Thus, \(R\cup\{y\}\) is a copy of \(R\cup\{x\}\). That is, there exists an affine \(f:\mathbb{Q}\to\mathbb{Q}\) such that \[f(R)\cup\{f(x)\}=R\cup\{y\}=S^{\prime}\] for some copy \(S^{\prime}=\{s^{\prime}_{1},s^{\prime}_{2},\ldots,s^{\prime}_{n}\}\) of \(S\), indexed in increasing order. Note that \(f(x)\neq y\), otherwise \(f(R)=R\), which is impossible if \(|R|>1\). It follows that \(S^{\prime}\setminus\{y\}\) is a distinct copy of \(S^{\prime}\setminus\{f(x)\}\). This means that \(S^{\prime}\) is symmetric of type \((k,l)\) where \(f(x)=s^{\prime}_{k}\) and \(y=s^{\prime}_{l}\) (or vice versa). By Lemma 10, \(f(x)=s^{\prime}_{k}\) and \(y=s^{\prime}_{l}\). Therefore \(R=S^{\prime}\setminus\{s^{\prime}_{l}\}\). By the comments preceding Lemma 10, we are done.

**Theorem 12**.: _If \(n\geq 5\), then Maker does not have an \(n\)-move strategy._

Proof.: By Lemma 11, it suffices to prove the case \(n=5\).
Indeed, if \(n>5\), an \(n\)-move strategy would give Maker a \(5\)-move strategy for a set of size \(5\), by Lemma 11. Suppose \(S\) has symmetry type \((1,5)\) and is not an arithmetic sequence. Suppose for the sake of a contradiction that Maker has a \(5\)-move strategy. Again by Lemma 11, Maker builds \(f(\{k,k^{2},k^{3}\})\) on their third move and must build \(g(\{1,k,k^{2},k^{3}\})\) on their fourth move, where \(f,g:\mathbb{Q}\to\mathbb{Q}\) are increasing affine functions. We claim that the only way to do this is for Maker to select \(f(1)\) or \(f(k^{4})\) on their fourth move. Let \(x\in\mathbb{Q}\) be such that \[f(\{k,k^{2},k^{3}\}\cup\{x\})=g(\{1,k,k^{2},k^{3}\}).\] If \(x<k\) then \(x=1\), otherwise \(g^{-1}\circ f\) is not the identity but fixes \(k,k^{2}\), and \(k^{3}\). Similarly, if \(x>k^{3}\) then \(x=k^{4}\). If \(k<x<k^{2}\) then \(\{1,k,x,k^{2},k^{3}\}\) is symmetric of type \((1,3)\), which is not possible. Similarly, if \(k^{2}<x<k^{3}\) then \(\{k,k^{2},x,k^{3},k^{4}\}\) is symmetric of type \((3,5)\), which is impossible.

Now, if Breaker selects \(f(1)\) on their third move, then Maker is forced to select \(f(k^{4})\) on their fourth move. By an argument similar to the above, now the only way for Maker to build a copy of \(\{k,\ldots,k^{5}\}\) is to select \(f(1)\) or \(f(k^{5})\). But \(f(1)\) has already been selected, and Breaker can select \(f(k^{5})\), stopping the win. If \(S\) is an arithmetic sequence, replace \(k^{i}\) with \(i\). A similar argument holds if \(S\) has symmetry type \((2,5)\) or \((1,4)\).

## 4. Sets of size \(4\)

In the case where \(|S|=4\), calculating explicit winning strategies is feasible. We have the following results:

**Theorem 13**.: _Maker has a \(4\)-move strategy if and only if \(S\) is symmetric._

**Theorem 14**.: _Maker always has a \(5\)-move strategy._

One direction of Theorem 13 follows from Lemma 6. We now prove the other direction by writing down the trees describing Maker's strategy.

Proof.: Suppose \(S\) is \((1,4)\) symmetric. As noted in the proof of Lemma 8, we can take \(\{0,1,1+k,1+k+k^{2}\}\) for some \(k\geq 1\) to be a copy of \(S\). Then the tree below gives a winning strategy. The vertices are distinct since \[-\frac{1}{k+k^{2}}<-\frac{1}{k}<0<\frac{1}{1+k}<1<\frac{1+k+k^{2}}{1+k}<1+k<1+k+k^{2}.\] Suppose that \(S\) is \((2,4)\) or \((1,3)\) symmetric. Note that \[C_{2}(4,k)=(k-1)\cdot\{-\frac{1}{k-1},0,1,1+k\}+1.\] Then the tree below gives a winning strategy. The vertices are distinct since \[-\frac{1}{k-1}<-\frac{1}{k}<0<\frac{1}{k^{2}}<\frac{1}{k}<1<k<1+k.\]

Now we turn to the non-symmetric case and prove Theorem 14.

Proof.: Let \(S=\{0,x,x+y,x+y+z\}\). Consider the following tree: where \[f_{0}=\frac{x+y+z}{x},\qquad f_{1}=\frac{x+y+z}{x+y},\] \[f_{00}=\frac{x+y}{x},\qquad f_{10}=\frac{x}{x+y},\] \[f_{01}=\frac{-xy-y^{2}-yz}{xz},\qquad f_{11}=\frac{yz+xy+y^{2}}{xy+y^{2}+xz+yz},\] \[f_{010}=\frac{-x^{2}-2xy-y^{2}-yz-xz}{xz},\qquad f_{110}=\frac{-x^{2}-xy-xz}{xy+y^{2}+xz+yz},\] \[f_{011}=\frac{-y^{2}+xz-yz}{xz},\qquad f_{111}=\frac{2yz+xy+y^{2}+xz}{xy+y^{2}+xz+yz}.\] Note that \[S=\frac{xz}{x+y+z}\{0,f_{0},f_{01},f_{010}\}+x+y\] \[=\frac{xz}{y+z}\{1,f_{0},f_{01},f_{011}\}-\frac{xz}{y+z}+x+y\] \[=\frac{xy+y^{2}+xz+yz}{x+y+z}\{0,f_{1},f_{11},f_{110}\}+x\] \[=\frac{xy+y^{2}+xz+yz}{z}\{1,f_{1},f_{11},f_{111}\}-\frac{xy+y^{2}+yz}{z}.\] If all of the vertices of this tree are distinct, then this tree describes a strategy for Maker. If there are vertices which are not distinct, then it is still possible that the tree constructed for a reflection of \(S\) has all vertices distinct.
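The four displayed identities can be checked symbolically. Below is a minimal sketch (assuming the sympy library) verifying the first one; the other three are analogous:

```python
# A sketch verifying S = (xz/(x+y+z)) * {0, f0, f01, f010} + (x+y) for S = {0, x, x+y, x+y+z}.
from sympy import simplify, symbols

x, y, z = symbols('x y z', positive=True)

f0 = (x + y + z) / x
f01 = (-x*y - y**2 - y*z) / (x*z)
f010 = (-x**2 - 2*x*y - y**2 - y*z - x*z) / (x*z)

a, b = x*z / (x + y + z), x + y             # the increasing affine map v -> a*v + b
images = [simplify(a*v + b) for v in (0, f0, f01, f010)]
targets = [x + y, x + y + z, x, 0]          # the elements of S, in matched order

assert all(simplify(u - t) == 0 for u, t in zip(images, targets))
print("first identity verified")
```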
We will characterize these cases and show that the only case where neither of these options is available turns out to be the situation where Example 3 gives a strategy for Maker. We now determine and deal with the values \((x,y,z)\) for which these numbers are not distinct. The vertices of the tree should be thought of as \(12\) rational functions, giving \(\binom{12}{2}\) relations to consider. Notice that \(S^{\prime}=\{0,z,z+y,z+y+x\}=-S+x+y+z\) is a reflection of \(S\). This motivates the following definition. Let \(\mathbb{F}\) be the field of rational functions in \(x,y,z\) and let \(\varphi:\mathbb{F}\to\mathbb{F}\) be the involutory automorphism that swaps the variables \(x\leftrightarrow z\).

If \(f_{00}=f_{1}\), then setting \(k=f_{00}=f_{1}\) we have that \(S=\{0,x,kx,k^{2}x\}\) is symmetric of type \((2,4)\). Then Maker has a \(4\)-move strategy, and this larger tree is not needed. From the equation \[xzf_{011}=(xy+y^{2}+xz+yz)(f_{10}-f_{11})\] we have \(f_{011}=0\) if and only if \(f_{10}=f_{11}\). But \(f_{10}=f_{11}\) implies that \(\varphi(f_{00})=\varphi(f_{1})\). Thus, by the previous argument, in this case a reflection of \(S\) is symmetric of type \((2,4)\) and \(S\) is symmetric of type \((1,3)\). One checks the rest of the relations and finds that there are only \(3\) with roots \((x,y,z)\) such that all coordinates are positive, namely \[g_{1}=f_{11}-f_{011},\quad g_{2}=f_{110}-f_{01},\quad g_{3}=f_{110}-f_{011}. \tag{1}\]

If Maker has a \(5\)-move strategy for \(S^{\prime}\), then Maker has a \(5\)-move strategy for \(S\) by applying \(\varphi\) to every vertex of the tree. Therefore Maker fails to have a \(5\)-move strategy for \(S\) only if \(g_{i}\) and \(\varphi(g_{j})\) both vanish at \((x,y,z)\), for some pair of indices with \(1\leq i\leq j\leq 3\). For each pair \(i,j\) we look for points on the intersection of the curves \(g_{i}=0\) and \(\varphi(g_{j})=0\). We find only one (projective) rational solution with positive coordinates: any positive multiple of \((3,1,2)\) is a solution to \(g_{1}=\varphi(g_{2})=0\). However, if \((x,y,z)=k(3,1,2)\) for some \(k>0\) then \(S^{\prime}\) is a copy of \(\{0,2,3,6\}\). Then Example 3 shows that Maker has a \(5\)-move strategy for \(S^{\prime}\), and hence for \(S\). For more details on these computations, see the appendix.

## 5. Further questions

We showed that Maker has a \(3\)-move strategy when \(|S|=3\) and a \(5\)-move strategy when \(|S|=4\). This naturally leads to some questions:

1. Does there exist a function \(C_{n}\) such that Maker has a \(C_{n}\)-move strategy when \(|S|=n\)?
2. If the answer to the first item is "Yes," what is the best order of \(C_{n}\)?
3. If the answer to the first item is "No," what is the minimum \(n\) such that Maker is not guaranteed to win in any finite number of moves?

On the other hand, we showed that if \(|S|>4\) then Maker does not have a \(|S|\)-move strategy. It seems likely that this lower bound could be improved. In the case \(S=\{1,2,\ldots,n\}\), [1] proved that Maker has a winning strategy selecting numbers only from \([1,n^{4}\cdot 2^{n-4}-1]\). Thus, for general \(S\subset\mathbb{N}\), Maker can adopt the strategy of building a copy of \([1,m]\), where \(m\) is the least integer such that \([1,m]\) contains a copy of \(S\). This gives Maker an \((m^{4}\cdot 2^{m-5})\)-move strategy. However, this bound depends on \(m\) rather than on \(|S|\).

## 6. Appendix

Let us go over the calculations in the proof of Theorem 14 in more detail.
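The polynomial eliminations carried out below can also be reproduced mechanically. As an illustration, here is a minimal sketch (assuming the sympy library) of the Gröbner basis computation for the pair \((i,j)=(1,2)\); the basis returned may differ from the system displayed below by ordering or scalar factors:

```python
# A sketch of the elimination step for (i, j) = (1, 2): set y = 1, clear denominators,
# and compute a lexicographic Groebner basis with z > x to obtain a univariate polynomial in x.
from sympy import groebner, symbols

x, z = symbols('x z')

p1 = -x**2*z**2 + x*z**2 + 2*x*z + z**2 + x + 2*z + 1                    # g_1 with y = 1
p2 = -x**2*z**2 - x*z**3 + x**2*z + x**2 + 3*x*z + z**2 + 2*x + 2*z + 1  # phi(g_2) with y = 1

G = groebner([p1, p2], z, x, order='lex')
print(G.exprs)
```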
We can see that many pairs of vertices of the tree can never be equal by the following inequalities: \[f_{010}<f_{01}<0<1<f_{00}<f_{0},\quad f_{01}<f_{011}<1,\] \[f_{110}<0<f_{11}<1<f_{111}<f_{1},\quad 0<f_{10}<1.\] As explained previously, if \(f_{00}=f_{1}\) or \(f_{011}=0\) or \(f_{10}=f_{11}\), then \(S\) is symmetric. Most of the remaining equations \(p=q\), where \(p,q\) are distinct vertices of the tree, can be seen to have no solutions because the coefficients of the numerator of \(p-q\) all have the same sign. This leaves the relations \(g_{1},g_{2},g_{3}\) defined in (1). For each tuple \((i,j)\) such that \(1\leq i\leq j\leq 3\) we have the system of equations \(g_{i}=0\) and \(\varphi(g_{j})=0\). We want to find, for each \((i,j)\), the positive rational solutions \((x,y,z)\), i.e., those in which every coordinate is positive and rational.

Consider the case \((i,j)=(2,2)\). This corresponds to the system \(g_{2}=\varphi(g_{2})=0\). Notice that these functions are homogeneous. That is, if \(f=g_{2}\) or \(f=\varphi(g_{2})\) then \(f(x,y,z)=f(kx,ky,kz)\) for any constant \(k\). In particular, if \((x,y,z)\) is a root then so is any multiple \(k(x,y,z)\). Set \(k=1/y\), or equivalently set \(y=1\). Clearing denominators, we get the system of equations \(h=\varphi(h)=0\) in \(x\) and \(z\), where \[h=-x^{3}z-x^{2}z^{2}+xz^{2}+x^{2}+3xz+z^{2}+2x+2z+1.\] Now we notice that \(xz(1+x+z)(x-z)=\varphi(h)-h=0\). Then since \((x,z)\) lies in the first quadrant of the \(xz\)-plane, we must have \(x-z=0\). Substituting \(z=x\) into \(h\), we get \[-2x^{4}+x^{3}+5x^{2}+4x+1=0.\] One checks that neither \(1\) nor \(\frac{1}{2}\) is a root of this polynomial; since by the rational root theorem these are the only positive candidates, the polynomial has no positive rational roots.

Next consider \((i,j)=(1,2)\), corresponding to the system \(g_{1}=\varphi(g_{2})=0\). As before, we set \(y=1\) and clear denominators to get the system \[-x^{2}z^{2}+xz^{2}+2xz+z^{2}+x+2z+1=0,\] \[-x^{2}z^{2}-xz^{3}+x^{2}z+x^{2}+3xz+z^{2}+2x+2z+1=0.\] By a Gröbner basis computation (refer to [5] for more information about Gröbner bases) in the ring \((\mathbb{Q}[x])[z]\) we obtain the equivalent system of equations \[x^{7}-x^{6}-5x^{5}-3x^{4}=0,\] \[(2x^{3}+2x^{2})z-x^{6}+5x^{4}+6x^{3}+2x^{2}=0,\] \[z^{2}+(-2x^{2}+2)z+2x^{6}-2x^{5}-9x^{4}-7x^{3}-3x^{2}+1=0.\] From the first equation, if \(x>0\) then \(x\) must be a root of the polynomial \(x^{3}-x^{2}-5x-3\). By the rational root theorem, the only positive rational root of this polynomial is \(x=3\), which leads to the solution \((x,y,z)=(3,1,2)\).

The same series of computations works for all of the remaining tuples. The table below records, for each tuple, the polynomial \(p(x)\) of which \(x>0\) must be a root. \[\begin{array}{c|c}(i,j)&p(x)\\ \hline(1,1)&x^{5}-4x^{3}-6x^{2}-4x-1\\ (1,3)&x^{7}-4x^{6}-12x^{5}-4x^{4}+10x^{3}+11x^{2}+5x+1\\ (2,3)&x^{8}+x^{7}-7x^{6}-15x^{5}-3x^{4}+19x^{3}+23x^{2}+11x+2\\ (3,3)&3x^{8}+17x^{7}+23x^{6}-6x^{5}-34x^{4}-22x^{3}+2x^{2}+7x+2\end{array}\] None of these polynomials has positive rational roots.

## 7. Acknowledgements

We would like to thank Joe Briggs for suggesting this problem in the Auburn Graduate Student Research Seminar. We also thank the other graduate students in the research seminar, particularly Haile Gilroy and Evan Leonard.
2309.16408
Assessing the Solvency of Virtual Asset Service Providers: Are Current Standards Sufficient?
Entities like centralized cryptocurrency exchanges fall under the business category of virtual asset service providers (VASPs). As any other enterprise, they can become insolvent. VASPs enable the exchange, custody, and transfer of cryptoassets organized in wallets across distributed ledger technologies (DLTs). Despite the public availability of DLT transactions, the cryptoasset holdings of VASPs are not yet subject to systematic auditing procedures. In this paper, we propose an approach to assess the solvency of a VASP by cross-referencing data from three distinct sources: cryptoasset wallets, balance sheets from the commercial register, and data from supervisory entities. We investigate 24 VASPs registered with the Financial Market Authority in Austria and provide regulatory data insights such as who the customers are and where they come from. Their yearly incoming and outgoing transaction volume amounts to 2 billion EUR for around 1.8 million users. We describe what financial services they provide and find that they are most similar to traditional intermediaries such as brokers, money exchanges, and funds, rather than banks. Next, we empirically measure DLT transaction flows of four VASPs and compare their cryptoasset holdings to balance sheet entries. Data are consistent for two VASPs only. This enables us to identify gaps in the data collection and propose strategies to address them. We remark that any entity in charge of auditing requires proof that a VASP actually controls the funds associated with its on-chain wallets. It is also important to report fiat and crypto asset and liability positions, broken down by asset type, at a reasonable frequency.
Pietro Saggese, Esther Segalla, Michael Sigmund, Burkhard Raunig, Felix Zangerl, Bernhard Haslhofer
2023-09-28T12:59:57Z
http://arxiv.org/abs/2309.16408v2
# Assessing the Solvency of Virtual Asset Service Providers: Are Current Standards Sufficient?

###### Abstract

Entities like centralized cryptocurrency exchanges fall under the business category of virtual asset service providers (VASPs). As any other enterprise, they can become insolvent. VASPs enable the exchange, custody, and transfer of cryptoassets organized in wallets across distributed ledger technologies (DLTs). Despite the public availability of DLT transactions, the cryptoasset holdings of VASPs are not yet subject to systematic auditing procedures. In this paper, we propose an approach to assess the solvency of a VASP by cross-referencing data from three distinct sources: cryptoasset wallets, balance sheets from the commercial register, and data from supervisory entities. We investigate 24 VASPs registered with the Financial Market Authority in Austria and provide regulatory data insights such as who the customers are and where they come from. Their yearly incoming and outgoing transaction volume amounts to 2 billion EUR for around 1.8 million users. We describe what financial services they provide and find that they are most similar to traditional intermediaries such as brokers, money exchanges, and funds, rather than banks. Next, we empirically measure DLT transaction flows of four VASPs and compare their cryptoasset holdings to balance sheet entries. Data are consistent for two VASPs only. This enables us to identify gaps in the data collection and propose strategies to address them. We remark that any entity in charge of auditing requires proof that a VASP actually controls the funds associated with its on-chain wallets. It is also important to report fiat and crypto asset and liability positions, broken down by asset type, at a reasonable frequency.

**JEL Classification:** C81, F31, G15, G20, G33, M41, O33

**Keywords:** _Blockchain, Proof of Solvency, Virtual Asset, Cryptoasset, VASP, Accounting, Auditing, Regulation_
## 1 Introduction

In 2022, the cryptoasset sector experienced a crash driven by two major incidents that exposed the repercussions of inadequate regulation and accountability in the industry. In May, Terra's algorithmic stablecoin protocol experienced a stablecoin run, similar to a bank run, on its associated cryptoassets LUNA and UST (Klages-Mundt & Minca, 2021; Briola et al., 2022).
This triggered the bankruptcy of the crypto lenders Celsius and Voyager, and of the hedge fund Three Arrows Capital (The Economist, 2022). In November, the crypto trading platform FTX filed for bankruptcy, leading to BlockFi's downfall and bankruptcy considerations for AAX and Genesis1. Even more recently, in June 2023, the U.S. Securities and Exchange Commission (SEC) brought charges against some of the largest U.S.-based VASPs (SEC, 2023a,b).

Footnote 1: See [https://nyti.ms/3WUnEP7](https://nyti.ms/3WUnEP7), [https://bit.ly/3kIjeGp](https://bit.ly/3kIjeGp), and [https://on.ft.com/3XTogs8](https://on.ft.com/3XTogs8).

These companies, and other centralized cryptoasset exchanges (CEXs) like FTX, fall under the broader definition of virtual asset service providers (VASPs). They facilitate financial activity involving virtual assets (VAs), such as their exchange for other VAs or fiat currencies, their custody and transfer via cryptoasset wallets, and portfolio management services for their customers (FMA, 2021; EC, 2018, 2022; FATF, 2021). As Figure 1 shows, VASPs lie at the interface of the traditional and the crypto financial ecosystems, whose activities are respectively called _off-chain_ and _on-chain_ in jargon.

The aforementioned and other (Moore et al., 2018) incidents that affected cryptoasset exchanges highlight a critical aspect of VASPs, i.e., the lack of proper accounting and business continuity concepts (Zetzsche et al., 2023). While their _off-chain_ activities are audited according to generally accepted accounting principles, _on-chain_ assets are held in pseudo-anonymous cryptoasset wallets across multiple, possibly privacy-preserving DLTs (ElBahrawy et al., 2017) and are not yet systematically audited. Furthermore, whilst VASPs share several characteristics with traditional financial intermediaries, they are less regulated and their activities often lack transparency. Whether and how to regulate them is an ongoing, highly controversial debate resembling a tug-of-war game; some argue for more VASP regulation (Moser & Narayanan, 2019), while others claim it would come at a substantial social cost and could be misunderstood as undeserved legitimacy2. A clear understanding of the financial functions VASPs provide, how they operate, and what risks are involved may provide guiding principles for regulators and policymakers.

Footnote 2: See e.g. [https://on.ft.com/3bgfFhYD](https://on.ft.com/3bgfFhYD) and [https://bit.ly/3Y0JqBo](https://bit.ly/3Y0JqBo).

This paper proposes an approach for determining a virtual asset service provider's solvency status by measuring its cryptoasset holdings. By solvency, we mean that the total amount of assets held in custody is larger than the total amount of liabilities, whereby the difference is equity. We investigate the VASPs registered with the Financial Market Authority (FMA) in Austria in the context of the Anti-Money Laundering Act. We compare data from three distinct sources: we rely on publicly available DLT transaction records from the Bitcoin and Ethereum DLTs and use established algorithms (Androulaki et al., 2013; Ron & Shamir, 2013; Meiklejohn et al., 2016) to identify and cluster cryptoasset wallets likely controlled by the same entity. Then we reconstruct the VASPs' cryptoasset flows, compare their net positions to balance sheet data from the commercial register3, and complement them with supervisory data from the FMA.

Footnote 3: The commercial register is a central, public directory.
It contains important information about numerous companies; its primary purpose is to make relevant information about registered companies available to business partners.

To the best of our knowledge, our work is the first that combines these distinct sources in a unified framework. Also, to our understanding, a consolidated approach to measuring the types of cryptoassets held by VASPs against their liabilities to customers does not yet exist, although their activity is based on DLTs whose transactions are publicly auditable by design. Moreover, we position VASPs in the landscape of financial intermediaries by systematically comparing the services they offer to those of traditional financial service providers. While previous research has compared VASPs to banks (Anderson et al., 2019; Dagher et al., 2015), we discuss why this comparison can be misleading.

Our work provides the following contributions:

* We study 24 Austrian VASPs and systematize the services they offer. We find that they are most similar to _brokers_, _money exchanges_, and _funds_, rather than to _banks_;
* We provide regulatory data insights showing that their yearly incoming and outgoing transaction volume in 2022 amounted to 2 billion EUR for around 1.8 million users;
* We measure on-chain transaction flows for four VASPs and compare their holdings to balance sheet data from the commercial register. Data are consistent for two VASPs only;
* We identify gaps in data collection practices and propose strategies to fill them: any entity in charge of auditing requires proof that a VASP actually controls the funds associated with its on-chain wallets; it is also important to report fiat and crypto asset and liability positions, broken down by asset type, at a reasonable frequency.

Currently, supervisory auditing of VASPs does not fully exploit the public availability of DLT transactions. We believe our work provides valuable insights toward a better and more systematic assessment of their solvency, and might help make the process more effective and less error-prone. By comparing the VASPs' cryptoasset holdings to balance sheet data, we show that the major issues are related to the different management of cryptoasset wallets in different DLTs, the lack of wallet address attribution data for VASPs, and the absence of breakdowns by cryptoasset type in balance sheets.

The paper is structured as follows. In Section 2, we introduce key background concepts and review the literature. Then we analyze VASPs and their features in Section 3. Section 4 describes the data and our measurement approach, and reports our results. In Section 5 we discuss how the data gap can be reduced, while in Section 6 we draw conclusions. Our study follows an open-source approach and can be replicated for any other entity, provided data on its cryptoasset wallets are available.

Figure 1: _Virtual Asset Service Provider. VASPs hold virtual assets in custody, transfer them, and facilitate their purchase and sale against fiat currencies and other virtual assets. Customers can interact with them by depositing or withdrawing cryptoassets through DLT-based transactions, or fiat currency via commercial banks._

## 2 Background and related literature

### Definitions - What is a VASP?

While the term VASP has become increasingly common, its precise meaning and the specific activities that fall under this term still need to be clarified.
We begin by providing the definition of VASPs according to the FMA (2021), which follows the 5th EU AML directive (EC, 2018) and the Financial Action Task Force guidelines (FATF, 2021). According to this definition, a **Virtual Asset**, implemented on a distributed ledger technology, is (FMA, 2021, p. VII)

_[...] a digital representation of value that is not issued or guaranteed by a central bank or a public authority, is not necessarily attached to a legally established currency and does not possess a legal status of currency or money, but is accepted by natural or legal persons as a means of exchange and which can be transferred, stored and traded electronically._

We note that in the Markets in Crypto-Assets Regulation (MiCA), the term "crypto asset" is used instead of virtual asset. In our context, the two terms can be considered equivalent4.

Footnote 4: MiCA also refers to crypto asset service providers (CASPs), rather than to VASPs. For the purpose of this work, VASPs and CASPs can be considered synonymous as well.

**Virtual Asset Service Providers** are any natural or legal person that, as a business, conducts activities or operations for or on behalf of another natural or legal person. They can offer _"[...] one or more services"_ (FMA, 2021, p. VIII), which we summarize in Table 1.

\begin{table} \begin{tabular}{l l} **Service** & **Description** \\ \hline **Custodian** & Services to safeguard private cryptographic keys, to hold, store and transfer virtual assets on behalf of a customer (custodian wallet providers) \\ \hline **V2F-Exchange** & Exchanging of virtual assets into fiat currencies and vice versa \\ \hline **V2V-Exchange** & Exchanging of one or more virtual assets between one another \\ \hline **Payment** & Transferring of virtual assets \\ \hline **Issuance** & Provision of financial services for the issuance and selling of virtual assets \\ \hline \end{tabular} \end{table} Table 1: _Description of the services provided by VASPs in Austria._

VASPs lie at the interface of the traditional and the crypto financial ecosystems. The former encompasses financial activity with fiat currencies, i.e., legal tender money, and fiat assets, i.e., assets denominated in fiat currencies (similarly to cryptoassets being assets denominated in a cryptocurrency). It can rely on commercial banks and other traditional financial intermediaries. The latter entails financial activity executed on Distributed Ledger Technologies (DLTs) like the Bitcoin and Ethereum blockchains, and with cryptoassets such as bitcoin, ether, and the stablecoins tether (USDT), USD coin (USDC), or DAI. On the off-chain side, VASPs and the customers interacting with them have strong identities; that is, the former need to register with regulatory bodies and the latter undergo identification processes such as KYC and AML5 compliance. On the on-chain side, activities involve weak identities (Moser et al., 2013; Ford and Bohme, 2019): transactions, enabled by cryptographic keys, occur among pseudonymous counterparties, and the same entity can control multiple addresses.

We also note that VASPs differ from decentralized finance (DeFi) actors. This term indicates an emerging financial ecosystem built on DLTs that is non-custodial and does not require a
central organization to operate (Auer et al., 2023). VASPs are instead centralized intermediaries that provide interfaces to exchange cryptoassets via conventional IT systems, and transactions are not necessarily recorded on DLTs but are at times stored in private ledgers (Aramonte et al., 2021; Auer et al., 2022a).

### Proof of Solvency

A company is solvent if the total amount of assets held in custody is larger than the total amount of liabilities, whereby the difference is equity. Substantial documentation exists regarding incidents and exchange closures of VASPs (Moore et al., 2018), including recent events such as FTX's bankruptcy filing. To increase transparency and foster trust, several VASPs have recently disclosed lists of cryptoasset wallet addresses as a _proof of reserves_, i.e., proof that they hold a given amount of assets. However, such an approach alone does not constitute a valid _proof of solvency_ because it does not guarantee that VASPs have the financial resources to meet their current and future obligations5. First, a _proof of deposits_, i.e., a verification of the customers' deposit amount, is needed as well (Buterin, 2022). Second, in addition to revealing the existence of an address, it is necessary to prove control over the corresponding private key. Third, even this might not be sufficient, as colluding actors could lend each other cryptoassets to conduct one-time proofs of reserves.6

Footnote 5: _Proof of reserves_ and _proof of solvency_ are terms adopted in jargon. More technically, the latter is the capital cushion to fulfill liabilities and obligations against customers, i.e., the capital requirements. Proofs of reserves were collected by projects such as DefiLlama, which gathered several CEX wallet addresses: [https://bit.ly/3KpdnHT](https://bit.ly/3KpdnHT).

Footnote 6: See e.g., [https://bit.ly/3XXBlgP](https://bit.ly/3XXBlgP) and [https://bit.ly/3DjajJiq](https://bit.ly/3DjajJiq).

Data from the commercial register contain information on both the asset and liability sides of VASPs' balance sheets, and the fiat assets are audited according to generally accepted accounting principles. Therefore, in our context, it is sufficient to verify that the asset side is consistent with the cryptoasset holdings of a VASP to prove its solvency.

### Traditional financial intermediaries

VASPs allow customers to deposit and exchange assets, and can provide consulting and portfolio management services, often holding funds on their customers' behalf (Anderson et al., 2019). Therefore, they share several characteristics with traditional financial intermediaries. Here we describe the ones that are most important in our context and their main economic functions. A comprehensive description of traditional financial intermediaries can be found in Howells & Bain (2008) and Cecchetti & Schoenholtz (2014).

The two primary financial intermediary categories are deposit-taking institutions (DTIs) like banks and non-deposit-taking institutions (NDTIs) such as brokerage firms, mutual funds, and hedge funds. Other DTIs include building societies (UK), savings and loan associations (US), and mutual and cooperative banks (DE, FR). The main difference is that DTIs issue loans and their liabilities serve as official money (Howells & Bain, 2008). Consequently, an increase in DTI business ultimately increases the money supply in an economy. _Banks_, by far the most important DTIs, pool small savings to make large loans. They also provide liquid deposit accounts, access to the payment system, and screen and monitor borrowers.
Among NDTIs, _brokerage firms_ facilitate access to trading in financial instruments. They offer custody and accounting services for customer investments and are additionally involved in the clearing and settlement of trades. _Mutual funds_, including exchange-traded funds (ETFs), sell shares to customers and invest in a diverse range of assets, offering access to large, diversified portfolios. _Hedge funds_ operate as financial partnerships, often requiring accredited or high-net-worth investors. They pool savings to earn returns through actively managed investment strategies, including derivatives, arbitrage, and short sales.

### Literature

The academic literature on VASPs is vast and primarily focuses on cryptoasset exchanges, highlighting the central role they play in the crypto ecosystem (Makarov and Schoar, 2021; Lischke and Fabian, 2016). Recent studies show that most of the trading on cryptoasset markets happens off-chain on CEXs (Auer et al., 2022b; Brauneis et al., 2019); according to Makarov and Schoar (2021), 75% of bitcoin transactions involve exchanges or exchange-like service providers. CEXs also play a major role in facilitating price discovery (Brandvold et al., 2015). Scholars have exploited price time series from the largest exchanges to investigate price formation dynamics (Kristoufek, 2015; Katsiampa, 2017; Li and Wang, 2017) and to estimate the fundamental value of cryptoassets (Cheah and Fry, 2015; Kristoufek, 2019). Other studies have instead used exchange-based data to investigate topics such as market (in)efficiency (Urquhart, 2016; Kristoufek, 2018), the behavior of bitcoin as a currency or asset (Glaser et al., 2014; Yermack, 2015), the effects of cross-listing on returns (Benedetti and Nikbakht, 2021), as well as arbitrage, both across exchanges (Makarov and Schoar, 2020) and within one exchange alone (Saggese et al., 2023). Exchange data were also used to study market microstructure aspects, such as price jumps (Scaillet et al., 2020) or market liquidity (Brauneis et al., 2022). Another relevant strand of literature investigates risks associated with CEXs, such as price manipulation (Gandal et al., 2018), susceptibility to attacks (Feder et al., 2017), wash trading (Chen et al., 2022), and data fabrication (Cong et al., 2022).

Previous studies have provided taxonomies or categorizations of crypto financial intermediaries (Kazan et al., 2015; Blandin et al., 2020) and compared them to traditional ones (Aramonte et al., 2021). In Fang et al. (2022), the authors define cryptoasset trading and survey the related works. Our work differs in that we base our categorization of virtual asset service providers on the legal definition of the Austrian Financial Market Authority. Then, we identify the financial functions that VASPs offer and provide an overview of the different types of VASPs.

Close to our work, Decker et al. (2015) and Dagher et al. (2015) implemented software-based solutions to automate the audit of centralized Bitcoin cryptoasset exchanges. Other works focused instead on proofs of reserves for less relevant DLTs (Dutta and Vijayakumaran, 2019; Dutta et al., 2021). Our work differs as it is based on an empirical approach that cross-references multiple different sources of information (cryptoasset wallets, balance sheet data from the commercial register, and information from supervisory entities), and because it focuses on the two most relevant blockchains (Bitcoin and Ethereum), while remaining extensible to others.
## 3 VASPs: A Closer Examination

Using information from the Austrian Financial Market Authority (FMA), we first describe what financial services VASPs offer and what cryptoassets they support. Next, we complement the FMA data with additional public information collected from the VASPs' websites to group them based on similarity scores. Finally, we compare their economic functions, highlighting similarities and differences to traditional financial intermediaries.

### The Austrian VASP landscape

VASPs in Austria are supervised by the Financial Market Authority (FMA) under the Anti-Money Laundering Act. In December 2022, 24 VASPs were registered in the FMA database7.

Footnote 7: [https://bit.ly/3kMUKwg](https://bit.ly/3kMUKwg)

Figure 2a shows the aggregate number of VASPs registered for each service described in Table 1. The vast majority of them (\(N=20\)) offer _V2F-Exchange_, i.e., services to exchange virtual assets and fiat currencies; nine also facilitate the exchange from and to other virtual assets (_V2V-Exchange_). In most cases, customer funds are or can be kept in custody by the VASP (\(N=15\)). Finally, only a few of them are legally authorized to transfer virtual assets and to issue and sell them (respectively the services _Payment_, \(N=4\), and _Issuance_, \(N=3\)). Additional details on the number of services offered per VASP are reported in Appendix A.

Figure 2b shows which virtual assets are used by the Austrian VASPs. We follow the taxonomy described in Auer et al. (2023) to aggregate cryptoassets into five categories. We could retrieve reliable information for 20 VASPs out of the 24 in the FMA database. Notably, all VASPs offer services related to bitcoins (\(N=20\)). More than 75% support Ethereum (\(N=16\)), and these typically also support Ethereum tokens, i.e., ERC-20 and (or) ERC-721 compatible non-native tokens, and stablecoins (respectively \(N=8\) and \(N=12\)). A limited number of VASPs also provide services related to privacy-focused cryptoassets (i.e., Monero, Dash, Zcash). Finally, several VASPs also support tokens native to other DLTs (e.g., Litecoin or Cardano).

In addition to FMA data, we collect additional public information documented on the VASPs' websites. Our aim is to categorize VASPs by their service offering. We construct categorical variables that indicate whether the VASP offers custody services, facilitates payments, allows users to exchange cryptoassets, implements a trading platform, or offers consulting or investment services. We consider 21 VASPs for which we could gather sufficient information. Data for each (anonymized) VASP are reported in Appendix A. Whilst the sample is small and the features are few, to ensure consistency and objectivity in categorizing VASPs, we exploit an unsupervised learning method. We aggregate them using the hierarchical agglomerative clustering (HAC) method (Murtagh and Contreras, 2012). With this bottom-up approach, objects are iteratively clustered based on their similarity. The two main parameters of HAC are the distance among objects and the linkage method, i.e., the distance used to merge groups. In our setting, we select the Euclidean distance and the Ward metric, and distances are iteratively computed using the Lance-Williams update formula. Results are similar when using other parameters. We report our classification in Figure 3.
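As a minimal sketch of this clustering step (the feature matrix below is illustrative, not our actual data), SciPy's hierarchical clustering implements Ward linkage and applies the Lance-Williams update internally:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# One row per VASP; binary service indicators, e.g.
# [custody, payments, exchange, trading platform, consulting/investment]
X = np.array([
    [0, 0, 1, 0, 0],  # exchange only
    [1, 0, 1, 0, 1],  # custody + exchange + consulting
    [1, 0, 1, 1, 0],  # custody + exchange + trading platform
    [0, 1, 1, 0, 0],  # payments + exchange
])

# Ward linkage on Euclidean distances; SciPy merges clusters iteratively
# using the Lance-Williams update formula.
Z = linkage(X, method="ward", metric="euclidean")

# Cut the dendrogram into a fixed number of groups (three here; five in the paper).
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```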
Figure 3 categorizes VASPs into five clusters. The first one (red rectangle, \(N=7\)) includes VASPs that do not keep customers' funds in custody and only facilitate the exchange of virtual assets for fiat currencies and (or) other virtual assets. Some of them automate the process through physical vending machines. Thus, the first left branch mainly separates VASPs that offer custody from those that do not. We identify them as "Group 1". The green rectangle identifies VASPs providing investment advice and/or portfolio management in addition to custody services (\(N=6\)). They propose investing strategies, give advice on portfolio management and coin selection, and in some cases lend customer funds. These VASPs are referred to as "Group 2". The purple rectangle ("Group 5") aggregates VASPs that act as cryptoasset custodians. Typically, they also facilitate the exchange of cryptoassets and are similar to VASPs in the blue rectangle ("Group 3"). The latter, in addition, provide customers with an internal trading platform, manage and match orders in a private limit order book, and update their account balances in cryptoassets or fiat money when trades are executed. Trades executed in private ledgers do not affect the public distributed ledgers unless the customers withdraw cryptoassets from the service. Such VASPs play an essential role in the crypto-financial system: as a result of the matching mechanism for demand and supply, these are the platforms where price formation takes place. The other VASPs derive their offered prices from other platforms as an exogenous variable. In the following, we consider these two as a single group (i.e., Group 3). All the VASPs in the groups described above are cryptoasset centralized exchanges, or CEXs. The remaining VASP in the yellow rectangle is instead a payment processor service. It offers solutions to facilitate the purchase and sale of commodity goods with cryptoassets; such VASPs play a minor role in the crypto ecosystem.

Figure 2: **The Austrian VASP landscape.** Subfigure (a) shows the number of VASPs registered for each of the five service categories described above. Most of the VASPs offer V2F-Exchange (\(N=20\)) and offer more than one service, such as custody (\(N=15\)). Subfigure (b) reports how many VASPs offer services related to bitcoin (\(N=20\)), ether (\(N=16\)), and other relevant cryptoassets. Most VASPs exploit multiple DLTs.

### A comparison with traditional financial intermediaries

Having outlined the landscape of VASPs in Austria, we are now interested in understanding how they differ from traditional financial intermediaries. Figure 4 stylizes the traditional financial intermediaries on the right and the VASPs on the left. In the middle, rectangles represent the primary economic services, and links indicate what services each intermediary category offers. The comparison shows that an analogy with traditional intermediaries exists for three out of the four groups described in Figure 3. More specifically, VASPs in Group 1 operate similarly to _money exchanges_: indeed, the only service they offer is to buy and sell virtual assets for customers. VASPs in Group 2 provide investment services to their users, akin to _funds_. Third, Groups 3 (and 5) include VASPs that allow users to trade and keep their funds in custody, and thus act as _brokers_, connecting buyers and sellers to facilitate a transaction. The last group, which provides payment services, can be compared to payment processor systems.
Interestingly, we find that the comparison of VASPs to banks can be misleading: while the two overall share several financial services, such as exchanging money, trading, or investing, banks also enable customers to open loan positions with the funds they hold and to open savings and deposit positions. On this note, we mention that some VASPs have recently acquired an e-money institution license. However, that does not automatically qualify them to offer bank-type financial services. First, e-money institutions do not have the same supervisory requirements as traditional banks. Second, they do not necessarily offer bank-type financial services -- they can, e.g., use the license only to process their fiat payments. Further information on the taking up, pursuit, and prudential supervision of the business of electronic money institutions can be found in Directive 2009/110 of the European Commission (EC, 2009).

Figure 3: _Categorization of VASPs by their service offering. We use a hierarchical agglomerative clustering approach to categorize VASPs. The two largest groups are VASPs that facilitate the exchange of virtual assets without offering custody (red) and VASPs that offer consulting services (green). The others offer custody and exchange services (purple), are payment processors (yellow), or implement trading platforms (blue)._

## 4 Measuring VASPs' Cryptoasset Holdings

After describing the VASPs' service offerings, we now move on and devise an approach to empirically assess their solvency by correlating data from multiple on-chain and off-chain sources. The underlying intuition is that, by quantifying the cryptoassets held on-chain by one VASP, we should be able to verify the numbers reported in the balance sheets. Furthermore, it is sufficient to measure the asset side, because on the liability side cryptoassets are either customer liabilities or equity. Since balance sheet assets minus liabilities are equal to equity, our approach serves as a first proof of solvency.

We first discuss which DLTs we analyze, motivate our choice, and document our approach to reconstruct the VASPs' net positions by extracting the data from the two most relevant DLTs, Bitcoin and Ethereum. VASP wallet addresses are extracted from a large collection of public attribution tags, or identified by executing manual transactions, and have not been revealed by the VASPs themselves. Next, we describe the balance sheet data from the commercial register. We concentrate our empirical analysis on four VASPs whose wallets appear in the attribution tag collection and that have published their balance sheets consistently over time, allowing us to compare on-chain cryptoasset holdings to balance sheets. Together, they account for around 99% of the total market share.

### On-chain data

DLTs can be divided into two major typologies based on their conceptual design: they either follow the Bitcoin-like Unspent Transaction Output (UTXO) model or the Ethereum-like account model. Both support a native token by design, like bitcoin or ether. The latter, by enabling the deployment of arbitrary smart contracts, also supports issuing non-native tokens such as the stablecoins USDT, USDC, and DAI.

Figure 4: **Comparison of traditional financial intermediaries with VASPs.** _Circles on the left represent VASPs, divided into groups as described in Figure 3, while on the right are traditional financial intermediaries. Links point to the financial functions offered by each financial intermediary. VASPs are most similar to money exchanges, brokers, and funds, rather than banks. The colors in the circles highlight what traditional intermediary each group is most similar to._

We begin by gathering the transaction history of the two most relevant DLTs, Bitcoin and
Ethereum, from their origin to the 3rd of April 20228. We focus on the Bitcoin and Ethereum ledgers for the following reasons. First, as shown in Section 3, all VASPs operate with bitcoins and, in most cases, also with ether; cryptoassets deployed on other DLTs are less relevant. Second, bitcoin, ether, and the stablecoins USDT and USDC alone account for more than 70% of the total cryptoasset market capitalization, and these are also the cryptoassets most traded and held by CEX customers9. Third, while stablecoins like USDT are deployed on multiple smart-contract-compatible ledgers10, and significant amounts of tokens are currently deployed on other DLTs as well11, Ethereum is historically the most relevant one.

Footnote 8: The time frame can be extended beyond April 2022 to include the balance sheets of upcoming years when available.

Footnote 9: See [https://coinmarketcap.com/charts/](https://coinmarketcap.com/charts/) and [https://coinmarketcap.com/rankings/exchanges/](https://coinmarketcap.com/rankings/exchanges/)

Footnote 10: see, e.g., USDT [https://bit.ly/3YSYNwR](https://bit.ly/3YSYNwR) and USDC [https://www.circle.com/en/multichain-usdc](https://www.circle.com/en/multichain-usdc)

Footnote 11: [https://tether.to/en/transparency/](https://tether.to/en/transparency/)

We implement two approaches to extract on-chain VASP-related information for the UTXO-based and the account-based DLTs. The entities that operate on the Bitcoin blockchain interact with each other as a set of pseudo-anonymous addresses. We exploit known address clustering heuristics (Androulaki et al., 2013; Ron and Shamir, 2013; Meiklejohn et al., 2016) to associate addresses controlled by the same entity12. Furthermore, we exploit a collection of public tagpacks, i.e., attribution tags that associate addresses with real-world actors, to filter the clusters associated with any of the VASPs considered in our study. We expanded the dataset by conducting manual transactions with the VASPs in our sample (further details are discussed in Appendix A, where we also report a list of the addresses used). We identified 88 addresses and their corresponding clusters associated with four different VASPs.

To reconstruct their net positions, we filter the Bitcoin transaction history and select only the transactions in which the sender or recipient is an address associated with the four VASPs. In total, we consider 1,574,125 Bitcoin transactions.

We use a different approach for the Ethereum DLT. An Ethereum address identifies an account whose state is updated via state transitions through transactions. The account state stores information about the balance and the number of transactions executed, thus maintaining a historical database. While approaches for address clustering have been devised for Ethereum as well (Victor, 2020), in practice, addresses are typically reused. We thus extract all relevant information by running a full Erigon Ethereum archive node (Ledgerwatch, 2022). Similarly to the previous approach, we exploit attribution tags and manual transactions to identify the addresses associated with VASPs. In total, we identified nine relevant addresses associated with three different VASPs. We proceed by querying the state of each account, from the beginning of the Ethereum transaction history (block 0) to the 3rd of April 2022, every 10,000 blocks.
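A minimal sketch of this balance-sampling step is shown below, assuming web3.py against a local archive-node endpoint (the endpoint, address, and block range are placeholders, not our actual configuration):

```python
from web3 import Web3

# Hypothetical archive-node endpoint (e.g., a local Erigon node) and address.
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
vasp_address = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

START_BLOCK, END_BLOCK, STEP = 0, 14_500_000, 10_000  # sample every 10,000 blocks

balances = {}
for block in range(START_BLOCK, END_BLOCK + 1, STEP):
    # Historical state queries like this require an archive node.
    wei = w3.eth.get_balance(vasp_address, block_identifier=block)
    balances[block] = wei / 10**18  # convert wei to ether

# `balances` now holds a coarse time series of the wallet's ether holdings;
# token balances (USDT, USDC, ...) would be read via the ERC-20 balanceOf call.
```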
In addition to the ether balance, we collect data on the address balances for the tokens USDT, USDC, DAI, wETH, and wBTC. The list of ground-truth addresses is reported in the appendix.

We remark that our attribution dataset contains more than 265,000,000 deanonymized Bitcoin addresses, covering more than 24% of the total number of existing Bitcoin addresses. In addition, 278,244 tagged Ethereum addresses cover 0.11% of the existing addresses. The former identifies around 3,000 entities active in the Bitcoin ecosystem, the latter more than 25,000 Ethereum entities.

### Off-chain data

We collect balance sheet data for 17 Austrian VASPs through the Austrian Commercial Register. We construct an unbalanced panel dataset13 covering the years 2014 to 2021. Ultimately, in our empirical analysis, we use the data of the four Austrian VASPs for which we can identify both on-chain and off-chain data. Our variable of interest is a firm-level measure of cryptoasset holdings. Some firms describe their cryptoasset holdings as explicit balance-sheet items; for other firms that aggregate them with other items, we construct a variable that approximates the corresponding cryptoasset holdings from the described asset items. The balance sheet does not allow us to distinguish between cryptoasset holdings such as ether and bitcoin. The variable _crypto asset holdings_, shown as red markers in Figures 6, 7, 8, and 9, represents those balance-sheet items.

### Comparing on- and off-chain data

Supervisory data from the FMA show that in a 12-month period (roughly 2021 until 2022, due to varying reporting dates for VASPs), the transaction volume of virtual assets converted to EUR conducted by VASPs registered in Austria amounts to 2.03 billion incoming and 2.76 billion outgoing. The transaction volume is computed as the sum of the transactions related to customer relationships only. As Figure 5 shows, in comparison, during the same time we observed a transaction volume for credit institutions of 723.46 (incoming) and 780.38 (outgoing) billion, and of 7.37 (incoming) and 77.07 (outgoing) billion for payment institutions.

Table 2 reports additional supervisory data from the FMA on the number of VASP customers by residence and legal form. A VASP customer refers to a natural or legal person who has opened an account and gone through a validated KYC process with the particular VASP. The rows distinguish natural persons, i.e., individuals, and legal persons, i.e., entities with legal rights. Customers are further divided by jurisdiction: the first column indicates the number of Austrian customers, while the second one reports the number of customers in the European Union, excluding Austrians (we note that customers are never counted in two columns). The subsequent columns identify customers by jurisdictions that are respectively offshore financial centers (IMF, 2019), subject to embargo (WKO, 2020), and under increased monitoring (grey list; FATF, 2022). The last columns respectively aggregate all remaining countries and report the total number of users. Countries that appear in several lists are assigned to the group that bears the greater risk. In total, there are 1.79 million customers, mainly natural persons. The vast majority are Austrian or residents of the European Union (around 327,000 and 1,279,000, respectively). We note that this number might include customers who created an account but never transacted, i.e.,
the count is not weighted by transaction number. Furthermore, the same customers can have accounts at multiple VASPs. Customers from subsidiaries and inactive customers are excluded.

Figure 5: **Transaction volumes of Austrian VASPs and other financial intermediaries.** The incoming and outgoing transaction volumes of VASPs are respectively one and two orders of magnitude smaller than those of payment institutions and credit institutions.

The four entities we study cover around 99% of the Austrian VASP market measured in total assets. Consistent with the labels introduced in Figure 4, we denote them as VASP-2, VASP-5, VASP-9, and VASP-12 and withhold their real names so that the corresponding VASPs cannot be directly recognized in our study. They are representative of different VASP groups (i.e., money exchanges, brokers, and brokers with trading platforms).

#### 4.3.1 VASP-2

**Observations.** We report the values for VASP-2 in Figure 6. In this and the subsequent plots, the bitcoin holdings are in dark blue, ether in light blue, USDC in dark green, USDT in light green, and DAI in gray. The dots represent the cryptoasset holdings declared in the balance sheet data at the end of each year for the period 2018 to 2021. This VASP implements a trading platform and falls within Group 3. The cryptoasset holdings identified on-chain correspond to 75.59% of the cryptoassets declared in the balance sheet at the end of 2018, 66.68% at the end of 2019, 194.56% at the end of 2020, and 116.79% at the end of 2021. The amount of bitcoin increased significantly after April 2021, and the largest amount of tokens is held in ether.

**Findings.** Overall, the two sources of information point in the same direction. Interestingly, after 2020, the on-chain activity is higher than what the balance sheet reports. A possible interpretation is that the cryptoassets in excess represent equity or private funds. VASP-2 reports well-separated balance sheet positions, allowing us to compute precisely the amount of cryptoasset holdings.

Table 2: _VASP customers' residency in different jurisdictions. We report figures for natural persons (top) and legal persons (bottom). Customers are never double counted; e.g., the first column reports the number of Austrian customers, while the second reports European Union members excluding Austrians. We further distinguish customers by jurisdictions that are offshore, subject to embargo, and under increased monitoring ("grey list"). The last columns aggregate all other jurisdictions (Other) and report the total number of customers (Total). Source: supervisory data from FMA._

|                 | Austria | EU(*)     | Offshore | Embargo | Grey list | Other   | Total     |
|-----------------|---------|-----------|----------|---------|-----------|---------|-----------|
| Natural persons | 326,660 | 1,279,132 | 1,160    | 1,183   | 36,421    | 141,491 | 1,785,747 |
| Legal persons   | 326     | 147       | 2        | -       | -         | 26      | 501       |

(*) excluding Austrian customers

Figure 6: _Estimation of the cryptoasset holdings of VASP-2. Colors correspond to different cryptoassets: bitcoin in dark blue, ether in light blue, USDC in dark green, USDT in light green, and DAI in gray. Red markers indicate the cryptoasset holdings declared in the balance sheet data at the end of each year for the period 2018 to 2021._

#### 4.3.2 VASP-12

**Observations.** Figure 7 shows the cryptoasset holdings of VASP-12. It is a non-custodial VASP that provides exchange services based on both ether and bitcoin.
The cryptoassets measured on-chain are partially comparable with those reported on the balance sheets (42.59% at the end of 2019, 102.45% at the end of 2020, but 549.38% at the end of 2021).

**Findings.** Similarly to VASP-2, on-chain activity is higher than the value reported on the balance sheet after 2020. As expected, the amount of cryptoasset holdings is small, as the VASP is non-custodial, and it exceeds 100K EUR only after 2021. All reported assets are ether: the absence of stablecoins is expected, as this VASP trades bitcoin, ether, and a few other cryptoassets. However, we could not identify bitcoin flows from or to their wallets in the time frame we considered. To identify the addresses associated with this VASP, we relied on manual transactions: re-identification attacks are a possible strategy to collect attribution tags. While this strategy is effective for Ethereum accounts, the Bitcoin addresses we gathered identify VASP activity dating back only to November 2022, thus outside of the time frame we considered. Regarding balance sheet data, we note that the values in this case are a proxy: cryptoassets are aggregated with other items in the balance sheet.

#### 4.3.3 VASP-9

**Observations.** VASP-9 is shown in Figure 8. It is categorized in Group 5 in Subsection 3.1. Unlike the previous cases, the cryptoasset holdings cover only a tiny fraction of the funds declared in the balance sheets; in the best case, i.e., at the end of 2021, we can identify on-chain only 16.85% of the total cryptoassets reported in the balance sheet.

**Findings.** A possible explanation for the discrepancy is that our dataset might include only _hot wallets_, i.e., addresses used to conduct daily operations such as deposits and withdrawals, but not the _cold wallets_, i.e., addresses that control the large majority of customer funds and that are subject to stricter security measures. An alternative explanation could be that the considered VASP is part of a larger company structure and that, next to its VASP activities, the company also engages in non-VASP-related business activities. In that case, the reported balance sheet items might contain aggregated business activities, whereby it is difficult to disentangle the specific positions related to the crypto activities of VASP-9. As a result, the proxy variable from the balance sheet might then overestimate the actual figure we are interested in. Furthermore, this VASP operates with multiple DLTs and also exchanges stablecoins, but the cryptoasset wallets we analyzed do not hold any USDC, USDT, or DAI.

Figure 7: _Estimation of the cryptoasset holdings of VASP-12. On-chain and off-chain data correspond until the end of 2020. All reported assets are ether. Balance sheet data are a proxy, as cryptoassets are aggregated with other items in the balance sheet._

#### 4.3.4 VASP-5

**Observations.** VASP-5 is the last we analyze; values are shown in Figure 9. This VASP bases its services on the purchase and sale of bitcoins. For this VASP, using both attribution tags in the TagPack database mentioned above and re-identification strategies, we could only gather information for a few months between 2014 and 2017 and after 2021. The results are consistent only for the years 2015 and 2016, when the VASP held very small amounts of cryptoassets compared to the subsequent years.

**Findings.** Similarly to VASP-9, we could not collect sufficient data to obtain values comparable to the figures reported in the balance sheets.
As for VASP-12, the Bitcoin addresses we gathered through manual transactions identify clusters whose transaction history only dates back a few months (to mid-2021). Again, this highlights that re-identification is less effective for Bitcoin than for Ethereum addresses. The data gap between 2018 and 2020 reveals another issue: likely, after 2017, funds were moved to other addresses that are not linked to those in our sample. VASPs apply different strategies to organize their cryptoasset transfers and holdings, e.g., creating new addresses for each transaction or reusing them. If addresses are not reused, cryptoasset holdings can be held at multiple, apparently unrelated clusters that can change over time.

Figure 8: _Estimation of the cryptoasset holdings of VASP-9. The cryptoasset holdings cover only a small fraction of the funds declared in the balance sheets._

Figure 9: _Estimation of the bitcoin holdings in Euro of VASP-5. On-chain and off-chain data are comparable only in 2015 and 2016._

## 5 Closing the Data Gap

We presented an approach to measure the cryptoasset holdings of VASPs by correlating data from multiple on-chain and off-chain sources. The empirical analysis of four VASPs reveals that only two of them show consistent comparisons of on-chain and off-chain data, indicating potential data-related problems for the others. In this section, we systematically discuss the encountered data issues and provide suggestions for possible improvements.

### On-chain data issues

**Different wallet management strategies.** VASPs employ diverse approaches to manage their cryptoasset transfers and holdings. While some create new addresses for each user transaction, others might reuse addresses. Moreover, their approach varies when dealing with UTXO-based or account-based ledgers. We observed that VASPs deploy user-specific Ethereum smart contract wallets for each customer and subsequently forward the funds to a collector wallet. We did not observe this pattern with Bitcoin. This organization strategy makes it more challenging to identify cryptoasset holdings associated with VASPs. Identification largely relies on heuristic approaches, which can produce false positives and are often inadequately understood.

**Lack of attribution data.** Another issue concerns the lack of attribution data, i.e., associations of addresses with additional contextual information allowing the identification of their owner. Our attribution dataset contains more than 265,000,000 deanonymized Bitcoin addresses and 278,244 tagged Ethereum addresses. Furthermore, we have conducted additional manual transactions with the VASPs' services to identify and tag the specific addresses associated with the VASPs investigated in our study. Despite this, the resulting data only provide a partial view of their holdings, as shown in the previous section. Another issue associated with manual tagging is that it misses historical data: as we showed in Section 4 for VASP-12 and VASP-5, we could only trace the Bitcoin transaction history of a VASP back in time for a few months when using re-identification techniques.

**Missing cross-ledger perspective.** The data collected for both ledger types face a common issue -- they may only represent a portion of the total cryptoasset holdings. This could be because manual transactions used to tag hot wallets, which are addresses used for daily deposit and withdrawal operations, may not successfully identify cold wallets, i.e., the addresses that manage most of the VASP funds.
The latter are subject to stricter security measures that may prevent association with hot wallet addresses. Additionally, the wallets of VASPs such as VASP-2 and VASP-12 contain more funds than reported to the authorities, making it difficult to differentiate between customers' funds and other cryptoassets managed under the same wallet, such as equity or private funds.

### Off-chain data collection issues

In addition to on-chain data, we used all data sources currently available for VASPs in Austria: balance sheet data from the commercial register and data from the supervisory entities.

**Long reporting periods.** Balance sheets are only published yearly, and asset holdings might differ before and after the exact reporting due date. Thus, the balance sheet statements of VASPs are only partially suitable for assessing their solvency.

**Missing breakdown by cryptoasset type.** Nevertheless, it is important to outline the type of data, and a good reporting practice for such data, that would improve the transparency of virtual-asset-providing companies. Not all firms report balance-sheet items for crypto and fiat asset holdings separately. In the data comparison in Section 4, we sometimes needed to use proxies that overestimate the actual cryptoasset holdings of a VASP, primarily due to the aggregation of multiple items within the same balance sheet entry. It is, therefore, essential that VASPs report their fiat and crypto asset and liability positions, at a reasonable frequency, separately from other activities within a company's holding structure.

**Subsidiary companies and different jurisdictions.** VASPs may be subsidiaries of larger corporations. For VASP-9, we could not precisely determine the proportion of assets attributable to the subsidiary we examined. Moreover, many companies operate in several countries and fall under multiple jurisdictions, which adds another layer of complexity.

### Limitations of our approach

Other limitations of our approach stand out. First, data are extracted from the two major DLTs, Bitcoin and Ethereum, and cover only a limited number of tokens supported by the latter. While these are the most relevant in terms of market capitalization, including other DLTs and Ethereum tokens would be a straightforward improvement. Second, we gather Ethereum data by querying the account balances. Thus, we do not reconstruct balances from transactions, and we repeat the procedure at an interval of 10,000 blocks. We favor the approach based on querying the account states as it facilitates reproducibility, at the cost of a lower granularity. We also note that this time interval can easily be changed to a shorter one. Third, our current approach is limited to year-end 2021, but the analysis can potentially be extended to subsequent years.

### Towards a systematic assessment of proof of solvency

Having discussed the data issues and limitations of our approach, we would like to sketch out our vision for a more systematic, reliable, and highly automated assessment of proof of solvency.

**Assessing proof of solvency today.** Fiat assets and liabilities are held at traditional financial intermediaries and undergo audits based on established standards. On the other hand, cryptoassets are held in cryptoasset wallets, scattered across various, potentially privacy-preserving DLTs, and are not subject to systematic and consistent audits. By measuring the cryptoassets held by one VASP, we can validate the amounts reported in the balance sheets.
Given that the difference between assets and liabilities on a balance sheet equals equity, our method offers an initial, systematic validation and proof of solvency. However, balance sheets currently disclose crypto and fiat deposits from customers under one balance sheet position. Thus, we cannot answer whether the VASPs retain the customer funds in crypto or convert them to fiat (or vice versa).

**Improving on-chain data reporting.** Regarding on-chain data, we note that determining the solvency of VASPs is unfeasible without knowledge of the crypto addresses they control. Hence, any auditing entity must be aware of the on-chain cryptoasset holdings a particular VASP manages. Furthermore, sharing a list of on-chain wallet addresses alone is insufficient: in a system with weak identities, anyone could hold the corresponding private keys and control the associated funds. VASPs need to prove that they also control the funds they hold in custody for their users. Revealing a list of on-chain wallet addresses and transferring funds proves that a VASP possesses and manages specific funds. However, this approach can create privacy, security, and operational efficiency concerns. One way to mitigate these issues is to share this information only with trusted entities such as certified auditors or regulatory authorities. Furthermore, this approach would not disclose any information on actual user deposits. Finally, in addition to disclosing their on-chain wallets, VASPs should provide additional metadata describing the use of these wallets. Most importantly, they should differentiate between hot and cold wallets and between customer and non-customer (corporate) wallets. For hot wallets, they could also distinguish between deposit and withdrawal wallets and specify whether they are used per customer or across customers. In addition to the amounts contained therein, it would also be important for auditors to know what digital and physical security measures are taken to prevent cold wallets from being compromised.

**Improving off-chain data reporting.** On the off-chain side, reporting requirements for a VASP should include a breakdown of asset holdings differentiating between fiat and crypto holdings. Such a breakdown is necessary for items on the asset side but also for items on the liability side. A step towards even more granularity is to differentiate the crypto items according to major cryptoassets and to provide wallet information on the storage of cryptoasset holdings and liabilities. To understand the implications of VASPs for financial stability, frequent and detailed reports on who the counterparties of VASPs are (private customers, companies, other VASPs, ...) and on how and where crypto assets are stored are necessary.

**Enhancing VASP solvency assessment.** One possible strategy to improve the assessment process is to use cryptographic primitives. The academic literature has already proposed cryptographically secure proof-of-concept implementations for proving the solvency of cryptoasset exchanges. Decker et al. (2015), in particular, proposed an audit process in a trusted computing environment that exploits digital signatures on the associated addresses for proving reserves. Merkle trees, instead, are used to prove the total size of user deposits without directly leaking user-specific information. This technique has already been implemented by several centralized exchanges (e.g., Binance14).
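To illustrate the idea, the following is a minimal sketch of such a commitment: a plain Merkle tree over hashed (user, balance) leaves. This is simplified relative to production proof-of-deposit schemes, and all names and values are illustrative:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(user_id: str, balance: int) -> bytes:
    # Each leaf commits to a user identifier and its balance; real deployments
    # salt/hash the identifier so that leaves do not reveal who the customer is.
    return H(f"{user_id}:{balance}".encode())

def merkle_root(leaves: list[bytes]) -> bytes:
    nodes = leaves[:]
    while len(nodes) > 1:
        if len(nodes) % 2:  # duplicate the last node on odd-sized levels
            nodes.append(nodes[-1])
        nodes = [H(nodes[i] + nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# Hypothetical deposits: the published root commits to all user balances, and
# each customer can verify that their own leaf is included in the tree.
users = [("alice", 100), ("bob", 250), ("carol", 40)]
root = merkle_root([leaf(u, b) for u, b in users])
print(root.hex())
```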
However, the Merkle-tree method has two flaws: first, an attacker that controls many accounts could still potentially learn a significant amount about the exchange's users; second, Merkle trees could allow an exchange that has more customer deposit assets than reserves to make up the difference by adding fake accounts with negative balances. To improve the privacy and robustness of that approach, Buterin (2022) recently proposed using ZK-SNARKs to prove that all balances in the tree are non-negative.

Footnote 14: [https://www.binance.com/en/proof-of-reserves](https://www.binance.com/en/proof-of-reserves)

A more forward-thinking strategy goes in the direction of automation. Given access to both on- and off-chain data with specific detail and granularity, the entire audit process could be streamlined and performed more systematically, frequently, and reliably than current methods allow. In line with this perspective, Auer (2019) introduced the concept of "embedded supervision", enabling automated monitoring of decentralized finance (DeFi) services to ensure compliance with regulatory objectives. Buterin et al. (2023) studied an automated privacy-enhancing protocol that utilizes smart contracts and ZK-SNARKs to prove that users' assets were received from lawful sources. Additionally, Eichengreen et al. (2023) suggest that real-time audits carried out by independent proof-of-reserve systems and facilitated by smart contracts could effectively mitigate the threat of stablecoin devaluation.

In conclusion, it is noteworthy that, according to Article 29 (1) of the Austrian AML-Act, the FMA already possesses the authority and legal mandate to request essential data from all obliged entities (i.e., VASPs) at any time on all issues that are addressed in the Austrian AML-Act and Regulation (EU) 2015/847, e.g., a list of cryptoasset addresses under their control15.

Footnote 15: [https://www.ris.bka.gv.at/eli/bgbl/i/2016/118/P29/N0R40189690](https://www.ris.bka.gv.at/eli/bgbl/i/2016/118/P29/N0R40189690), [https://eur-lex.europa.eu/eli/reg/2015/847/oj](https://eur-lex.europa.eu/eli/reg/2015/847/oj)

## 6 Conclusions

In this work, we investigate the 24 VASPs registered with the Austrian Financial Market Authority (FMA) at the end of 2022. We aim to provide an empirical approach to assess their solvency status by measuring their cryptoasset holdings across time and distributed ledgers. To do so, we cross-reference data from three distinct sources: publicly auditable cryptoasset wallets, balance sheet data from the commercial register, and information from supervisory entities.

We begin by describing the financial services they offer and the virtual assets they support, and by comparing them to conventional financial intermediaries. Their core financial activity can be compared to that of money exchanges, brokers, and funds, rather than commercial banks. Furthermore, we provide regulatory data insights showing that their yearly incoming and outgoing transaction volumes in 2022 amounted to roughly EUR 2 billion and EUR 2.8 billion, respectively, for around 1.8 million users. Next, we implement address clustering algorithms and entity identification techniques to reconstruct their cryptoasset flows on the Bitcoin and Ethereum blockchains and compare their net positions to balance sheet data from the commercial register. We focus on four VASPs for which we could gather information both on their cryptoasset transactions and on their balance sheets. These four entities cover around 99% of the Austrian VASP market measured in total assets.
With our approach, we find proof for two of the four VASPs that they control enough assets to fulfill their liabilities and obligations toward customers, i.e., that they meet the capital requirements; for the remaining two, we could not collect enough data. We then discuss the data-collection-related issues and suggest solutions for better assessing a VASP's solvency. In particular, we remark that any entity in charge of auditing requires proof that a VASP actually controls the funds associated with its on-chain wallets. It is also important that a VASP reports its fiat and crypto asset and liability positions, broken down by asset type, at a reasonable frequency. In conclusion, our approach highlights the need to address the identified data gaps in the current data collection process and provides a starting point for developing more effective strategies to systematically assess the solvency status of virtual asset service providers.

## Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2309.08622
Representation Learning in Low-rank Slate-based Recommender Systems
Reinforcement learning (RL) in recommendation systems offers the potential to optimize recommendations for long-term user engagement. However, the environment often involves large state and action spaces, which makes it hard to efficiently learn and explore. In this work, we propose a sample-efficient representation learning algorithm, using the standard slate recommendation setup, to treat this as an online RL problem with low-rank Markov decision processes (MDPs). We also construct the recommender simulation environment with the proposed setup and sampling method.
Yijia Dai, Wen Sun
2023-09-10T21:40:51Z
http://arxiv.org/abs/2309.08622v2
# Representation Learning in Low-rank Slate-based Recommender Systems

###### Abstract

Reinforcement learning (RL) in recommendation systems offers the potential to optimize recommendations for long-term user engagement. However, the environment often involves large state and action spaces, which makes it hard to efficiently learn and explore. In this work, we propose a sample-efficient representation learning algorithm, using the standard slate recommendation setup, to treat this as an online RL problem with low-rank Markov decision processes (MDPs). We also construct the recommender simulation environment with the proposed setup and sampling method.

## 1 Introduction

Recommender systems aim to find personalized content based on learned user preferences, as in collaborative filtering (Breese et al., 2013; Konstan et al., 1997; Srebro et al., 2004; Mnih and Salakhutdinov, 2007) and content-based filtering (Van Meteren and Van Someren, 2000). A good recommender increases user engagement and keeps the user interacting within the system. Popular platforms such as YouTube (Covington et al., 2016), Spotify (Jacobson et al., 2016), and Netflix (Gomez-Uribe and Hunt, 2015) make extensive use of recommender systems. However, as a user interacts with the system over a longer term, it becomes necessary to keep track of the user dynamics. Traditional methods focus more on myopic predictions, estimating the users' immediate responses. Recent research increasingly casts the problem as a Markov decision process (Rendle et al., 2010; He and McAuley, 2016) and performs long-term planning using reinforcement learning algorithms (Shani et al., 2005; Gauci et al., 2018). However, using RL methods in recommender systems raises the issue of large observation and action spaces, which makes efficient exploration a harder question. Prior RL methods in recommender systems often overlook exploration, or use \(\epsilon\)-greedy and Boltzmann exploration (Afsar et al., 2022).

In this work, we perform representation learning under the low-rank MDP assumption. More specifically, we focus on the case where a user can be summarized, from a history of observations, into a low-dimensional representation, and the user dynamics can be modeled as transitions of such representations. Concretely, a low-rank MDP assumes that the MDP transition matrix admits a low-rank factorization, i.e., there exist two unknown mappings \(\mu(s^{\prime})\), \(\phi(s,a)\), such that \(P(s^{\prime}|s,a)=\mu(s^{\prime})^{\top}\phi(s,a)\) for all \(s,a,s^{\prime}\), where \(P(s^{\prime}|s,a)\) is the probability of transiting to the next state \(s^{\prime}\) under the current state and action \((s,a)\). The representation \(\phi(s,a)\) in a low-rank MDP not only linearizes the optimal state-action value function of the MDP (Jin et al., 2020), but also linearizes the transition operator. Such a low-rankness assumption has been shown to be realistic in movie recommender systems (Koren et al., 2009).

Our main contribution is using upper confidence bound (UCB) driven representation learning to efficiently explore in the user representation space. It is a practical extension of Rep-UCB, which provides a theoretical guarantee on efficient exploration under low-rank MDPs with sample complexity \(O(d^{4}|\mathcal{A}|^{2}/(\epsilon^{2}(1-\gamma)^{5}))\) (Uehara et al., 2021). We focus on the case where the action is a slate recommendation.
Under a mild assumption on user choice behavior, which we elaborate in Section 2, the combinatorial action space of size \(O(|\mathcal{A}|)\) can be reduced to \(O(k|\mathcal{I}|)\), where \(k\) is the slate size and \(\mathcal{I}\) is the item space.

To evaluate our method systematically, we introduce a recommender simulation environment, _RecSim NG_, that allows the straightforward configuration of an item collection (or vocabulary), a user (latent) state model, and a user choice model (Mladenov et al., 2021). We describe specific instantiations of this environment suitable for user representation learning, and the construction of our Rep-UCB-Rec learning and optimization methods.

### Related Works

**Recommender Systems.** Recommender systems have relied on collaborative filtering techniques to learn the connection between users and items. Conceptually, they cluster users and items, or embed users and items in a low-dimensional representation (Krestel et al., 2009; Moshfeghi et al., 2011) for further predictions. For the sake of capturing more nuanced user behaviors, deep neural networks (DNNs) are used in real-world applications (Van den Oord et al., 2013; Covington et al., 2016). The problem is naturally studied as an RL problem, as the user dynamics can be modeled as MDPs. Embeddings are commonly used to learn latent representations from immediate user interactions (Liu et al., 2020), and predictive models are commonly used to improve sample efficiency (Chen et al., 2021) and to do self-supervised RL (Xin et al., 2020; Zhou et al., 2020).

**Low-rank MDPs.** Oracle-efficient algorithms for low-rank MDPs (Agarwal et al., 2020; Uehara et al., 2021) provide sample complexity bounds easing the difficulty of exploration in large state spaces. Under more restricted settings, such as block MDPs (Misra et al., 2020) and \(m\)-step decodable MDPs (Efroni et al., 2022), methods have also been studied to deal with the curse of dimensionality.

**Slate-based recommendation and choice models.** Slate recommendation is common in recommender systems (Deshpande and Karypis, 2004; Viappiani and Boutilier, 2010; Ie et al., 2019). Within this context, the complexity within a constructed slate is studied using methods like off-policy evaluation and learning inverse propensity scores (Swaminathan et al., 2017). Hierarchical models are also used for studying user behavior when interacting with slates (Mehrotra et al., 2019). The user choice model is linked with the slate recommendation. A common choice model is the multinomial logit model (Louviere et al., 2000). As a good representation of user choice boosts the probability of capturing real-world user behaviors, many areas, such as econometrics, psychology, and operations research (Luce, 2012), have studied it using their own scientific methods. Within the ML community, another popular choice is the cascade model (Joachims, 2002), as it also captures the fading attention introduced by browsing behavior.

## 2 Preliminaries

We consider an episodic MDP \(\mathcal{M}=\langle\mathcal{S},\mathcal{A},P,r,\gamma,d_{0}\rangle\) for slate-based recommendations, in which a recommender presents a slate to a user, the user selects zero or one item to consume, and the user then responds to the consumed item with an engagement measure. This setup is commonly used and easily extensible to real-world applications (Ie et al., 2019).
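As a concrete illustration of the select-zero-or-one-item step, the following is a minimal sketch of a multinomial logit choice over a slate, with a null (no-click) item fixed at utility 0; the scores, function name, and the zero-utility convention are illustrative assumptions, not part of the paper's setup:

```python
import numpy as np

def mnl_choice(slate_scores: np.ndarray, rng: np.random.Generator) -> int:
    """Multinomial-logit choice over a slate plus a null (no-click) item.

    Returns the index of the chosen slate item, or -1 for the null item.
    """
    logits = np.append(slate_scores, 0.0)   # fixed utility 0 for the null item
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    idx = rng.choice(len(probs), p=probs)
    return -1 if idx == len(probs) - 1 else int(idx)

rng = np.random.default_rng(0)
print(mnl_choice(np.array([1.2, 0.3, -0.5]), rng))  # hypothetical user affinities
```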
Under our setup, the states \(\mathcal{S}\) can reflect the ground-truth user states \(\mathcal{U}\), which include static user features (such as demographics) as well as dynamic user features (such as moods). In particular, the user history of interactions with past recommendations plays a key role. This history summarization is usually domain-specific and can capture the user latent state in a partially observable MDP. The state should be predictive of immediate user responses (e.g., immediate engagement, hence reward) and self-predictive (i.e., it summarizes the user history in a way that renders the implied dynamics Markovian).

The action space \(\mathcal{A}\) is the set of all possible recommendation slates. We assume a fixed set of items \(\mathcal{I}\) to recommend. Then, an action \(a\in\mathcal{A}\) is a subset \(a\subseteq\mathcal{I}\), and \(|\mathcal{A}|=\binom{|\mathcal{I}|}{k}\), where \(k\) is the slate size. We assume no constraints, so that each item \(i\in\mathcal{I}\) and each slate \(a\) can be recommended at each state \(s\). Note that we do not account for positional bias within a slate in this work. However, we do note that the effects of ordering within one slate can be learned using offline methods (Schnabel et al., 2016). Because a user may select no item from a slate, we assume that every slate includes a \((k+1)\)-th null item. This is standard in most choice modeling work for specifying the user behavior induced by a choice from the slate.

The transition \(P(s^{\prime}|s,a)\) represents the probability of the user transitioning to \(s^{\prime}\) from \(s\) when action \(a\) is taken by the recommender. The uncertainty mainly reflects two aspects of a recommender system MDP. First, it indicates how a user will consume a particular recommended item \(i\in a\) from the slate, marking the critical role that choice models play in evaluating the quality of a slate. Second, the user state transitions dynamically based on the consumed item. Since the ground truth \(P^{\star}\) is unknown, we need to learn it by interacting with environments in an online manner or by utilizing offline data at hand.

The reward \(r(s,a)\) usually measures user engagement under state \(s\) when recommended with slate \(a\). Note that an expectation is more often used to account for the uncertainty introduced by user choice. Without loss of generality, we assume the trajectory reward is normalized, i.e., for any trajectory \(\{s_{h},a_{h}\}_{h=0}^{\infty}\), we have \(\sum_{h=0}^{\infty}\gamma^{h}r(s_{h},a_{h})\in[0,1]\). We assume that \(r(s,a)\) is known. This assumption largely relies on the success of existing myopic, item-level recommenders (Covington et al., 2016). The discount factor \(\gamma\in[0,1)\) and the initial distribution \(d_{0}\in\Delta(\mathcal{S})\) are also known.

Figure 1: _A latent state model captured by a low-rank MDP. The states represent the logged user history, including the responses to the recommendations. \(\phi^{\star}(s,a)\) is a distribution over the latent user representation space \(\mathcal{U}\). Note that this is still a Markovian model since there is no direct transition between latent states._

Our goal is to learn a policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) which maps from states to distributions over actions (i.e., recommended slates). We use the following notations.
Under some probability transition \(P\), we define the value function \(V_{P}^{\pi}(s)=\mathbb{E}[\sum_{h=0}^{\infty}\gamma^{h}r(s_{h},a_{h})\,|\,s_{0}=s,P,\pi]\) to represent the expected total discounted reward of \(\pi\) under \(P\) starting at \(s\). Similarly, we define the state-action \(Q\) function \(Q_{P}^{\pi}(s,a)=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim P(\cdot|s,a)}V_{P}^{\pi}(s^{\prime})\). The expected total discounted reward of a policy \(\pi\) under transition \(P\) and reward \(r\) is then denoted by \(V_{P,r}^{\pi}=\mathbb{E}_{s_{0}\sim d_{0}}V_{P}^{\pi}(s_{0})\). We define the state-action discounted occupancy distribution \(d_{P}^{\pi}(s,a)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}d_{P,t}^{\pi}(s,a)\), where \(d_{P,t}^{\pi}(s,a)\) is the probability of visiting \((s,a)\) at time step \(t\) under \(P\) and \(\pi\), and the state visitation \(d_{P}^{\pi}(s)=\sum_{a\in\mathcal{A}}d_{P}^{\pi}(s,a)\). Finally, given a vector \(a\), \(\|a\|_{2}=\sqrt{a^{\top}a}\), \(\|a\|_{B}=\sqrt{a^{\top}Ba}\), and \(\{c_{i}\}\) with \(i\in\mathbb{N}\) are constants.

We focus on low-rank MDPs, defined as follows, with normalized function classes.

**Definition 2.1**.: (Low-rank MDP) A transition model \(P:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) admits a low-rank decomposition with rank \(d\in\mathbb{N}\) if there exist two embedding functions \(\phi\) and \(\mu\) such that

\[\forall s,s^{\prime}\in\mathcal{S},a\in\mathcal{A}:\quad P(s^{\prime}|s,a)=\mu(s^{\prime})^{\top}\phi(s,a),\]

where \(\|\phi(s,a)\|_{2}\leq 1\) for all \((s,a)\) and, for any function \(f:\mathcal{S}\rightarrow[0,1]\), \(\|\int\mu(s)f(s)\,ds\|_{2}\leq\sqrt{d}\). An MDP is low-rank if \(P\) admits a low-rank decomposition.

Within the context of recommender systems, low-rank MDPs capture the latent user representation dynamics, as shown in Figure 1. As states are observed and follow Markovian transitions, they mainly contain information on the history of user interactions with the recommender. Note that the states are likely to have overlapping information under this definition, and this simplifies the model class of \(\mu\), as we discuss later in this section. The actions are the slates presented to the user at each time step. Thus, from a logged user history and a current slate recommendation, \(\phi^{\star}\) maps \((s,a)\) to the latent user representation. In reality, the latent space \(\mathcal{U}\) would be a compact representation space that contains the most important information impacting user decisions. Learning and interpreting such a representation would be meaningful in real-life applications.

In episodic online learning, the goal is to learn a stationary policy \(\hat{\pi}\) that maximizes \(V_{P^{\star},r}^{\hat{\pi}}\), where \(P^{\star}\) is the ground-truth transition. We can only reset at the initial distribution \(d_{0}\), which emphasizes the restricted nature of recommender systems and the challenge for exploration. The sampling of a state \(s\) from the visitation \(d_{P}^{\pi}\) is done by following the _roll-in_ procedure: from a starting state \(s_{0}\sim d_{0}\), at every time step \(t\), with probability \(1-\gamma\) we terminate; otherwise we execute \(a_{t}\sim\pi(s_{t})\) and transit to \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\).

**Function approximation for representation learning.** Since \(\phi^{\star}\) and \(\mu^{\star}\) are unknown, we use function classes to capture them. These model classes can be learned using existing myopic methods for user behavior prediction.
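As a small numerical illustration of Definition 2.1 (a toy construction with made-up sizes, not part of the paper's algorithm), one can build a valid low-rank transition model by placing each \(\phi(s,a)\) and each column of \(\mu\) on a probability simplex, which also satisfies the norm bounds above:

```python
import numpy as np

d, S, A = 8, 100, 20  # rank, |S|, |A| (toy sizes)
rng = np.random.default_rng(0)

# phi(s, a): a point in the d-simplex for every state-action pair,
# so ||phi(s, a)||_2 <= ||phi(s, a)||_1 = 1.
phi = rng.dirichlet(np.ones(d), size=(S, A))  # shape (S, A, d)

# mu: each of the d columns is a distribution over next states s',
# which gives ||∫ mu(s) f(s) ds||_2 <= sqrt(d) for f in [0, 1].
mu = rng.dirichlet(np.ones(S), size=d).T      # shape (S, d)

# P(s' | s, a) = mu(s')^T phi(s, a); every (s, a) row is a valid distribution.
P = np.einsum("sj,xaj->xas", mu, phi)         # shape (S, A, S)
assert np.allclose(P.sum(axis=-1), 1.0)
```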
**Function approximation for representation learning.** Since \(\phi^{\star}\) and \(\mu^{\star}\) are unknown, we use function classes to capture them. These model classes can be learned using existing myopic methods for user behavior prediction. **Assumption 2.2**.: (Realizability) We assume access to model classes with \(\phi^{\star}\in\Phi\) and \(\mu^{\star}\in\Psi\). We assume the function approximators follow the same normalization as \(\phi^{\star}\) and \(\mu^{\star}\), i.e., for any \(\phi\in\Phi\) and \(\mu\in\Psi\), \(\|\phi(s,a)\|_{2}\leq 1\) for all \((s,a)\), \(\|\int\mu(s)f(s)ds\|_{2}\leq\sqrt{d}\) for all \(f:\mathcal{S}\rightarrow[0,1]\), and \(\int\mu^{\top}(s^{\prime})\phi(s,a)ds^{\prime}=1\) for all \((s,a)\). We learn the functions via a supervised learning oracle, which should be computationally efficient for the task. **Definition 2.3**.: (Maximum Likelihood Estimator) The MLE oracle takes a dataset of \((s,a,s^{\prime})\) tuples and returns \[\hat{P}:=(\hat{\mu},\hat{\phi})=\arg\max_{(\mu,\phi)\in\mathcal{M}}\mathbb{E}_{\mathcal{D}_{n}+\mathcal{D}_{n}^{\prime}}[\ln\mu^{\top}(s^{\prime})\phi(s,a)].\] The assumption above is achievable by myopic estimators of user behavior models in the recommendation context. **User choice model.** We introduce an assumption developed to reasonably structure user behavior, leading to an effective reduction of the action space (Ie et al., 2019). **Assumption 2.4**.: (Reward / transition dependence on selection) We assume that \(r(s,a)\) and \(P(s^{\prime}|s,a)\) depend only on the item \(i\in a\) on the slate that is consumed by the user (including the null item). The original action space under our MDP has size \(\binom{|\mathcal{I}|}{k}\), with unordered \(k\)-subsets of \(\mathcal{I}\). With a large item space, effective exploration over this set is impossible. Luckily, the nature of human preference and the structure of recommender systems allow a reduction. With the prior assumption on user choice, where the user selects zero (the null item) or one item from the slate, we formalize the properties as follows: \[r(s,a)=\sum_{i\in a}P(i|s,a)\,r(s,a,i),\qquad P(s^{\prime}|s,a)=\sum_{i\in a}P(i|s,a)\,P(s^{\prime}|s,a,i),\] and, for all \(a,a^{\prime}\) containing \(i\), \[r(s,a,i)=r(s,a^{\prime},i)=r(s,i),\qquad P(s^{\prime}|s,a,i)=P(s^{\prime}|s,a^{\prime},i)=P(s^{\prime}|s,i).\] Now, notice that the transition can be written as \(P(s^{\prime}|s,a)=\sum_{i\in a}P(i|s,a)P(s^{\prime}|s,i)\), where \(P(i|s,a)\) represents the user's choice of item \(i\) given a slate, and \(P(s^{\prime}|s,i)\) is the user state transition when \(i\) is consumed. This allows us to redefine the uniform action in Section 3 and obtain a complexity bound independent of the combinatorial action space. Similar to the reward \(r(s,a)\), we assume the user choice given an \((s,a)\)-pair is known, relying on the success of existing myopic, item-level recommenders (Covington et al., 2016). Thus, the low-rank MDP realizability assumption here is used to describe \(P(s^{\prime}|s,i)\). ## 3 Main results We now define the uniform action \(U(\mathcal{A})\) within the above setup. Recall that the uniform action is used to encourage user transitions to novel states. As the user transition depends only on the consumed item \(i\), we use the following definition for sufficient exploration. **Definition 3.1**.: The uniform action for a slate recommendation, \(U(\mathcal{A})\), is defined as follows: 1. randomly pick an item \(i\) from \(\mathcal{I}\); 2. fill the remainder of the slate with the items least likely to be selected by the user. A sketch of this sampler, together with the factored transition, is given below. We further show that under this definition, the effective uniform action space is \(O(k|\mathcal{I}|)\).
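```python
import random

def uniform_slate(items, k, select_prob, s):
    """Definition 3.1: pick one item uniformly from I, then pad the slate with
    the k - 1 items the user is least likely to select in state s.
    (select_prob is an assumed callable standing in for the learned user model.)"""
    i = random.choice(items)
    rest = sorted((j for j in items if j != i), key=lambda j: select_prob(s, j))
    return frozenset([i] + rest[:k - 1])

def slate_transition(s, slate, choice_prob, item_transition):
    """Simulate the factored dynamics of Assumption 2.4,
    P(s'|s,a) = sum_{i in a} P(i|s,a) P(s'|s,i): first sample the user's choice
    from the slate (possibly the null item), then the item-level transition."""
    items = list(slate)
    weights = [choice_prob(s, slate, i) for i in items]
    i = random.choices(items, weights=weights, k=1)[0]
    return item_transition(s, i)
```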
**Lemma 3.2**.: _The user chooses the randomly picked item \(i\) with probability at least \(1/k\)._ We use basic properties of probability and the pigeonhole principle to arrive at the following proposition. **Proposition 3.3**.: _An effective exploration action space of size \(O(k|\mathcal{I}|)\) is achieved._ Proof.: By the product rule of probability, the user selects an item \(i\) with probability at least \(1/(k|\mathcal{I}|)\) under the uniform action \(U(\mathcal{A})\) as defined above. By the pigeonhole principle, every \(k|\mathcal{I}|+1\) uniform actions lead to at least one duplicate action in this space. Note that the above change in the definition of the uniform-action distribution is introduced by the asymmetry of uniform distributions: by the principle of indifference, we place the uniformity objective on user state transitions instead of on the naive action space (White, 2010). ### Algorithm The algorithm is based on Rep-UCB (Uehara et al., 2021), with definitions of the state space \(\mathcal{S}\), the action space \(\mathcal{A}\), the data collection process, and the uniform action \(U(\mathcal{A})\) specialized to the recommender-system context. A state \(s\) consists of static user features and a history of past recommendations and user responses. An action \(a\) provides a slate recommendation. During the data collection process, at every iteration we do one rollout under the current policy \(\pi\). We assume the states sampled from the initial distribution \(d_{0}\) follow the prior Assumption 2.2 on model classes. The sampling procedure for \(s\sim d_{P^{*}}^{\pi_{n-1}}\) begins with \(s_{0}\sim d_{0}\). At every time step \(t\), we terminate with probability \(1-\gamma\), or execute \(a_{t}\sim\pi(s_{t})\) and observe \(s_{t+1}\sim P^{*}(\cdot|s_{t},a_{t})\). We then sample the uniform action \(U(\mathcal{A})\) of Definition 3.1. This means the recommender makes recommendations based on planning until termination, then makes two uniform recommendations at the end of the data collection process. We collect the tuple \((s,a,s^{\prime},a^{\prime},\tilde{s})\) and update the datasets. Representation learning, building the empirical covariance matrix, and calculating the bonus are done sequentially after the dataset updates. The final step within the episode is planning, using the learned estimate of the transition with the added exploration bonus. ### Analysis The PAC bound for Rep-UCB (Uehara et al., 2021) provides a good starting point. Our analysis focuses on reducing the naive combinatorial action space \(|\mathcal{A}|\) to be polynomial in the slate size \(k\) and the item space size \(|\mathcal{I}|\). **Theorem 3.4**.: _(PAC Bound for Rep-UCB-Rec) Fix \(\delta\in(0,1)\), \(\epsilon\in(0,1)\). Let \(\hat{\pi}\) be a uniform mixture of \(\pi_{1},...,\pi_{N}\) and \(\pi^{\star}:=\arg\max_{\pi}V_{P^{*},r}^{\pi}\) be the optimal policy._
Set the parameters as follows:_ \[\alpha_{n}=O\left(\sqrt{(k|\mathcal{I}|+d^{2})\gamma\ln(|\mathcal{M}|n/\delta)}\right),\] \[\lambda_{n}=O(d\ln(|\mathcal{M}|n/\delta)),\] _with probability at least \(1-\delta\), we have_ \[V_{P^{*},r}^{\pi^{*}}-V_{P^{*},r}^{\hat{\pi}}\leq\epsilon.\] _The number of collected samples is at most_ \[O\left(\frac{d^{4}k^{2}|\mathcal{I}|^{2}\ln(|\mathcal{M}|/\delta)^{2}}{(1-\gamma)^{5}\epsilon^{2}}\cdot\nu\right)\] _where_ \[\nu:=O\left(\ln\left(\frac{\iota}{\delta}\ln^{2}{(1+\iota)}\right)\cdot\ln^{2}{(1+\iota)}\right)\] _and_ \[\iota:=\frac{d^{4}k^{2}|\mathcal{I}|^{2}\ln(|\mathcal{M}|/\delta)^{2}}{(1-\gamma)^{5}\epsilon^{2}}.\] Proof.: The upper-bound dependency on \(|\mathcal{A}|\) comes from the importance weighting of the policy \(\pi\) against the uniform action, where \(\max_{(s,a)}\frac{\pi(a|s)}{u(a)}\leq|\mathcal{A}|\). Under our definition of \(U(\mathcal{A})\) in the slate recommendation context, this upper bound naturally becomes \(k|\mathcal{I}|\). The dependency is then interchangeable throughout the original proof. ## 4 Simulations We discuss the simulation environment setup and the algorithm construction in this section. The simulation uses _Recsim NG_ (Mladenov et al., 2021). A graph illustration is shown in Appendix A. ### Simulation environment **Item class.** A static set of items is sampled at the beginning of the simulation. Each item is represented by a \(T\)-dimensional vector \(\mathbf{i}\in[-1,1]^{T}\), where each dimension represents a topic. Each item has a length \(l(\mathbf{i})\in[0,1]\) (e.g., the length of a video, a music track, or an article) and a quality \(q(\mathbf{i})\in[-1,1]\) that is unobserved by the user. **User interest.** Users \(\mathbf{u}\in U\) have varying degrees of interest in each topic. Each user is represented by an interest vector \(\mathbf{u}\in[-1,1]^{T}\). The user's interest in a given item is the inner product \(\mathbf{i}^{\top}\mathbf{u}\). The user interest vector \(\mathbf{u}\) and the mechanism of user interest in items are unobserved by the recommender. **User choice.** Given a slate of \(k\) items, a user chooses to consume one item from the slate with \[P(\mathbf{i}|a,\mathbf{u})=\frac{e^{\mathbf{i}^{\top}\mathbf{u}}}{\sum_{\mathbf{j}\in a}e^{\mathbf{j}^{\top}\mathbf{u}}}.\] This choice model is the multinomial logit model (Louviere et al., 2000). The null item (no choice) is simply represented by a \(T\)-dimensional zero vector with zero length and quality. **User dynamics.** The internal transition of the user interest vector allows the environment to capture Markovian transitions and allows RL methods to do meaningful planning. After the consumption of item \(\mathbf{i}\) at time step \(t\), a user \(\mathbf{u}\) follows \[\mathbf{u}_{t}=c_{0}\mathbf{u}_{t-1}+c_{1}q(\mathbf{i})(\mathbf{i}-\mathbf{u}_{t-1})+\epsilon\] where \(\epsilon\sim\mathcal{N}(0,c_{3})\). The constants are for normalization. Under this update, the user transitions to favor the topics of \(\mathbf{i}\) more if the quality of \(\mathbf{i}\) is positive. **Reward.** The reward is reflected by the user's consumption time of the chosen item \(\mathbf{i}\). It is linear in the length of \(\mathbf{i}\) and the user's interest in it.
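A minimal sketch of this environment (the constants, the topic count, and the clipping of interests back to \([-1,1]\) are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8  # number of topics (illustrative value)

def choose(u, slate):
    """Multinomial logit choice over the k slate items plus a null item."""
    cand = np.vstack([slate, np.zeros(T)])  # append the null (zero) item
    logits = cand @ u                       # i^T u for each candidate
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(cand), p=p)       # index k means "no choice"

def update_interest(u, item, quality, c0=0.9, c1=0.1, c3=0.01):
    """u_t = c0 * u_{t-1} + c1 * q(i) * (i - u_{t-1}) + noise, kept in [-1, 1]."""
    noise = rng.normal(0.0, c3, size=T)
    return np.clip(c0 * u + c1 * quality * (item - u) + noise, -1.0, 1.0)
```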
### Algorithm construction The state is a history of length \(h\) of the user's interactions with the recommender. The history contains the slate recommendations and user responses at each time step \(t-h,...,t-1\). We construct the sampling procedure introduced by Rep-UCB-Rec. At each episode, the recommender follows the current learned policy and observes user responses. Once the roll-in procedure terminates, the recommender takes two further uniform recommendation steps. The supervised learning for representation is done in an offline manner, where we combine the two model classes and train them together to make predictions from \((s,a)\) to \(s^{\prime}\). Note that, by the definition of states, we only predict the user's next response to the current action and slide the history time window forward by one step. This eases the complexity induced by the large state space to \(O(k)\) and reflects the special structure behind recommender systems. After calculating the empirical covariance matrix and the exploration bonus, we utilize a customized simulator where the transition and reward are estimated. Standard policy gradient methods are used to compute the updated policy within this simulated environment under \(\hat{P}\) and \((r+\hat{b})\). ## 5 Conclusion In this work, we propose a sample-efficient representation learning algorithm that treats the standard slate recommendation setup as an online RL problem with low-rank Markov decision processes (MDPs). We show that the sample complexity for learning a near-optimal policy is \(O(d^{4}k^{2}|\mathcal{I}|^{2}/((1-\gamma)^{5}\epsilon^{2}))\), where \(k\) is the slate size and \(\mathcal{I}\) is the item space. We further show the detailed construction of a recommender simulation environment with the proposed setup and sampling method.
2309.04018
Time-Symmetric Resolutions of the Renninger Negative-Result Paradoxes
The 1953 and 1960 Renninger negative-result thought experiments illustrate conceptual paradoxes in the Copenhagen formulation of quantum mechanics. In the 1953 paradox we can infer the presence of a detector in one arm of a Mach-Zehnder interferometer without any particle interacting with the detector. In the 1960 paradox we can infer the collapse of a wavefunction without any change in the state of a detector. I resolve both of these paradoxes by using a time-symmetric formulation of quantum mechanics. I also describe a real experiment that can distinguish between the Copenhagen and time-symmetric formulations.
Michael B. Heaney
2023-09-07T21:09:29Z
http://arxiv.org/abs/2309.04018v1
# Time-symmetric resolutions of the Renninger Negative-Result Paradoxes ###### Abstract. The 1953 and 1960 Renninger negative-result thought experiments illustrate conceptual paradoxes in the Copenhagen formulation of quantum mechanics. In the 1953 paradox we can infer the presence of a detector in one arm of a Mach-Zehnder interferometer without any particle interacting with the detector. In the 1960 paradox we can infer the collapse of a wavefunction without any change in the state of a detector. I resolve both of these paradoxes by using a time-symmetric formulation of quantum mechanics. I also describe a real experiment that can distinguish between the Copenhagen and time-symmetric formulations. ## 1. Introduction One of the five great problems in theoretical physics is to resolve the conceptual paradoxes in the foundations of the Copenhagen formulation of quantum mechanics, either by making sense of the Copenhagen formulation or developing a different formulation that does make sense [1]. One of these paradoxes involves negative-result (or interaction-free) measurements [2]. Such a measurement was proposed by Renninger in 1953 using a Mach-Zehnder interferometer thought experiment [3]. In 1960 Renninger proposed a more striking thought experiment using an isotropic point source and a spherical detector [4]: see Figure 1. The problem is that in the Copenhagen formulation a wavefunction seems to collapse without interacting with the detectors or leaving any observable trace behind. This is paradoxical. There is a controversy about what counts as an interaction in these and similar experiments [5]. It has also been claimed that what counts as an interaction depends on which formulation of quantum mechanics is being used [5, 6]. In the Copenhagen formulation the absorption of a particle would seem to count as an interaction [7]. But what about the collapse to zero of a particle wavefunction upon encountering a detector? What if the spatial wavefunction collapses but the internal state of the particle does not collapse [8]? I will discuss these issues in the context of the time-symmetric formulation below. The structure of the paper is as follows: Section 2 describes the 1960 Renninger negative-result paradox, largely in the words of de Broglie. Section 3 gives a general history and description of time-symmetric formulations of quantum mechanics, and shows how the Copenhagen Born rule is a special case of the time-symmetric transition equation. Section 4 gives a time-symmetric analysis of Renninger's 1960 thought experiment. Section 5 describes the Renninger 1953 negative-result paradox. Section 6 gives a time-symmetric analysis of Renninger's 1953 thought experiment. Section 7 discusses the results and implications of this paper. ## 2. The 1960 Paradox De Broglie explained the 1960 Renninger negative-result paradox and outlined his proposed resolution as follows [9]: In the example which [Renninger] gives, a point source S emits particles isotropically in all directions. A screen \(E_{1}\) in the form of a sector of a sphere centered on S and having a radius \(R_{1}\) is covered on the inside with a substance which indicates the arrival of a particle by a scintillation...Another screen \(E_{2}\), in the form of a complete sphere centered on S and having a radius \(R_{2}>R_{1}\), completely surrounds the screen \(E_{1}\). The second sphere is also covered inside with a phosphor. Suppose now that the screen \(E_{1}\) subtends a solid angle \(\Omega\) at S.
The propagation of the wave emitted by S is restricted by the screen \(E_{1}\), and diffraction phenomena occur at the edges of \(E_{1}\). Notwithstanding the existence of these diffraction phenomena, it is obvious that a particle emitted by the source will have a probability \(P_{1}=\Omega/4\pi\) of producing a scintillation on \(E_{1}\) and a probability \(P_{2}=(4\pi-\Omega)/4\pi\) of producing a scintillation on \(E_{2}\). At the instant of emission by the source of a particle with velocity \(v\), the emission of the associated wave commences at a time \(t=0\) and lasts for a finite time \(\tau\).

Figure 1. The 1960 Renninger negative-result thought experiment. A source S at the center of the sphere \(E_{2}\) emits a single particle whose wavefunction spreads isotropically. The emitted particle is later detected at a localized point either on the inside surface of the sphere sector \(E_{1}\) or on the inside surface of the sphere \(E_{2}\).

The emitted wave \(\psi\) forms a spherical shell whose leading edge reaches the screen \(E_{1}\) in a time \(t_{1}=R_{1}/v\) while the trailing edge reaches the same screen at the time \(t_{1}+\tau\). If, at time \(t_{1}+\tau\) no scintillation is produced on screen \(E_{1}\), we can be certain that the scintillation will be produced on \(E_{2}\). \(P_{1}\) suddenly becomes zero, and \(P_{2}\) becomes equal to 1. Thus, there will be a sharp change in the amplitude of the wave on the two screens and, according to the usual [Copenhagen] theory, we shall have a special case of the reduction of a probability packet. A particularly paradoxical situation will now exist, since the observer sees nothing at all on screen \(E_{1}\), where nothing has happened. In this experiment the reduction of the probability packet is quite incomprehensible. It is, in fact, impossible to accept that this reduction is due to the increase of knowledge of the observer who has observed nothing, nor to a device--here screen \(E_{1}\)--which registered nothing. The situation becomes clearer if we accept that the source emits a particle which remains closely associated with a wave, but which has a definite position at each time and, consequently, a definite trajectory. The trajectory should be closely linked with the propagation of the wave and should be influenced by it. It can be accepted that, at least on average, these trajectories are straight lines starting from S, except for the immediate vicinity of the edges of screen \(E_{1}\), which constitute an obstacle to propagation of the wave and give rise to diffraction, thus producing local modifications of the trajectories. On the whole, it can be said that the number of possible trajectories emanating from S and terminating on \(E_{1}\) is proportional to \(\Omega\) whilst the number of trajectories emanating from S and reaching \(E_{2}\), whether after a rectilinear trajectory or a trajectory which has been disturbed by diffraction at the edges of screen \(E_{1}\), is proportional to \(4\pi-\Omega\). We thus find the probabilities \(P_{1}=\Omega/4\pi\) and \(P_{2}=(4\pi-\Omega)/4\pi\) of the arrival of the particle at either \(E_{1}\) or \(E_{2}\). If no scintillation is produced on \(E_{1}\) in the time \(t_{1}+\tau\), which is the time taken by the [trailing edge of the] wave to reach the sphere of radius \(R_{1}\), then we may be sure that the trajectory followed by the particle is not one of those terminating on \(E_{1}\). There would thus be a sudden change to \(P_{1}=0\) and \(P_{2}=1\).
This sudden change will represent simply a change in our knowledge of the trajectory of the particle. This removes the incomprehensible effect of the mind of the observer on the particle since there is no scintillation on \(E_{1}\). As far as the presence of "measuring devices" is concerned, the screen \(E_{1}\) is simply an obstacle to the propagation of the wave and thus influences the _possible_ trajectories by stopping certain trajectories and giving rise to diffraction. This interpretation is very clear and much more comprehensible than the one based upon a mysterious effect which imposes on the particle the simple _possibility_ that it might become localised on \(E_{1}\). How can we possibly imagine that a possibility which has never become real would have such an effect? De Broglie then goes on to describe the details of his alternative formulation of quantum mechanics, now known as the pilot wave theory. I will instead explain and resolve Renninger's thought experiment using an alternative time-symmetric formulation of quantum mechanics. ## 3. Time-Symmetric Formulations of Quantum Mechanics Time-symmetric explanations of quantum behavior predate the discovery of the Schrodinger equation [10] and have been developed many times over the past century [11]. The time-symmetric formulation used in this paper has been described in detail and compared to other time-symmetric theories before [12, 13, 14, 15]. The key ideas are that _transitions_ between specified quantum states are the basic objects of interest in quantum mechanics, these transitions are fully described by transition amplitude densities, and wavefunction collapse never occurs. These ideas were originally developed by Feynman [16, 17]. In addition, I postulate that particle sources spontaneously emit isotropic retarded waves, particle detectors spontaneously emit isotropic advanced waves, and a transition only occurs when these two types of waves overlap at a source and a detector. These ideas are very similar to ideas that were originally developed by Cramer [18, 19]. The time-symmetric theory used in this paper postulates that the transition of a single free particle is described by the algebraic product of a retarded wavefunction \(\psi(\vec{r},t)\) which satisfies the initial conditions and evolves in time according to the retarded Schrodinger equation \[i\frac{\partial\psi}{\partial t}=-\frac{1}{2}\nabla^{2}\psi, \tag{1}\] and an advanced wavefunction \(\phi^{*}(\vec{r},t)\) which satisfies the final conditions and evolves in time according to the advanced Schrodinger equation \[-i\frac{\partial\phi^{*}}{\partial t}=-\frac{1}{2}\nabla^{2}\phi^{*}, \tag{2}\] where we assume the particle mass \(m=1\) and use natural units where \(\hbar=1\). These two equations are the low energy limits of the relativistic Klein-Gordon equation [13]. The equation of continuity can be obtained as follows. If we multiply Equation 1 on the left by \(\phi^{*}\), multiply Equation 2 on the left by \(\psi\), take the difference of the two resulting equations, and rearrange terms, we get \[\frac{\partial}{\partial t}(\phi^{*}\psi)+\nabla\cdot\left[\frac{1}{2i}\left( \phi^{*}\nabla\psi-\psi\nabla\phi^{*}\right)\right]=0. \tag{3}\] Now we will define \(\rho_{s}(\vec{r},t)\) as \[\rho_{s}\equiv\phi^{*}\psi, \tag{4}\] and define \(\vec{j}_{s}(\vec{r},t)\) as \[\vec{j}_{s}\equiv\frac{1}{2i}\left(\phi^{*}\nabla\psi-\psi\nabla\phi^{*} \right), \tag{5}\] to get a local conservation law \[\frac{\partial\rho_{s}}{\partial t}+\nabla\cdot\vec{j}_{s}=0. 
\tag{6}\] Note that both \(\rho_{s}(\vec{r},t)\) and \(\vec{j}_{s}(\vec{r},t)\) are generally complex functions, and therefore cannot be interpreted as a real probability density and a real probability density current. Instead, we will interpret \(\rho_{s}(\vec{r},t)\) as the amplitude density for a transition, defined as an isolated, individual physical system that starts with maximally specified initial conditions, evolves in space-time, then ends with maximally specified final conditions. The complex function \(\rho_{s}(\vec{r},t)\) is called the transition amplitude density; the same quantity appears under this name in the Copenhagen formulation. We will also interpret \(\vec{j}_{s}(\vec{r},t)\) as the transition amplitude current density. Integrating Equation 6 over all space gives \[\iiint_{-\infty}^{+\infty}\frac{\partial\rho_{s}}{\partial t}dV+\iiint_{-\infty}^{+\infty}\nabla\cdot\vec{j}_{s}dV=0. \tag{7}\] We can use Gauss's theorem to express the second term as \[\iint_{A}\vec{j}_{s}\cdot d\vec{A} \tag{8}\] where \(A\) is a closed surface at \(\vec{r}=\pm\infty\). If we assume the wavefunctions \(\psi(\vec{r},t)\) and \(\phi^{*}(\vec{r},t)\) are normalized and go to \(0\) at \(\vec{r}=\pm\infty\), this term goes to \(0\), leaving \[\frac{d}{dt}\iiint_{-\infty}^{+\infty}\rho_{s}(\vec{r},t)dV=0 \tag{9}\] where we have moved the derivative outside of the integral. We will now define \[A_{s}\equiv\iiint_{-\infty}^{+\infty}\rho_{s}(\vec{r},t)dV, \tag{10}\] where, by Equation 9, \(A_{s}\) is a constant, independent of time. The time-symmetric formulation interprets \(A_{s}\) as the amplitude for an isolated, individual physical system to start with maximally specified initial conditions, evolve in space-time, then end with maximally specified final conditions. The complex number \(A_{s}\) is called the transition amplitude. This predicts that the volume under the curve of the real part of \(\rho_{s}(x,t)\) is conserved, and the volume under the curve of the imaginary part of \(\rho_{s}(x,t)\) is also conserved. The transition probability \(P_{s}\) can then be defined by \[P_{s}\equiv A_{s}^{*}A_{s} \tag{11}\] where \(P_{s}\) is also a constant, independent of time. The time-symmetric formulation interprets \(P_{s}\) as the probability that an isolated, individual physical system will start with a given set of maximally specified initial conditions, evolve in space-time, then end with a given set of maximally specified final conditions. This is the conditional probability that a particle with the given initial conditions will later be found with the given final conditions. Since \(P_{s}\) is time-independent, it is also the probability that a particle with the given final conditions would have been found earlier with the given initial conditions. This time symmetry is generally true for quantum transition probabilities. Let us consider some examples of the transition amplitude \(A_{s}\) and the transition probability \(P_{s}\). First, consider a one-dimensional infinite square well of length \(a\). The stationary states are \[\xi_{n}(x,t)=\sqrt{\frac{2}{a}}\sin\left(\frac{n\pi}{a}x\right)\exp\left[-i\frac{n^{2}\pi^{2}}{2a^{2}}t\right], \tag{12}\] where \(n=1,2,3...\), we assume the particle mass \(m=1\), and use natural units where \(\hbar=1\). Choose \(\psi(x,t)=\xi_{1}(x,t)\) and \(\phi^{*}(x,t)=\xi_{1}^{*}(x,t)\) so \[\rho_{s}=\phi^{*}\psi=\frac{2}{a}\sin^{2}\left(\frac{\pi}{a}x\right). \tag{13}\] Then integrating \(\rho_{s}(x)\) over the square well gives \[A_{s}=\int_{0}^{a}\rho_{s}(x)dx=1 \tag{14}\] and \(P_{s}=A_{s}^{*}A_{s}=1\) as expected, since the initial and final states perfectly overlap. As a second example, consider choosing \(\psi(x,t)=\xi_{1}(x,t)\) and \(\phi^{*}(x,t)=\xi_{2}^{*}(x,t)\) so \[\rho_{s}=\frac{2}{a}\sin\left(\frac{\pi}{a}x\right)\sin\left(\frac{2\pi}{a}x\right)\exp\left[i\frac{3\pi^{2}}{2a^{2}}t\right]. \tag{15}\] Then integrating \(\rho_{s}(x,t)\) over the square well gives \[A_{s}=\int_{0}^{a}\rho_{s}(x,t)dx=0 \tag{16}\] and \(P_{s}=A_{s}^{*}A_{s}=0\) as expected, since different stationary states are orthogonal to each other.
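A further worked case, added here for illustration (it is not among the paper's examples, but follows from the same definitions): take \(\psi=\xi_{1}\) and let the final state be the equal superposition \(\phi=(\xi_{1}+\xi_{2})/\sqrt{2}\). Then \[A_{s}=\int_{0}^{a}\phi^{*}\psi\,dx=\tfrac{1}{\sqrt{2}}\int_{0}^{a}\xi_{1}^{*}\xi_{1}\,dx+\tfrac{1}{\sqrt{2}}\int_{0}^{a}\xi_{2}^{*}\xi_{1}\,dx=\tfrac{1}{\sqrt{2}},\] since the first integral equals one and the second vanishes by orthogonality, so \(P_{s}=A_{s}^{*}A_{s}=1/2\), again independent of time.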
A third example, involving two different stationary but time-dependent gaussians, is given in Section 4. In that case, the calculated transition probability is \(P_{s}=3\times 10^{-10}\). We see that the transition probabilities can range from \(0\) to \(1\) and are time-independent. One of the postulates of the Copenhagen formulation is the Born rule \[\rho=\psi^{*}(\vec{r},t)\psi(\vec{r},t), \tag{17}\] where \(\rho(\vec{r},t)\) is the probability density for finding the particle at \(\vec{r}\) at time \(t\). This is a special case of the transition amplitude density when the initial state is \(\psi(\vec{r},t)\) and the final state \(\phi^{*}(\vec{r},t)\) is a delta function \(\delta(\vec{r}\,^{\prime},t^{\prime})\): \[\rho_{s}=\delta(\vec{r}\,^{\prime},t^{\prime})\psi(\vec{r},t)=\psi(\vec{r}\,^{\prime},t^{\prime}), \tag{18}\] \[\rho_{s}^{*}=\delta(\vec{r}\,^{\prime},t^{\prime})\psi^{*}(\vec{r},t)=\psi^{*}(\vec{r}\,^{\prime},t^{\prime}), \tag{19}\] and then redefining variables \((\vec{r}\,^{\prime}\rightarrow\vec{r},\,t^{\prime}\to t)\) gives \[\rho=\rho_{s}^{*}\rho_{s}=\psi^{*}(\vec{r},t)\psi(\vec{r},t) \tag{20}\] which is the Copenhagen Born rule. Note that although the amplitude density of Equation 17 is time-dependent, the integral over all space of Equation 17 is \[\iiint_{-\infty}^{+\infty}\psi^{*}(\vec{r},t)\psi(\vec{r},t)dV=1 \tag{21}\] which is the probability of finding the particle somewhere in space. This shows that the Copenhagen formulation Born rule is consistent with the time-symmetric formulation: the amplitude densities depend on time and space, but when integrated over all space the results are independent of time. Note that \(\phi^{*}\psi\) is a time-symmetric formulation amplitude density, giving the amplitude for a transition from state \(\psi\) to state \(\phi\). It is not the same as the Born rule probability amplitude density \(\psi\) of finding the particle to be at some location in space. For example, the Born rule probability density \(\psi^{*}\psi\) must always be equal to one when integrated over all space. But the transition amplitude density \(\phi^{*}\psi\) will usually not be equal to one when integrated over all space, because \(\phi^{*}\) and \(\psi\) need not perfectly overlap. The time-symmetric formulation tries to treat \(\psi\) and \(\phi^{*}\) on an equal footing. In practice in the laboratory, we can usually manipulate \(\psi\) because it originates in our present. It is sometimes possible to manipulate \(\phi^{*}\) in the laboratory. For example, consider the Mach-Zehnder Interferometer of Figure 3. By changing the phase in one of the arms, we can change the future final state \(\phi^{*}\) from a particle always detected in detector \(D_{1}\) to a particle always detected in detector \(D_{2}\).
But in general, we usually cannot manipulate \(\phi^{*}\) because our world is asymmetric in time and \(\phi^{*}\) originates in our future. ## 4. The Time-Symmetric Explanation of Renninger's 1960 Thought Experiment For easier visualization we will assume the experiment shown in Figure 1 is two-dimensional and both wavefunctions are stationary Gaussians with initial and final standard deviations \(\sigma=1\). The retarded stationary Gaussian is \[\psi(x,y,t)\equiv\left(\frac{2}{\pi}\right)^{1/2}\left(\frac{1}{i(t-t_{i})+2}\right)\exp\left[-\frac{(x-x_{i})^{2}+(y-y_{i})^{2}}{2i(t-t_{i})+4}\right], \tag{22}\] where \((x,y)\) is the location of the particle, \(t\) is the time, \((x_{i},y_{i},t_{i})=(0,0,0)\) is the emission location and time, all masses are set to 1, and natural units are used: \(\hbar=1\). The advanced stationary Gaussian is \[\phi^{*}(x,y,t)\equiv\left(\frac{2}{\pi}\right)^{1/2}\left(\frac{1}{i(t_{f}-t)+2}\right)\exp\left[-\frac{(x_{f}-x)^{2}+(y_{f}-y)^{2}}{2i(t_{f}-t)+4}\right], \tag{23}\] where \((x,y)\) is the location of the same particle, \(t\) is the time, \((x_{f},y_{f},t_{f})=(0,-60,28)\) is the detection location and time, all masses are set to 1, and natural units are used: \(\hbar=1\). Figure 2 shows how the transition amplitude density \(\phi^{*}\psi\) evolves over time, assuming the initial condition is localization at source S at the origin and the final condition is localization at the outer circle \(E_{2}\) at \((x,y)=(0,-60)\). There is no wavefunction collapse between the initial and final conditions. The time-symmetric formulation assumes the probability \(P_{s}\) for this transition is \(P_{s}=A_{s}^{*}A_{s}\), where the subscript \(s\) denotes the time-symmetric theory and the amplitude \(A_{s}\) for the transition is \[A_{s}=\int_{-\infty}^{\infty}\phi^{*}(x,y,t)\psi(x,y,t)\,dx\,dy, \tag{24}\] where \(t\) may be chosen freely. Plugging in numbers gives a time-symmetric transition probability \(P_{s}=3\times 10^{-10}\). The Copenhagen formulation assumes \(t=t_{f}\) and gets the same numerical result for the transition probability.

Figure 2. The time-symmetric formulation explanation of Renninger's 1960 negative-result experiment in two dimensions, with a single particle emitted from (0,0) and detected on the circle \(E_{2}\) at (0,-60). (**a**) The transition amplitude density \(\phi^{*}\psi\) is localized at the source S. (**b,c,d**) \(\phi^{*}\psi\) has left S and is traveling towards the circle \(E_{2}\). Note that the scale on the vertical axis varies between graphs. (**e**) \(\phi^{*}\psi\) arrives at the circle \(E_{2}\) as a localized transition amplitude density and produces a scintillation. The transition amplitude density diverges from the source and converges to the detector in a time-symmetric manner, without hitting the arc \(E_{1}\). If the detector had been located in the upper right quadrant of the circle \(E_{2}\), the transition amplitude density would have diffracted around the arc \(E_{1}\).
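The time-independence of the transition amplitude in Equation 24 can also be checked numerically. The following is a minimal sketch of such a check (our own illustration, with ad hoc grid bounds and resolution; the printed values should agree across the chosen times, with \(|A_{s}|^{2}\) of the order \(10^{-10}\) quoted above):

```python
import numpy as np

def psi(x, y, t, xi=0.0, yi=0.0, ti=0.0):
    # Retarded stationary gaussian of Equation 22 (m = hbar = 1).
    return (np.sqrt(2 / np.pi) / (1j * (t - ti) + 2)
            * np.exp(-((x - xi)**2 + (y - yi)**2) / (2j * (t - ti) + 4)))

def phi_star(x, y, t, xf=0.0, yf=-60.0, tf=28.0):
    # Advanced stationary gaussian of Equation 23.
    return (np.sqrt(2 / np.pi) / (1j * (tf - t) + 2)
            * np.exp(-((xf - x)**2 + (yf - y)**2) / (2j * (tf - t) + 4)))

# A_s = integral of phi* psi over the plane, evaluated at several times t.
xs = np.linspace(-40, 40, 1201)
ys = np.linspace(-100, 40, 2101)
X, Y = np.meshgrid(xs, ys)
dA = (xs[1] - xs[0]) * (ys[1] - ys[0])
for t in (0.0, 7.0, 14.0, 21.0, 28.0):
    A = np.sum(phi_star(X, Y, t) * psi(X, Y, t)) * dA
    print(f"t = {t:5.1f}   |A_s|^2 = {abs(A)**2:.3e}")
```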
## 5. The 1953 Paradox In 1953 Renninger proposed a negative-result thought experiment using a Mach-Zehnder Interferometer [MZI] [3]: see Figure 3.

Figure 3. The 1953 Renninger negative-result thought experiment. A Mach-Zehnder Interferometer (MZI) is formed by a single-particle source S, two single-particle detectors \(D1\) and \(D2\), two beam-splitters \(B1\) and \(B2\), and two mirrors \(M1\) and \(M2\). The source S emits a single particle whose wavefunction is a traveling gaussian. The MZI is constructed such that when all arms of the interferometer are open the emitted particle always goes to detector \(D1\).

The MZI is constructed such that when both arms of the interferometer are open the emitted particle always goes to detector \(D1\). In the Copenhagen formulation, this implies the particle's wavefunction must have taken both arms of the MZI. In the time-symmetric formulation, this implies the particle's transition amplitude density must have taken both arms of the MZI. But if a third detector \(D3\) is surreptitiously placed between beam-splitter \(B1\) and mirror \(M2\) (see Figure 4) there will be three possible outcomes: the particle is detected in detector \(D1\) with probability \(1/4\), the particle is detected in detector \(D2\) with probability \(1/4\), or the particle is detected in detector \(D3\) with probability \(1/2\). In the cases where the particle is detected in detector \(D2\), we can infer the presence of detector \(D3\) without any particle interaction with detector \(D3\), which is a paradox. Alternatively, if we know that detector \(D3\) is blocking the upper arm of the MZI, and we wait until detector \(D3\) could have detected the particle but see no detection, then we can conclude that the particle's wavefunction is only taking the lower arm of the MZI. As in the 1960 thought experiment, we have localized the particle's wavefunction without any interaction with the particle, which is a paradox. A similar thought experiment was later described by Elitzur and Vaidman [20]. ## 6. The Time-Symmetric Explanation of Renninger's 1953 Thought Experiment The time-symmetric formulation postulates that particle sources spontaneously emit isotropic retarded waves, particle detectors spontaneously emit isotropic advanced waves, and a transition only occurs when these two types of waves overlap at a source and a detector. The presence of detector \(D3\) between beam-splitter \(B1\) and mirror \(M2\) prevents the retarded wave from source S from overlapping with the advanced waves from detectors \(D1\) or \(D2\), so no transition amplitude density can form along the upper arm of the MZI. But transition amplitude densities can still form between source S and detector \(D1\) or detector \(D2\) along the lower arm of the MZI. Figure 4 shows an example of a transition amplitude density moving between source S and detector \(D2\) along the lower arm of the MZI. The retarded traveling gaussian is \[\psi(x,y,t)=\frac{50\sqrt{\frac{2}{\pi}}\exp\left[0.4i(-0.2(t-t_{i})+x-x_{i})-\frac{(-0.4(t-t_{i})+x-x_{i})^{2}+(y-y_{i})^{2}}{10000+2i(t-t_{i})}\right]}{5000+i(t-t_{i})} \tag{25}\] where \(t_{i}\) is the initial time and \(x_{i}\) and \(y_{i}\) are the initial position. The advanced traveling gaussian is \[\phi^{*}(x,y,t)=\frac{50\sqrt{\frac{2}{\pi}}\exp\left[-0.4i(-0.2(t_{f}-t)-x+x_{f})-\frac{(-0.4(t_{f}-t)-x+x_{f})^{2}+(y_{f}-y)^{2}}{10000-2i(t_{f}-t)}\right]}{5000-i(t_{f}-t)} \tag{26}\] where \(t_{f}\) is the final time and \(x_{f}\) and \(y_{f}\) are the final position. ## 7. Discussion The time-symmetric formulation resolves the 1960 Renninger negative-result paradox because both the initial and final states must be specified to apply the theory, the transition amplitude density does not collapse, and the transition amplitude density travels as a localized beam between the initial and final states, terminating on either the inside surface of the sphere sector \(E_{1}\) or the inside surface of the sphere \(E_{2}\).
For repeated experiments, we can estimate the probabilities to be \(P_{1}=\Omega/4\pi\) and \(P_{2}=(4\pi-\Omega)/4\pi\). If there is no particle detection by the time \(t_{1}+\tau\), the probabilities suddenly change to \(P_{1}=0\) and \(P_{2}=1\). But there is no associated change in the transition amplitude density. This sudden change in probabilities simply reflects our change in knowledge of the trajectory of the transition amplitude density. The time-symmetric formulation has the additional benefit of being consistent with the classical limit of Renninger's 1960 thought experiment. As the quantum particle becomes more massive, with a shorter de Broglie wavelength, and starts behaving more like a classical particle, it will always go to either the inner sphere section or the outer sphere in a straight trajectory with a narrow dispersion. There is a logical continuity between its behavior in the quantum and classical regimes, in contrast to the Copenhagen formulation predictions. The time-symmetric formulation resolves the 1953 Renninger negative-result paradox because a retarded wave from a particle source and an advanced wave from a particle detector must overlap at a source and detector for a transition amplitude density to form. Neither a retarded wave by itself nor an advanced wave by itself will trigger a detector or cause a particle transition. Since the upper arm of the MZI between the source S and the detectors \(D1\) and \(D2\) is blocked, a transition amplitude density cannot form between the source S and the detectors \(D1\) or \(D2\) in that arm. But a transition amplitude density can still form in the lower arm of the MZI between the source S and the detectors \(D1\) or \(D2\). The retarded and advanced waves essentially tell the particle which pathways are blocked and open before the particle takes the pathways. Note that in both thought experiments, in the time-symmetric formulation, the source emits an isotropic retarded wave and the detector emits an isotropic advanced wave that can hit the detectors and sources. These may count as interactions. But the retarded and advanced waves each by themselves cannot trigger a detector. In contrast, in the Copenhagen formulation, when the retarded wave by itself hits the detectors, it can collapse to a particle. The Copenhagen formulation does not explain why it chooses one detector and not a different detector, while the time-symmetric formulation does.

Figure 4. The time-symmetric formulation explanation of Renninger's 1953 negative-result experiment, with a single particle emitted from source S and detected at detector \(D2\). (**a**) The transition amplitude density \(\phi^{*}\psi\) is emitted from the source S. (**b**) The transition amplitude density \(\phi^{*}\psi\) has passed through the beam-splitter \(B1\). Note that transition amplitude densities are not necessarily split by beam-splitters. (**c**) \(\phi^{*}\psi\) has reflected from mirror 1 and is traveling towards beam-splitter \(B2\). (**d**) \(\phi^{*}\psi\) has passed through beam-splitter \(B2\) and is traveling towards detector \(D2\). (**e**) \(\phi^{*}\psi\) has reached detector \(D2\) but has not yet been detected. (**f**) \(\phi^{*}\psi\) has been detected by detector \(D2\). Note that wavefunction collapse does not occur.

The time-symmetric formulation also resolves the more general Copenhagen formulation paradox of a nonlocalized wavefunction instantaneously collapsing into a localized wavefunction at a detector.
In order to conserve momentum this collapse must be instantaneous in all reference frames, in clear conflict with the special theory of relativity. In the time-symmetric formulation the transition amplitude density is localized at the source, partly delocalizes as it approaches the halfway point between source and detector, then relocalizes as it continues to the detector. No wavefunction collapse is required. One might wonder if a theory based on transition amplitude densities will be able to reproduce all of the predictions of the Copenhagen formulation. In 1932 Dirac showed that all the experimental predictions of the Copenhagen formulation of quantum mechanics can be formulated in terms of transition probabilities [21]. The time-symmetric formulation inverts this fact by postulating that quantum mechanics is a theory which experimentally predicts _only_ transition probabilities. This implies the time-symmetric formulation has the same predictive power as the Copenhagen formulation. The Copenhagen formulation has several asymmetries in time: only the initial conditions of the wavefunction are specified, the wavefunction is evolved only forward in time, the transition probability is calculated only at the time of measurement, wavefunction collapse happens only at the time of measurement, and wavefunction collapse happens only forwards in time. This seems unphysical: shouldn't the fundamental laws of nature be time-symmetric? Consider the details of a specific example: according to the Copenhagen formulation, Equation 24 must be evaluated only at the time of the collapse. In contrast, according to the time-symmetric formulation, the transition amplitude of Equation 24 can be evaluated at any time. But the two transition amplitudes give the same results. The fact that the transition amplitude need not be evaluated at a special time shows that quantum mechanics has more intrinsic symmetry than allowed by the Copenhagen formulation. Heisenberg said "Since the symmetry properties always constitute the most essential features of a theory, it is difficult to see what would be gained by omitting them in the corresponding language [22]." The intrinsic time symmetry of a quantum transition is built into the time-symmetric formulation, but is not present in the Copenhagen formulation. The Copenhagen formulation predicts a rapid oscillating motion of a free particle in empty space. Schrodinger discovered the theoretical possibility of this rapid oscillating motion in 1930, naming it zitterbewegung [23]. This prediction of the Copenhagen formulation is inconsistent with Newton's first law, since it implies a free particle does not move with a constant velocity. The time-symmetric formulation predicts zitterbewegung will never occur [12]. Direct measurements of zitterbewegung are beyond the capability of current technology, but future technological developments should allow measurements to confirm or deny its existence, thereby distinguishing between the Copenhagen formulation and the time-symmetric formulation. One possible future experiment to directly measure zitterbewegung would be to trap an electron in a Penning trap in a gaussian ground state, then remove the trap fields and use antennae to search for zitterbewegung radiation. The Copenhagen formulation assumes an isolated, individual physical system is maximally described by a retarded wavefunction and maximally specified initial conditions. 
The time-symmetric formulation assumes a complete experiment is maximally described by the time-symmetric amplitude density \(\rho_{s}(\vec{r},t)\), which is composed of a retarded wavefunction and an advanced wavefunction, and maximally specified initial and final conditions. The existence of both retarded and advanced wavefunctions in the time-symmetric formulation does not imply that particles can travel at superluminal speeds. In the time-symmetric formulation every particle is represented by algebraic products of an advanced wavefunction and a retarded wavefunction, so the particle cannot travel to space-time locations that the retarded wavefunction cannot reach, and relativistic wave equations limit the velocity of the retarded wavefunction to less than \(c\). Conversely, the particle cannot travel to space-time locations that the advanced wavefunction cannot reach. This is a type of symmetrical forward and backward causality: what happens during an experiment depends on what happened at the start of the experiment, and what will happen at the end of the experiment. This suggests the past, present, and future have equal status. This is implicit in the time-symmetric postulates, and consistent with the block universe view and with the special theory of relativity. The Copenhagen formulation postulate that an individual particle is maximally described by a retarded wavefunction and maximally specified initial conditions means the Copenhagen formulation is a "presentist" theory, where only the present moment is real: the past is no longer real, and the future is not yet real. A "presentist" theory is equivalent to a three-dimensional world, which changes as time passes. The time-symmetric formulation postulates that a complete experiment is maximally described by the time-symmetric amplitude density \(\phi^{*}\psi\), which is composed of a retarded wavefunction and an advanced wavefunction, and incorporates maximally specified initial and final conditions. This means the time-symmetric formulation is an "eternalist" theory, where the past, present, and future are equally real. The "eternalist" theory is equivalent to a four-dimensional world, where time is just another parameter, like position. It is an experimental fact, proven by many experiments confirming the special theory of relativity, that the world is four-dimensional, not three-dimensional. Finally, the time-symmetric formulation may be able to resolve other negative-result or interaction-free paradoxes such as counterfactual quantum computation. A future paper will address these topics.
2309.13202
Investigating Large Language Models and Control Mechanisms to Improve Text Readability of Biomedical Abstracts
Biomedical literature often uses complex language and inaccessible professional terminologies. That is why simplification plays an important role in improving public health literacy. Applying Natural Language Processing (NLP) models to automate such tasks allows for quick and direct accessibility for lay readers. In this work, we investigate the ability of state-of-the-art large language models (LLMs) on the task of biomedical abstract simplification, using the publicly available dataset for plain language adaptation of biomedical abstracts (PLABA). The methods applied include domain fine-tuning and prompt-based learning (PBL) on: 1) Encoder-decoder models (T5, SciFive, and BART), 2) Decoder-only GPT models (GPT-3.5 and GPT-4) from OpenAI and BioGPT, and 3) Control-token mechanisms on BART-based models. We used a range of automatic evaluation metrics, including BLEU, ROUGE, SARI, and BERTscore, and also conducted human evaluations. BART-Large with Control Token (BART-L-w-CT) mechanisms reported the highest SARI score of 46.54 and T5-base reported the highest BERTscore 72.62. In human evaluation, BART-L-w-CTs achieved a better simplicity score over T5-Base (2.9 vs. 2.2), while T5-Base achieved a better meaning preservation score over BART-L-w-CTs (3.1 vs. 2.6). We also categorised the system outputs with examples, hoping this will shed some light for future research on this task. Our code, fine-tuned models, and data splits are available at https://github.com/HECTA-UoM/PLABA-MU Keywords: Large Language Models, Text Simplification, Biomedical NLP, Control Mechanisms, Health Informatics
Zihao Li, Samuel Belkadi, Nicolo Micheletti, Lifeng Han, Matthew Shardlow, Goran Nenadic
2023-09-22T22:47:32Z
http://arxiv.org/abs/2309.13202v2
# Large Language Models and Control Mechanisms Improve Text Readability of Biomedical Abstracts ###### Abstract Biomedical literature often uses complex language and inaccessible professional terminologies. That is why simplification plays an important role in improving public health literacy. Applying Natural Language Processing (NLP) models to automate such tasks allows for quick and direct accessibility for lay readers. In this work, we investigate the ability of state-of-the-art large language models (LLMs) on the task of biomedical abstract simplification, using the publicly available dataset for plain language adaptation of biomedical abstracts (**PLABA**). The methods applied include domain fine-tuning and prompt-based learning (PBL) on: 1) Encoder-decoder models (T5, SciFive, and BART), 2) Decoder-only GPT models (GPT-3.5 and GPT-4) from OpenAI and BioGPT, and 3) Control-token mechanisms on BART-based models. We used a range of automatic evaluation metrics, including BLEU, ROUGE, SARI, and BERTscore, and also conducted human evaluations. BART-Large with Control Token (BART-L-w-CT) mechanisms reported the highest SARI score of 46.54 and T5-base reported the highest BERTscore 72.62. In human evaluation, BART-L-w-CTs achieved a better simplicity score over T5-Base (2.9 vs. 2.2), while T5-Base achieved a better meaning preservation score over BART-L-w-CTs (3.1 vs. 2.6). We also categorised the system outputs with examples, hoping this will shed some light for future research on this task. Our code, fine-tuned models, and data splits are available at [https://github.com/HECTA-UoM/PLABA-MU](https://github.com/HECTA-UoM/PLABA-MU) ## 1 Introduction The World Health Organization (WHO) defines _health literacy_ as: "the personal characteristics and social resources needed for individuals and communities to access, understand, appraise, and use information and services to make decisions about health" (Dodson et al., 2015). From this, the National Health Service (NHS) of the UK emphasises two key factors for achieving better health literacy 1, i.e., the individual's comprehension ability and the health system itself. The "system" here refers to the complex network of health information and the sources which promote it. These two factors are codependent. For instance, professionals write much healthcare information using complex language and terminologies without considering the readability for patients and the general public. The health system must take patients' abilities into account to achieve health literacy. Scientific studies have reported a correlation between low health literacy, poorer health outcomes, and poorer use of health care services (Berkman et al., 2011; Greenhalgh, 2015). Thus, Plain Language Adaptation (PLA) of scientific reports in the healthcare domain is valuable for knowledge transformation and information sharing with public patients, so as to promote public health literacy (McCray, 2005). Nowadays, there have been industrial practices on such tasks, including the publicly available plain summaries of scientific abstracts from the American College of Rheumatology (ACR) Virtual Meeting 2020 offered by the pharmaceutical company Novartis 2. Footnote 1: [https://www.england.nhs.uk/personalisedcare/health-literacy/](https://www.england.nhs.uk/personalisedcare/health-literacy/) Footnote 2: [https://www.novartis.com/node/65241](https://www.novartis.com/node/65241) The PLA task is related to text simplification and text summarisation, which are branches of the Natural Language Processing (NLP) field.
This work investigates the biomedical domain PLA (BiomedPLA) using state-of-the-art large language models (LLMs) and control-token methods (Nishihara et al., 2019; Martin et al., 2020; Agrawal et al., 2021; Li et al., 2022) that have proven effective in such tasks. Examples of BiomedPLA can be seen in Figure 1 from the PLABA2023 shared task 3. We highlight some of the factors of such tasks in colours, including sentence simplification in grey (removing the clause "which" in the first sentence example; separating into two sentences and removing the bracket in the second example), term simplification in yellow ("pharyngitis" and "pharynx" into "throat"), paraphrasing and synonyms in green (e.g., "acute" into "sore" and "posterior" into "back"), and summarisation (of the overall text in certain situations). The LLMs we applied include advanced Encoder-Decoder models (T5, SciFive, BART) and Generative Pre-trained Transformers (BioGPT, ChatGPT). The methodologies we applied include fine-tuning LLMs, prompt-based learning (PBL) on GPTs, and control-token mechanisms on LLMs (BART-base and BART-large) with the efficient fine-tuning strategy. Using the publicly available PLABA (Plain Language Adaptation of Biomedical Abstracts) data set from Attal et al. (2023), we demonstrate the capabilities of such models and carry out both quantitative and human evaluations of the model outputs. We also discuss the interesting findings from different evaluation metrics, their inconsistency, and future perspectives on this task. Footnote 3: [https://bionlp.nlm.nih.gov/plaba2023/](https://bionlp.nlm.nih.gov/plaba2023/) The rest of the paper is organised as follows. Section 2 introduces work related to ours, including biomedical text simplification, broader biomedical LLMs, and efficient training. Section 3 presents the different models we applied in this investigation. Section 4 displays the experimental work and evaluation. Sections 5 and 6 present the discussion and conclusion of this paper. ## 2 Related Work We first introduce recent developments in biomedical text simplification, then extend to broader biomedical LLMs related to this paper, followed by efficient training methodologies which we will apply in our work. ### Biomedical Text Simplification To improve the health literacy level of the general public, Guo et al. (2021) developed the first lay language summarisation task using biomedical scientific reviews. The key points for this task include the explanation of context knowledge and expert language simplification. The evaluation included quality and readability using quantitative metrics. Ondov et al. (2022) carried out a survey, up to 2021, of biomedical text simplification methods and corpora using 45 relevant papers on this task, whose data cover seven natural languages. In particular, the authors listed some published corpora in English and French and divided them into comparable, non-parallel, parallel, thesaurus, and pseudo-parallel. The quantitative evaluation metrics mentioned in these papers include SARI, BLEU, ROUGE, METEOR, and TER, among which three (BLEU, METEOR, and TER) are borrowed from the machine translation (MT) field (Han et al., 2021). Very recently, Bacco et al. (2023) transferred lay language style from expert physicians' notes. They developed a comparable dataset from many non-parallel corpora of plain and expert texts. The baseline model applied for training is BART, with positive outcomes. Lyu et al.
(2023) did a case study using ChatGPT models on translating radiology reports into plain language for patient education. Detailed prompts were discussed to guide the GPT models and reduce "over-simplified" outputs and "neglected information". ### Broader Biomedical LLMs Beyond text simplification tasks, there have been active developments towards biomedical domain adaptation of LLMs in recent years. For instance, BioBERT used 4.5B words from PubMed and 13.5B words from PMC to continue pre-training from the BERT model. It is then fine-tuned in a task-specific setting on the following tasks: NER, RE, and QA (Lee et al., 2019). In comparison, BioMedBERT (Chakraborty et al., 2020) created a new data set called BREATHE using 6 million articles containing 4 billion words from different biomedical literature sources, mainly NCBI, Nature Research, Springer Nature, and CORD-19, in addition to BioASQ, BioRxiv, medRxiv, BMJ, and arXiv. It reported better evaluation scores on QA data sets, including SQuAD and BioASQ, among other tested tasks, compared to other models. BioMedLM 2.7B, developed by the Stanford Center for Research on Foundation Models (CRFM) and the generative-AI company MosaicML 4, formerly known as PubMedGPT 2.7B 5, is trained on biomedical abstracts and papers using data from The Pile (Gao et al., 2020). BioMedLM 2.7B claimed new state-of-the-art performance on the MedQA data set. Footnote 4: [https://www.mosaicml.com/](https://www.mosaicml.com/) Footnote 5: [https://huggingface.co/stanford-crfm/BioMedLM](https://huggingface.co/stanford-crfm/BioMedLM) BioALBERT from Naseem et al. (2021) is based on the ALBERT (Lan et al., 2020) structure, trained on biomedical data, and reported higher evaluation scores on NER tasks for Disease, Drug, and Species on several public data sets, with much shorter training time than BioBERT. BioALBERT was also tested on broader BioNLP tasks using its different base and large models, including RE, Classification, Sentence Similarity, and QA, by Naseem et al. (2022). Afterwards, based on the T5 model structure (Raffel et al., 2020), SciFive (Phan et al., 2021) was trained on PubMed Abstract and PubMed Central (PMC) data and claimed new state-of-the-art performance on biomedical NER and RE, and superior results on the BioASQ QA challenge over BERT and BioBERT. Similarly, BioBART (Yuan et al., 2022) was developed recently based on the newer learning structure of the BART model (Lewis et al., 2020). This work will examine T5, SciFive, and BART, leaving BioBART as future work. In a similar period, to explore the performance of GPT-like models in the biomedical domain, Luo et al. (2022) pre-trained BioGPT from scratch using 15M PubMed items with titles and abstracts after filtering. BioGPT used the GPT-2 model architecture as its backbone. Other notable related works include a) model comparisons in biomedical domains on different tasks by Lewis et al. (2020); Alrowili and Shanker (2021); Tinn et al. (2023) covering BERT, ALBERT, ELECTRA, PubMedBERT, and PubMedELECTRA; b) task- and domain-specific applications on QA by Alrowili and Shanker (2021), on Medicines by Shah et al. (2023), on radiology (RadBERT) by Yan et al. (2022), concept normalisation by Lin et al. (2022), and abstract generation by Sybrandt and Safro (2021); c) language-specific models such as in French (Berhe et al., 2023) and Turkish (Turkmen et al., 2023); and d) survey work by Wang et al. (2023).
### Efficient Training

Due to the computational cost of extra-large PLMs, some researchers have proposed efficient training methods, a factor we also apply in our study. These include some previously mentioned works. Houlsby et al. (2019) proposed Parameter-Efficient Transfer Learning for NLP tasks using their Adapter modules. In this method, the parameters of the original PLMs are fixed, and a few trainable parameters, between 2-4% of the original parameter sizes, are added for each fine-tuning task. Using the GLUE benchmark data, they demonstrated that efficient tuning with the Adapter modules can achieve high performances similar to BERT models with full fine-tuning of 100% of the parameters. ALBERT (Lan et al., 2020) applied parameter-reduction training to improve the speed of BERT model learning. The applied technique uses a factorisation of the embedding parameters, which are decomposed into smaller-sized matrices before being projected into the hidden space. They also designed a self-supervised loss function to model inter-sentence coherence. This reduced the parameter sizes from 108M in the BERT base to 12M in the ALBERT base models.

Figure 1: Examples from the PLABA dataset on Biomedical Sentences Adaptation.

Addressing similar issues, Li and Liang (2021) proposed the _Prefix-tuning_ method, which modifies only 0.1% of the full parameters to achieve comparable performances using GPT-2 and BART for table-to-text generation and summarisation tasks. Focusing on the biomedical domain, Tinn et al. (2023) carried out a fine-tuning stability investigation using the BLURB data set (Biomedical Language Understanding and Reasoning Benchmark) from Gu et al. (2021). Their findings show that freezing lower-level layers of parameters can be helpful for BERT-based model training, while re-initialising the top layers is helpful for low-resource text similarity tasks. Instead of using Adapter modules (Houlsby et al., 2019), which require additional inference latency, Hu et al. (2022) introduced Low-Rank Adaptation (LoRA), which further reduces the size of trainable parameters by freezing the weights in PLMs and injecting "trainable rank decomposition matrices" into every layer of the Transformer structure for downstream tasks. The experiments were carried out on RoBERTa, DeBERTa, and GPTs, and showed performances similar to the Adapter modules. We will apply LoRA for efficient fine-tuning on T5 and BioGPT in our work.
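As a concrete illustration of this setup, the following minimal sketch shows how LoRA adapters can be attached to a sequence-to-sequence model with the Hugging Face `peft` library; the rank, scaling factor, and target modules here are illustrative choices, not the exact hyperparameters of our experiments.

```python
# Minimal LoRA sketch with Hugging Face peft; hyperparameters are illustrative.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

lora_cfg = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the trainable decomposition matrices
    lora_alpha=16,              # scaling factor for the LoRA updates
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections in T5 blocks
)

model = get_peft_model(base, lora_cfg)  # base weights frozen, adapters trainable
model.print_trainable_parameters()      # typically well under 1% of all parameters
```

The wrapped model can then be passed to any standard training loop, so the fine-tuning code stays unchanged while only the injected low-rank matrices receive gradients.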
## 3 Methodologies and Experimental Design

The overall framework of our experimental design is displayed in Figure 2. In the first step, we fine-tune selected LLMs, including T5, SciFive, BioGPT, and BART, apply prompt-based learning to the ChatGPT models, and optimise control mechanisms on the BART model. Then, we select the two best-performing models using the quantitative evaluation metrics SARI, BERTScore, BLEU, and ROUGE. Finally, we choose a subset of the testing results of the two best-performing models for human evaluation. In this section, we first introduce the models we used, followed by LoRA efficient training, and then introduce the quantitative metrics we applied.

### Models

The models we investigated in our work include T5, SciFive, GPTs, BioGPT, BART, and control mechanisms; we give more details below.

#### 3.1.1 T5

T5 (Raffel et al., 2020) used the same Transformer structure as (Vaswani et al., 2017) but framed text-to-text learning tasks using the same vocabulary, SentencePiece tokenisation, training, loss, and decoding. The prefixed tasks include summarisation, question answering, classification, and translation. The authors used the Common Crawl corpus, filtered to keep only natural text, and applied de-duplication processing. They extracted 750GB of clean English data to feed into the model for multi-task pre-training. Different masking strategies are integrated into the T5 model to facilitate better performance on specific fine-tuning tasks. It has demonstrated state-of-the-art results across a wide spectrum of natural language processing tasks, showcasing its remarkable capabilities in capturing nuanced semantics and generating simplified texts while upholding high levels of accuracy. Notably, it has been successfully employed in various fields, such as Clinical T5 by Lehman and Johnson (2023) and Lu et al. (2022). Furthermore, T5's pre-training on an extensive and diverse corpus of text data endows it with a strong foundation in understanding intricate language structures, a crucial asset for handling medical terminology. Its fine-tuning capability further enhances its adaptability, allowing us to tailor the model specifically to the nuances of the biomedical text simplification task. These attributes make T5 a compelling candidate for this work. In this paper, we fine-tuned three versions of T5, namely t5-small, t5-base, and t5-large, paired with their SentencePiece pre-trained tokenizer. Each is fine-tuned independently on the same dataset as the other models to provide comparable results. Note that we use the prompt "summarize:" as it is the closest to our task.

#### 3.1.2 SciFive

Built on the framework of T5, SciFive is a large language model pre-trained on the biomedical domain that has demonstrated advanced performance on multiple biomedical NLP tasks (Phan et al., 2021). While preserving the abilities of T5 in sequence-to-sequence tasks, SciFive offers a deep understanding of medical terminology, concepts, and language structures. As a result, SciFive emerges as a strong candidate for text summarisation and medical language processing tasks, offering the potential to generate clear and accurate simplifications of medical texts. Similarly to our work on T5, we fine-tuned two versions of SciFive, namely SciFive-base and SciFive-large, paired with their pre-trained tokenizer. Each is fine-tuned independently on the same dataset as the other models to provide comparable results. We again use the prompt "summarize:" for task-relation purposes.

#### 3.1.3 OpenAI's GPTs

Given the remarkable performance demonstrated by OpenAI's GPT models in text simplification (Jeblick et al., 2022), we decided to apply simplifications using GPT-3.5-turbo and GPT-4; both models were accessed via the OpenAI API6. Example prompts we used can be found in Appendix B. Footnote 6: [https://openai.com/blog/openai-api](https://openai.com/blog/openai-api)

#### 3.1.4 BioGPT

BioGPT (Luo et al., 2022) is an advanced language model specifically designed for medical text generation. BioGPT is built upon the GPT-2 architecture but is specifically trained to understand medical language, terminology, and concepts. BioGPT follows the Transformer language model backbone and is pre-trained on 15 million PubMed abstracts. It has demonstrated a high level of accuracy and has great potential for applications in medicine. BioGPT is fine-tuned on the training and validation set, as with the other encoder-decoder models (Figure 2).
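The fine-tuned models above share a common sequence-to-sequence recipe, sketched below; the checkpoint name, toy data, and hyperparameters are placeholders rather than our exact configuration (a SciFive checkpoint such as "razent/SciFive-base-Pubmed_PMC" could be swapped in).

```python
# Sketch of the seq2seq fine-tuning loop with the "summarize:" prefix (placeholders).
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

name = "t5-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

pairs = {"source": ["Acute pharyngitis was diagnosed."],
         "target": ["The patient had a sore throat."]}  # toy PLABA-style pair

def preprocess(batch):
    enc = tokenizer(["summarize: " + s for s in batch["source"]],
                    truncation=True, max_length=512)
    enc["labels"] = tokenizer(text_target=batch["target"],
                              truncation=True, max_length=512)["input_ids"]
    return enc

train = Dataset.from_dict(pairs).map(preprocess, batched=True,
                                     remove_columns=["source", "target"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="plaba-t5", num_train_epochs=10,
                                  per_device_train_batch_size=8),
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```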
#### 3.1.5 BART

Built on the standard Transformer structure, BART (Lewis et al., 2020) aims to address limitations of the BERT and GPT models by integrating their structures into a bi-directional encoder and an autoregressive decoder. In addition to the single-token masking strategy applied in BERT, BART provides various masking strategies, including deletion, span masking, permutation, and rotation of sentences. Compared to GPT models, BART provides both leftward and rightward context in the encoder.

#### 3.1.6 Controllable Mechanisms

We applied the modified control token strategy from (Li et al., 2022) to both the BART-base and BART-large models. The training includes two stages, leveraging both the WikiLarge training set (Zhang and Lapata, 2017) and our split of the training set from PLABA (Attal et al., 2023). The four attributes for control tokens (CTs) are listed below: * \(<\)DEPENDENCYTREEDEPTH_x\(>\) (DTD) * \(<\)WORDRANK_x\(>\) (WR) * \(<\)REPLACEONLYLEVENSHTEIN_x\(>\) (LV) * \(<\)LENGTHRATIO_x\(>\) (LR) They represent 1) the syntactic complexity, 2) the lexical complexity, 3) the inverse similarity of input and output at the letter level, and 4) the length ratio of input and output, respectively.

Figure 2: Model Development and Evaluation Pipeline. BART\({}^{*}\) is fine-tuned using WikiLarge data. The MAX step chooses the two best-performing models according to the automatic evaluation results using SARI and BERTScore.

Before training, the four CTs are calculated and prepared for the two-stage training sets. In both stages, we pick the best model over 10 epochs based on the training loss on the validation set. We applied the best model from the first stage as the base model and fine-tuned it on our PLABA training set for 10 epochs. After fine-tuning, the next step is to find the optimal values of the CTs. Following a process similar to MUSS (Martin et al., 2020), we applied Nevergrad (Rapin and Teytaud, 2018) on the validation set to find static optimal discrete values for DTD, WR, and LV. As for LR, we applied a control token predictor to maximise performance with a flexible value. The predictor is also trained on WikiLarge (Zhang and Lapata, 2017) to predict the potential optimal value for LR.
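To make the preprocessing concrete, the sketch below computes two of the four control tokens and prepends them to a source sentence. The token spellings, the value binning, and the DTD/WR computation (which need a dependency parser and word-frequency ranks) follow Li et al. (2022) and are only approximated here.

```python
# Sketch of control-token preprocessing (two of the four CTs; values rounded).
import Levenshtein  # pip install python-Levenshtein

def length_ratio(src: str, tgt: str) -> float:
    # LR: character-length ratio of output to input
    return round(len(tgt) / max(len(src), 1), 2)

def replace_only_levenshtein(src: str, tgt: str) -> float:
    # LV: letter-level similarity of input and output; the replace-only
    # variant of Li et al. (2022) is approximated by the plain ratio here
    return round(Levenshtein.ratio(src, tgt), 2)

def with_control_tokens(src: str, tgt: str) -> str:
    # DTD and WR are fixed placeholders here; the real pipeline derives them
    # from spaCy dependency parses and word-frequency tables
    cts = (f"<DEPENDENCYTREEDEPTH_1.0> <WORDRANK_1.0> "
           f"<REPLACEONLYLEVENSHTEIN_{replace_only_levenshtein(src, tgt)}> "
           f"<LENGTHRATIO_{length_ratio(src, tgt)}>")
    return f"{cts} {src}"

print(with_control_tokens("Acute pharyngitis was diagnosed.",
                          "The patient had a sore throat."))
```

At inference time, the same tokens are prepended with the optimised (or predicted) target values instead of the ratios computed from a reference.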
### LoRA and LLMs

To evaluate bigger model architectures, we fine-tune FLAN-T5 XL (Chung et al., 2022) and BioGPT-Large, which have 3 billion and 1.5 billion parameters, respectively. FLAN-T5 XL is based on the pre-trained T5 model, with instruction tuning for better zero-shot and few-shot performance. To optimise training efficiency, and as our computational resources do not allow us to fine-tune the full versions of these models, we employ the LoRA (Hu et al., 2022) technique, which allows us to freeze certain parameters, resulting in more efficient fine-tuning with minimal trade-offs.

### Metrics

We evaluate our models using four quantitative metrics, namely BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), SARI (Xu et al., 2016), and BERTScore (Zhang et al., 2020), each offering unique insights into text quality. SARI and BERTScore are used from the EASSE (Alva-Manchego et al., 2019) package; the BLEU and ROUGE metrics are imported from the Hugging Face7 implementations. Footnote 7: [https://github.com/huggingface/evaluate](https://github.com/huggingface/evaluate) While BLEU quantifies precision by assessing the overlap between n-grams in the generated text and the references, ROUGE measures recall by determining how many correct n-grams from the references are present in the generated text. This combination makes them useful as an initial, indicative evaluation of machine translation and summarisation quality. In contrast, SARI goes beyond n-gram comparison and evaluates fluency and adequacy in simplifications. It does this by considering precision (alignment with references), recall (coverage of references), and the ratio of output length to reference length. SARI's comprehensive approach extends its utility to broader evaluations of simplification quality. Finally, BERTScore delves into the semantic and contextual aspects of text quality. Using a pre-trained BERT model, it measures the similarity between word embeddings in the generated and reference texts. This metric provides insight into the semantic similarity and contextual understanding between generated and reference texts, making it akin to human evaluation. This metric does not quantify how good the simplification is but rather how much of the meaning is preserved after simplification. This combination of metrics effectively addresses both surface-level and semantic dimensions, resulting in a well-rounded and thorough assessment of the quality of machine-generated simplifications.
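A minimal scoring sketch along these lines, assuming parallel lists of sources, system outputs, and per-sentence reference lists; BERTScore is computed here with the standalone `bert-score` package for illustration.

```python
# Sketch of the automatic evaluation; toy data, corpus-level scores.
import evaluate
from easse.sari import corpus_sari
from bert_score import score as bertscore

sources = ["Acute pharyngitis was diagnosed."]
outputs = ["The patient had a sore throat."]
references = [["The patient had a sore throat.", "They had a sore throat."]]

# EASSE expects references transposed: one list per reference set over the corpus
refs_t = [list(r) for r in zip(*references)]
sari = corpus_sari(orig_sents=sources, sys_sents=outputs, refs_sents=refs_t)

bleu = evaluate.load("bleu").compute(predictions=outputs, references=references)
rouge = evaluate.load("rouge").compute(predictions=outputs,
                                       references=[r[0] for r in references])

P, R, F1 = bertscore(outputs, [r[0] for r in references], lang="en")
print(sari, bleu["bleu"], rouge["rougeL"], F1.mean().item())
```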
## 4 Experiments and Evaluations

The data set we used for the model development is PLABA (Attal et al., 2023). This data set was extracted from PubMed search results using 75 healthcare-related questions that MedlinePlus users asked. It includes 750 biomedical article abstracts manually simplified into 921 adaptations with 7,643 sentence pairs in total. The PLABA dataset is publicly available via the Zenodo platform 8. Footnote 8: [https://zenodo.org/record/7429310](https://zenodo.org/record/7429310)

### Data Preprocessing and Setup

To investigate the selected models for training and fine-tuning, we divided the PLABA data into Train, Validation, and Test sets, aiming for an 8:1:1 ratio. However, in practice, we found that there are a few 1-to-0 sentence pairs, which might have a negative effect on training the simplification models; thus we eliminated all 1-to-0 sentence pairs. In addition, to better leverage the SARI score, we picked sentences with multiple references for validation and testing purposes. As a result, we ended up with 5,757 / 814 / 814 sentence pairs for the Train / Validation / Test sets, counted by source sentences.

### Automatic Evaluation Scores

In this section, we list the quantitative evaluation scores together with some explanations. The results for the T5 Small, T5 Base, T5 Large, FLAN-T5 XL with LoRA, SciFive Base, SciFive Large, and BART models with CTs (BART-w-CTs) are displayed in Table 1. Interestingly, the fine-tuned T5 Small model obtains the highest scores on both the BLEU and ROUGE metrics, including ROUGE-1, ROUGE-2, and ROUGE-L. The fine-tuned BART Large with CTs produces the highest SARI score at 46.54, while the fine-tuned T5 Base model achieves the highest BERTScore (72.62) with a slightly lower SARI score (44.10). The fine-tuned SciFive Large achieves the highest SARI score (44.38) among the T5-like models, though it is approximately 2 points lower than BART Large with CTs. The quantitative evaluation scores of the GPT-like models are presented in Table 2, including GPT-3.5 and GPT-4 using prompts, and fine-tuned BioGPT with LoRA. GPT-3.5 reported relatively higher scores than GPT-4 on all lexical metrics except for SARI, and a much higher BERTScore than GPT-4 (58.35 vs 46.99). In comparison, BioGPT-Large with LoRA reported the lowest SARI score (18.44) and the highest BERTScore (62.9) among these three GPT-like models. Comparing the models across Table 1 and Table 2, the GPT-like models did not beat T5-Base on either SARI or BERTScore, and did not beat BART-w-CTs on SARI. To look into the details of the model comparisons over different epochs on the extracted testing set, we present the learning curves of T5, SciFive, BART-base on WikiLarge, and BART-base on PLABA data in Figure 3. We also present the learning curves of T5 Base and BART Base using different metrics in Figure 4.

Figure 3: Quantitative Evaluation Scores of T5, SciFive and BART Models on the Extracted Testing Set.

Figure 4: Quantitative Evaluation Scores of T5-base and BART-base on the Extracted Testing Set.

Because the fine-tuned T5-Base model has the highest BERTScore (72.62) and also a relatively high SARI score (44.10), we chose it as one of the candidates for human evaluation. The other candidate is the fine-tuned BART Large with CT mechanisms, which has the highest SARI score (46.54) among all evaluated models. Note that SciFive Large has results close to those of T5 Base; in this case, we selected the smaller model for human evaluation.

### Human Evaluation

For the human evaluation, we randomly sampled 80 sentences from the test split and evaluated the corresponding outputs of BART-large with CTs and T5-base anonymously. In the human evaluation form, we randomly assigned the order of the two systems' outputs and placed them beside the input sentence. Based on the comparison of input and output sentences, the annotators need to answer two questions: "To what extent do you agree the simplified sentence keeps the major information" and "To what extent do you agree the simplified sentence is well simplified". The answer is limited to a 5-point Likert scale, from strongly disagree to strongly agree. The sample form can be found in Table 6 in the Appendix. There were 4 annotators in this human evaluation: one of the annotators is a native English speaker, and the others are fluent speakers of English as a second language. Two annotators are final-year bachelor students, one is a Masters candidate, and the last one is a postdoctoral researcher. Each annotator evaluated 40 sentence pairs with 50% overlap, to make sure every sentence was evaluated twice by different annotators. The detailed human evaluation scores for the two selected models are shown in Table 3 using the two designed criteria. Based on the cross-annotation (overlaps), we calculated the inter-rater agreement levels in Table 4 using Cohen's Kappa across models and in Table 5 using Krippendorff's Alpha with model-wise comparison. Both tables include the agreement levels on the two sub-categories, namely "meaning preservation" and "text simplicity". Because there is no overlap in the annotation tasks between annotators (0, 3) and annotators (1, 2), we list the agreement levels between the available pairs of annotations, i.e. (0, 1), (0, 2), (1, 3), and (2, 3).
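The agreement statistics reported below can be reproduced with standard libraries; the ratings in this sketch are toy values on our 5-point scale, not the actual annotations.

```python
# Sketch of the inter-rater agreement computation on toy Likert ratings.
import numpy as np
import krippendorff                      # pip install krippendorff
from sklearn.metrics import cohen_kappa_score

# 5-point Likert scores (-2 .. 2) from two annotators on the shared items
a0 = [2, 1, 0, -1, 2, 0]
a1 = [2, 0, 0, -1, 1, 1]

# Cohen's Kappa; in the paper it is applied to 3 ordinal categories
# (win / lose / tie) derived from comparing the two systems' scores
kappa = cohen_kappa_score(a0, a1)

# Krippendorff's Alpha over the raw ordinal Likert ratings
alpha = krippendorff.alpha(reliability_data=np.array([a0, a1]),
                           level_of_measurement="ordinal")
print(kappa, alpha)
```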
The Cohen's Kappa agreement levels presented in Table 4 show the inter-rater agreement on the performance order of the two systems, i.e. whether one system is better than the other or they tie. The Krippendorff's Alpha presented in Table 5 shows the annotation reliability over all 5-Likert options. Based on the results from Table 3, annotators show different preferences for the two systems. Despite the limited gap in the SARI score, BART with CTs shows a better capability to fulfil the simplification task. Yet regarding meaning preservation, the fine-tuned T5-base performs better, as the BERTScore comparisons in Table 1 also suggest. From Table 4, Annotators 0 and 1 have the highest agreement on the "meaning preservation" evaluation (score 0.583), while Annotators 1 and 3 have the highest agreement on the "simplicity" evaluation (score 0.238) across the two models. This also indicates that it is not easy to evaluate the system performances against each other regarding these two criteria.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline Models & BLEU & ROUGE-1 & ROUGE-2 & ROUGE-L & SARI & BERTScore \\ \hline T5 Small & **49.86** & **65.94** & **48.60** & **63.94** & 33.38 & 69.58 \\ T5 Base & 43.92 & 64.36 & 46.07 & 61.63 & 44.10 & **72.62** \\ T5 Large & 43.52 & 64.27 & 46.01 & 61.53 & _43.70_ & 60.39 \\ FLAN-T5 XL (LoRA) & 44.54 & 63.16 & 45.06 & 60.53 & 43.47 & 67.94 \\ \hline SciFive Base & 44.91 & 64.67 & 46.45 & 61.89 & 44.27 & 60.86 \\ SciFive Large & 44.12 & 64.32 & 46.21 & 61.41 & _44.38_ & 72.59 \\ \hline BART Base with CTs & 21.52 & 56.14 & 35.22 & 52.38 & 46.52 & 50.53 \\ BART Large with CTs & 20.71 & 54.73 & 32.64 & 49.68 & **46.54** & 50.16 \\ \hline \end{tabular} \end{table} Table 1: Quantitative Evaluations of T5, SciFive and BART Models with Control Token (CT) mechanisms on the Extracted Testing Set. FLAN-T5 XL used LoRA.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline Models & BLEU & ROUGE-1 & ROUGE-2 & ROUGE-L & SARI & BERTScore \\ \hline GPT-3.5 & 20.97 & 50.07 & 24.72 & 43.12 & 42.61 & 58.35 \\ GPT-4 & 19.50 & 48.36 & 23.34 & 42.38 & 43.22 & 46.99 \\ \hline BioGPT-Large (LoRA) & 41.36 & 63.21 & 46.63 & 61.56 & 18.44 & 62.9 \\ \hline \end{tabular} \end{table} Table 2: Quantitative Evaluations of GPTs and BioGPT-Large on the Extracted Testing Set.

Model-wise, Table 5 further shows that Annotators 0 and 1 agree more on the judgements of the fine-tuned T5 Base model on both criteria, "meaning preservation" (score 0.449) and "text simplicity" (score 0.386), in comparison to the BART model. On the fine-tuned BART Large model with CTs, these two annotators only agreed on the "meaning preservation" factor, with a score of 0.441. This phenomenon also applies to Annotators 0 and 2 regarding the judgement of these two models. Interestingly, while Table 4 shows that Annotators 1 and 3 have better agreement on the "simplicity" judgement than on "meaning preservation" using Cohen's Kappa, Table 5 shows the opposite using Krippendorff's Alpha, i.e. it indicates agreement on "meaning preservation" for these two annotators instead. This shows the difference between the two agreement and reliability measurement metrics.

### System Output Categorisation

We list some interesting aspects of the human evaluation findings by comparing the outputs of the two models. 1) How should we judge the two models when one almost copied the full text from the source while the other did simplify, but with introduced errors?
For example, the source text "_A national programme of neonatal screening for CAH would be justified, with reassessment after an agreed period._" is simplified by the BART-w-CTs model into "_A national program of checking newborns for COVID-19 would be a good idea._" However, "CAH" means "congenital adrenal hyperplasia", not "COVID-19". The T5-base model almost copied the source text, producing "_A national programme of newborn screening for CAH would be justified, with reassessment after an agreed period of time._" In this case, the T5-base model can get "strongly agree" (score 2) for meaning preservation but "strongly disagree" (score -2) for text simplicity, which would average to a score of 0. BART-w-CTs can get "strongly agree" for text simplicity (score 2), but a lower score on meaning preservation, e.g. -2. In this case, the two models would be attributed the same score. Note that, for system selection, it might be a better choice to look into the separate dimensions of how the models perform. 2) Abbreviations caused interpretation inaccuracies. This can be a common issue in the PLABA task. For instance, the source sentence "_A total of 157 consecutive patients underwent TKA (n = 18) or UKA (n = 139)._" is simplified by the T5-base model into "_A total of 157 consecutive patients underwent knee replacement or knee replacement._" and by BART-w-CTs into "_A total of 157 patients had either knee replacement or knee replacement surgery._" Both models produced the repeated phrase "knee replacement or knee replacement" due to a lack of understanding of "TKA: total knee arthroplasty" and "UKA: unicompartmental knee arthroplasty". A reasonable simplification here could be "157 patients had knee surgery."

\begin{table} \begin{tabular}{c c c} Annotator & Meaning preservation & Simplicity \\ \hline 0 \& 1 & 0.583 & 0.138 \\ 0 \& 2 & 0.238 & 0.126 \\ 1 \& 3 & 0.008 & 0.238 \\ 2 \& 3 & -0.130 & -0.014 \\ \hline \end{tabular} \end{table} Table 4: Cohen’s Kappa among annotators over 3 ordinal categories: win, lose, and tie.

\begin{table} \begin{tabular}{c l l l} \hline \hline Annot. & Model & Meaning Preservation & Simplicity \\ \hline 0 \& 1 & T5-base & 0.449 & 0.386 \\ & BART w CTs & 0.441 & 0.052 \\ 0 \& 2 & T5-base & 0.259 & 0.202 \\ & BART w CTs & 0.200 & 0.007 \\ 1 \& 3 & T5-base & 0.307 & 0.065 \\ & BART w CTs & -0.141 & -0.056 \\ 2 \& 3 & T5-base & -0.056 & 0.116 \\ & BART w CTs & 0.065 & -0.285 \\ \hline \hline \end{tabular} \end{table} Table 5: Krippendorff’s alpha among annotators (Annot.) over the 5-Likert scale from strongly agree to strongly disagree.

We list more categories below and refer to Table 7 for examples from the two evaluated models. * Both models simplified the abstract into exactly the same output * Both models produced hallucinations and similar outputs * Both models cut half the sentence/meaning, but at different parts * Both models cut complex sentences into multiple sentences, but BART adapts to lay language * BART uses lay language vs T5 does not * BART adapts using lay language, but cuts down some meaning * BART generates a simplification vs T5 generates nonsense * T5 does little simplification but maintains good meaning, while BART increases simplicity but loses some meaning
* T5 cuts meaning; BART does not, but maintains the same complexity as the abstract * BART shifts meaning

## 5 Discussion

### On Evaluation Metrics

Based on the results depicted in Figure 4, we observe that SARI stands out as a more reliable metric than BLEU and ROUGE-1/2 for assessing the quality of the generated simplifications. During the early training epochs, the model outputs closely resemble the input texts, which can lead SARI to assign lower scores than BLEU and ROUGE-1/2. This occurs because these metrics can be satisfied by the mirroring of n-grams between the inputs and the generated outputs. However, it is essential to acknowledge that, in the context of simplification, the generated output typically remains relatively close to the input. As a result, BLEU and ROUGE may exhibit consistent scores throughout the epochs and may not effectively evaluate the quality of the generated texts. In contrast, BERTScore offers a different perspective by focusing on meaning preservation after simplification instead of simplification quality. If the generated outputs are copies of the input texts, BERTScore may still yield high scores although the model in fact performs poorly. Therefore, we use the combination of both metrics -- SARI for evaluating generation quality and BERTScore for assessing meaning preservation -- in order to select the best-performing models.

### On Human Evaluations

Due to the varied backgrounds of the annotators and the lack of proper training material and methods, it is difficult to build a standardised scheme to help annotators choose from the 5-Likert options. This means that the definition of "Strongly Agree/Disagree" may vary among annotators. In addition, there is no method to normalise the scores to a unified level in order to avoid the effects caused by differences in subjective judgement. Thus, we evaluated the two systems simultaneously and allowed the annotators to decide on the performance order and the performance gap. To reflect the agreement on these two aspects, we calculated the inter-rater agreement with Cohen's Kappa and Krippendorff's alpha. As shown in Table 4, only annotators 0 and 1 show a decent agreement on meaning preservation, while the others show only limited agreement or even slight disagreement. Although human evaluation has long been the gold standard for evaluation tasks, there are few acknowledged works on a standard procedure to make it more explainable and comparable. In the future, a more standardised and unified process may be required.

### On Comparison to SOTA

Since there have been only a few studies on the PLABA dataset (Attal et al., 2023), and despite using a different dataset split from the original paper, we have reached higher performances in BLEU, ROUGE, and SARI scores, and acceptable BERTScores. In addition, we apply the automatic metrics to a 2-reference test set, which provides better reliability. Compared to the ASSET test set with 10 references, we tested 814 sentence pairs, focusing on a larger and more specialised dataset. We also inherited and improved the SOTA on ASSET and applied it to the PLABA dataset with the control mechanism to achieve a decent performance on the SARI score. Finally, we explored the potential of generative models with one-shot prompting only, which also showed competitive performances.
## 6 Conclusions and Future Work

In this work, we have carried out an investigation into using LLMs and control mechanisms for the text simplification task on biomedical abstracts using the PLABA data set. Both automatic evaluations using a broad range of metrics and human evaluations were conducted to assess the system outputs. As the automatic evaluation results show, both T5 and BART with Control Tokens demonstrated high accuracy in generating simplified versions of biomedical abstracts. However, when we delve into the human evaluations, it becomes clear that each model possesses its own strengths and trade-offs. T5 demonstrated strong performance at preserving the original abstract's meaning, but sometimes at the cost of insufficient simplification. By maintaining the core content and context of the input, it proved to be over-conservative in some cases, resulting in outputs that very closely resemble the inputs, therefore maintaining the abstract's complexity. On the other hand, BART demonstrated strong simplification performance, producing better-simplified versions. However, it showed a potential drawback in reduced preservation of the original meaning. This difference in approach is also reflected in the automatic metrics, where BART achieves the highest SARI score, quantifying generation quality, but lags behind T5 in BERTScore, which measures meaning preservation. In essence, while both models excel at generating simplified biomedical abstracts, T5 prioritises meaning preservation, while BART tends to favour more substantial simplifications. These distinctions are not only apparent in the human evaluations but also supported by the differences in the automatic metric scores. In future work, we plan to carry out investigations on more recent models, including BioBART (Yuan et al., 2022), and to try different prompting methods such as the work from Cui et al. (2023); designing a more detailed human evaluation with error severity levels, such as the scheme proposed by Gladkoff and Han (2022), might also shed some light on this task. Alternative efficient training and fine-tuning methods can be tested on this specific task, in comparison to LoRA, including Adapters and prefix-tuning (Li and Liang, 2021) and the methods applied by Gu et al. (2021); Lan et al. (2020).

### Limitations

In this work, we carried out an investigation using the PLABA data set. To compare the selected models more fairly, we will include other related testing data on biomedical text simplification tasks. How to increase the number of references for the PLABA data set can also be explored.

### Ethical Statement

We applied the publicly available data set PLABA (Attal et al., 2023a) for our experiments. While the models produced reasonable outputs for biomedical abstract simplification from our human evaluation perspective, we do not suggest that public audiences fully trust the automatic models for health advice at this stage.

### Author Contributions

ZL carried out the experiments on BART and Control Mechanisms, prepared the human evaluation data, measured the inter-rater agreement, and co-wrote Sections 3, 4 and 5. SB carried out the experiments on T5 and SciFive and co-wrote Sections 1, 3, 4, 5, and 6. NM carried out the experiments on GPTs and co-wrote Sections 3 and 4. LH supervised the project, wrote Sections 1 and 2, and co-wrote Sections 3, 4, and 6. Human evaluation by ZL, SB, NM, and LH. Abstract by SB and LH. MS and GN: co-supervisors; approved the manuscript.
## Acknowledgements

LH and GN are grateful for the grant support from EP/V047949/1 "Integrating hospital outpatient letters into the healthcare data space" (funder: UKRI/EPSRC). We thank XC for the valuable discussions.
2309.15673
Kähler-Yang-Mills Equations and Vortices
The K\"ahler-Yang-Mills equations are coupled equations for a K\"ahler metric on a compact complex manifold and a connection on a complex vector bundle over it. After briefly reviewing the main aspects of the geometry of the K\"ahler-Yang-Mills equations, we consider dimensional reductions of the equations related to vortices - solutions to certain Yang-Mills-Higgs equations.
Oscar García-Prada
2023-09-27T14:16:46Z
http://arxiv.org/abs/2309.15673v2
# Kahler-Yang-Mills equations and vortices

###### Abstract.

The Kahler-Yang-Mills equations are coupled equations for a Kahler metric on a compact complex manifold and a connection on a complex vector bundle over it. After briefly reviewing the main aspects of the geometry of the Kahler-Yang-Mills equations, we consider dimensional reductions of the equations related to vortices -- solutions to certain Yang-Mills-Higgs equations.

Partially supported by the Spanish Ministry of Science and Innovation, through the "Severo Ochoa Programme for Centres of Excellence in R&D (CEX2019-000904-S)".

and this action is Hamiltonian for a natural symplectic form \(\omega_{\mathscr{J}}\) on \(\mathscr{J}^{\,i}\). The moment map interpretation of the Hermite-Yang-Mills equation was first pointed out by Atiyah and Bott [7] for the case of Riemann surfaces and generalized by Donaldson [17] to higher dimensions. Here one considers the symplectic action of the _gauge group_ \(\mathscr{G}\) of the Hermitian bundle \((E,H)\) on the space of _unitary connections_ \(\mathscr{A}\) endowed with a natural symplectic form \(\omega_{\mathscr{A}}\). Relying on these two cases, the phase space for the Kahler-Yang-Mills theory is provided by the subspace of the product \(\mathscr{P}\subset\mathscr{J}^{\,i}\times\mathscr{A}\) defined by the additional integrability condition for a connection \(A\in\mathscr{A}\) given by the vanishing of the \((0,2)\)-part of its curvature. Our choice of symplectic structure is the restriction to \(\mathscr{P}\) of the symplectic form \(\omega_{\alpha}=\omega_{\mathscr{J}}+\frac{4\alpha}{(n-1)!}\omega_{\mathscr{A}}\), for a non-zero coupling constant \(\alpha\in\mathbb{R}\). Here \(n\) is the complex dimension of \(M\). Consider now the _extended gauge group_ \(\widetilde{\mathscr{G}}\), defined as the group of automorphisms of the Hermitian bundle \((E,H)\) covering Hamiltonian symplectomorphisms of \((M,\omega)\). This is a non-trivial extension \[1\to\mathscr{G}\to\widetilde{\mathscr{G}}\to\mathscr{H}\to 1, \tag{1.2}\] where \(\mathscr{G}\) is the group of automorphisms of \((E,H)\) covering the identity on \(M\) -- the usual gauge group --, and \(\mathscr{H}\), as above, is the group of Hamiltonian symplectomorphisms of \((M,\omega)\). The group \(\widetilde{\mathscr{G}}\) acts on \(\mathscr{P}\) in a Hamiltonian way for any value of the coupling constant \(\alpha\). In [1] the moment map \(\mu_{\alpha}\) is computed, and it is shown that its zero locus corresponds to solutions of (1.1). The coupling between the metric and the connection occurs as a direct consequence of the fact that the extension (1.2) defining the extended gauge group is non-trivial. It is worth pointing out that extended gauge groups feature in the paper by Bourguignon-Lawson [10], where they are referred to as enlarged gauge groups. In particular, they consider the _enlarged gauge group_ of a principal bundle \(P\) over a compact Riemannian manifold \(M\) given by \(1\to\mathscr{G}_{P}\to\widetilde{\mathscr{G}}_{P}\to\mathscr{I}_{M}\to 1\), where \(\mathscr{I}_{M}\) is the group of isometries of \(M\) when the dimension of \(M\) is different from \(4\) and the conformal group when the dimension is \(4\). As mentioned in [10], a connection on \(P\) determines a splitting of the sequence of vector spaces obtained by differentiating the above extension at the identity. 
This fact is also true in our set-up and plays a crucial role in the computation in [1] of the moment map for the action of \(\widetilde{\mathscr{G}}\) on \(\mathscr{P}\). It turns out that equations (1.1) decouple on a compact Riemann surface, due to the vanishing of the first Pontryagin term \(\operatorname{tr}F_{H}^{2}\), and so in this case the solution to the problem reduces to a combination of the uniformization theorem for Riemann surfaces and the theorem of Narasimhan and Seshadri [16, 34]. For an arbitrary higher-dimensional manifold \(M\), determining whether (1.1) admits solutions is a difficult problem, since in this case these equations are a system of coupled fourth-order fully non-linear partial differential equations. Despite this, a large class of examples was found in [1] for small \(\alpha\), by perturbing constant scalar curvature Kahler metrics and Hermite-Yang-Mills connections. More concrete and interesting solutions over a polarised threefold -- one that does not admit any constant scalar curvature Kahler metric -- were obtained by Keller and Tonnesen-Friedman [30]. Garcia-Fernandez and Tipler [22] added new examples to this short list by simultaneous deformation of the complex structures of \(M\) and \(E\). However, the problem of finding a general existence theorem for the Kahler-Yang-Mills equations remains largely open. One of the main motivations in [1] to study these equations was to find an analytic approach to the algebro-geometric problem of constructing a moduli space classifying pairs \((M,E)\) consisting of a complex projective variety and a holomorphic vector bundle. The stability condition needed for this should naturally be the one equivalent to the solvability of the Kahler-Yang-Mills equations. In [1], obstructions to the existence of solutions of (1.1) were studied, generalizing the _Futaki invariant_, the _Mabuchi \(K\)-energy_ and the _geodesic stability_ that appear in the constant scalar curvature theory. The natural conjecture proposed in [1] is that the existence of solutions of the Kahler-Yang-Mills equations is equivalent to geodesic stability. To test the above conjecture and provide a new class of interesting examples, inspired by [23], a series of papers [2, 3, 4, 5, 6] has considered dimensional reduction techniques. The simplest situation considered is given by the dimensional reduction of the Kahler-Yang-Mills equations from \(M=X\times\mathbb{P}^{1}\) to a compact Riemann surface \(X\) of genus \(g(X)\). Here \(\mathbb{P}^{1}\) is the Riemann sphere. In this case we consider SU(2) acting on \(M\), trivially on \(X\), and in the standard way on \(\mathbb{P}^{1}\). We take a holomorphic line bundle \(L\) over \(X\), and a holomorphic global section \(\phi\) of \(L\). The pair \((L,\phi)\) defines in a canonical way an SU(2)-equivariant holomorphic rank 2 vector bundle \(E\) over \(X\times\mathbb{P}^{1}\). One can show ([2]) that an SU(2)-invariant solution to the Kahler-Yang-Mills equations on the bundle \(E\) over \(X\times\mathbb{P}^{1}\) is equivalent to having a solution of the equations \[\begin{split} i\Lambda_{g}F_{h}+\frac{1}{2}(|\phi|_{h}^{2}-\tau) =0,\\ S_{g}+\alpha(\Delta_{g}+\tau)(|\phi|_{h}^{2}-\tau)=c,\end{split} \tag{1.3}\] for a Kahler metric \(g\) on \(X\) and a hermitian metric \(h\) on \(L\). 
Here \(F_{h}\) is the curvature of the Chern connection on \(L\) defined by \(h\), \(|\phi|_{h}\) is the pointwise norm of \(\phi\) with respect to \(h\), \(S_{g}\) is the scalar curvature of \(g\), and \(\Delta_{g}\) is the Laplacian of the metric on the surface acting on functions. The constant \(c\in\mathbb{R}\) is topological, and it can be obtained by integrating (1.3) over \(X\), and \(\tau\) is a real parameter. Equations (1.3) are referred to as the _gravitating vortex equations_ since, in fact, the first equation in (1.3) is the well-known vortex equation of the abelian Higgs model, whose solutions are called _vortices_ and have been extensively studied in the literature in the case of compact Riemann surfaces [11, 23, 24, 36] after the seminal work of Jaffe and Taubes [28, 37] on the Euclidean plane. In particular, the proof given by Bradlow [11] is based on the fact that the vortex equation can be reduced to the Kazdan-Warner equation [29] -- an equation that plays a prominent role in the problem studied by Bourguignon-Ezin [9]. It turns out that, when \(c=0\) and the degree \(d=c_{1}(L)>0\), \(X\) is constrained to be the Riemann sphere and the gravitating vortex equations have a physical interpretation, as they are equivalent to the Einstein-Bogomol'nyi equations on a Riemann surface [42, 44]. Solutions of the Einstein-Bogomol'nyi equations are known in the physics literature as _Nielsen-Olesen cosmic strings_ [35], and describe a special class of solutions of the abelian Higgs model coupled with gravity in four dimensions [15, 31, 32]. Unlike the cases of genus \(g(X)\geq 1\), in genus \(g(X)=0\) new phenomena arise, not appearing in the classical situation of constant curvature metrics on a surface. Namely, there exist obstructions to the existence of solutions of (1.3), illustrating the fact that for \(g(X)=0\) the problem of existence of solutions is comparatively closer to the more sophisticated problem of Calabi on the existence of Kahler-Einstein metrics, where algebro-geometric stability obstructions appear on compact Kahler manifolds with \(c_{1}>0\). After presenting the Kahler-Yang-Mills equations in Section 2, and reviewing in Section 3 the theorems on the existence of solutions to the gravitating vortex equations, in Section 4 we ponder on the existence of solutions for a non-abelian version of the gravitating vortex equations obtained also by dimensional reduction methods from the Kahler-Yang-Mills equations.

## 2. The Kahler-Yang-Mills equations

In this section, we briefly explain some basic facts from [1] about the Kahler-Yang-Mills equations, with emphasis on their symplectic interpretation. Throughout this section manifolds, bundles, metrics, and similar objects are of class \(C^{\infty}\). Let \(M\) be a compact symplectic manifold of dimension \(2n\), with symplectic form \(\omega\) and volume form \(\operatorname{vol}_{\omega}=\frac{\omega^{n}}{n!}\). Fix a complex vector bundle \(\pi\colon E\to M\) of rank \(r\), and a hermitian metric \(H\) on \(E\). Consider the positive definite inner product \[-\operatorname{tr}\colon\mathfrak{u}(r)\times\mathfrak{u}(r)\longrightarrow\mathbb{R}\] on \(\mathfrak{u}(r)\). Being invariant under the adjoint \(\operatorname{U}(r)\)-action, it induces a metric on the (adjoint) bundle \(\operatorname{ad}E_{H}\) of skew-hermitian endomorphisms of \((E,H)\). Let \(\Omega^{k}\) and \(\Omega^{k}(V)\) denote the spaces of (smooth) \(k\)-forms and \(V\)-valued \(k\)-forms on \(M\), respectively, for any vector bundle \(V\) over \(M\). 
Then the metric on \(\operatorname{ad}E_{H}\) extends to a pairing on the space \(\Omega^{\bullet}(\operatorname{ad}E_{H})\), \[\Omega^{p}(\operatorname{ad}E_{H})\times\Omega^{q}(\operatorname{ad}E_{H}) \longrightarrow\Omega^{p+q}, \tag{2.1}\] that will be denoted simply \(-\operatorname{tr}a_{p}\wedge a_{q}\), for \(a_{j}\in\Omega^{j}(\operatorname{ad}E_{H})\), \(j=p,q\). An almost complex structure on \(M\) compatible with \(\omega\) determines a metric on \(M\) and an operator \[\Lambda\colon\Omega^{p,q}\longrightarrow\Omega^{p-1,q-1} \tag{2.2}\] acting on the space \(\Omega^{p,q}\) of smooth \((p,q)\)-forms, given by the adjoint of the Lefschetz operator \(\Omega^{p-1,q-1}\to\Omega^{p,q}\colon\gamma\mapsto\gamma\wedge\omega\). It can be seen that \(\Lambda\) is symplectic, that is, it does not depend on the choice of almost complex structure on \(M\). Its linear extension to adjoint-bundle valued forms will also be denoted \(\Lambda\colon\Omega^{p,q}(\operatorname{ad}E_{H})\to\Omega^{p-1,q-1}( \operatorname{ad}E_{H})\). Let \(\mathscr{J}\) and \(\mathscr{A}\) be the spaces of almost complex structures on \(M\) compatible with \(\omega\) and unitary connections on \((E,H)\), respectively; their respective elements will usually be denoted \(J\) and \(A\). We now explain how the Kahler-Yang-Mills equations arise naturally in the construction of the symplectic quotient of a subspace \(\mathscr{P}\subset\mathscr{J}\times\mathscr{A}\) of 'integrable pairs'. The group of symmetries of this theory is the _extended gauge group_ \(\widetilde{\mathscr{G}}\). Let \(E_{H}\) be the principal \(\operatorname{U}(r)\)-bundle of unitary frames of \((E,H)\). Then, \(\widetilde{\mathscr{G}}\) is the group of automorphisms of \(E_{H}\) which cover elements of the group \(\mathscr{H}\) of Hamiltonian symplectomorphisms of \((M,\omega)\). There is a canonical short exact sequence of Lie groups \[1\to\mathscr{G}\longrightarrow\widetilde{\mathscr{G}}\stackrel{{ p}}{{ \longrightarrow}}\mathscr{H}\to 1, \tag{2.3}\] where \(p\) maps each \(g\in\widetilde{\mathscr{G}}\) into the Hamiltonian symplectomorphism \(p(g)\in\mathscr{H}\) that it covers, and so its kernel \(\mathscr{G}\) is the unitary gauge group of \((E,H)\), that is, the normal subgroup of \(\widetilde{\mathscr{G}}\) consisting of unitary automorphisms covering the identity map on \(M\). There are \(\widetilde{\mathscr{G}}\)-actions on \(\mathscr{J}\) and \(\mathscr{A}\), which combine to give an action on the product \(\mathscr{J}\times\mathscr{A}\), \[g(J,A)=(p(g)J,gA).\] Here, \(p(g)J\) denotes the push-forward of \(J\) by \(p(g)\). To define the \(\widetilde{\mathscr{G}}\)-action on \(\mathscr{A}\), we view the elements of \(\mathscr{A}\) as \(\operatorname{U}(r)\)-equivariant splittings \(A\colon TE_{H}\to VE_{H}\) of the short exact sequence \[0\to VE_{H}\longrightarrow TE_{H}\longrightarrow\pi^{*}TM\to 0, \tag{2.4}\] where \(VE_{H}\subset TE_{H}\) is the vertical bundle on \(E_{H}\). Then the \(\widetilde{\mathscr{G}}\)-action on \(\mathscr{A}\) is \[gA:=g\circ A\circ g^{-1},\] where \(g\colon TE_{H}\to TE_{H}\) denotes the induced action on the right-hand side. For each unitary connection \(A\), we write \(A^{\perp}y\) for the corresponding horizontal lift of a vector field \(y\) on \(M\) to a vector field on \(E_{H}\). 
Then each \(A\in\mathscr{A}\) determines a vector-space splitting of the Lie-algebra short exact sequence \[0\to\operatorname{Lie}\mathscr{G}\longrightarrow\operatorname{Lie}\widetilde{ \mathscr{G}}\stackrel{{ p}}{{\longrightarrow}}\operatorname{Lie} \mathscr{H}\to 0 \tag{2.5}\] associated to (2.3), because \(A^{\perp}\eta\in\operatorname{Lie}\widetilde{\mathscr{G}}\) for all \(\eta\in\operatorname{Lie}\mathscr{H}\). Note also that the equation \[\eta_{\varphi}\lrcorner\omega=d\varphi \tag{2.6}\] determines an isomorphism between the space \(\operatorname{Lie}\mathscr{H}\) of Hamiltonian vector fields on \(M\) and the space \(C_{0}^{\infty}(M,\omega)\) of smooth functions \(\varphi\) such that \(\int_{M}\varphi\operatorname{vol}_{\omega}=0\), where \(\operatorname{vol}_{\omega}:=\frac{\omega^{n}}{n!}\). The spaces \(\mathscr{J}\) and \(\mathscr{A}\) have \(\widetilde{\mathscr{G}}\)-invariant symplectic structures \(\omega_{\mathscr{J}}\) and \(\omega_{\mathscr{A}}\) induced by \(\omega\), which combine to define a symplectic form on \(\mathscr{J}\times\mathscr{A}\), for each non-zero real constant \(\alpha\), given by \[\omega_{\alpha}=\omega_{\mathscr{J}}+\frac{4\alpha}{(n-1)!}\omega_{\mathscr{ A}}. \tag{2.7}\] The following result provides the starting point for the theory of the Kahler-Yang-Mills equations. This result builds on the moment map interpretation of the constant scalar curvature equation for a Kahler metric, due to Fujiki [19] and Donaldson [18], and the classical result of Atiyah and Bott [7]. **Proposition 2.1** ([1]).: _The \(\widetilde{\mathscr{G}}\)-action on \((\mathscr{J}\times\mathscr{A},\omega_{\alpha})\) is Hamiltonian, with \(\widetilde{\mathscr{G}}\)-equivariant moment map \(\mu_{\alpha}\colon\mathscr{J}\times\mathscr{A}\to(\operatorname{Lie} \widetilde{\mathscr{G}})^{*}\) given by_ \[\langle\mu_{\alpha}(J,A),\zeta\rangle =4i\alpha\int_{M}\operatorname{tr}A\zeta\wedge(i\Lambda F_{A}- \lambda\operatorname{Id})\operatorname{vol}_{\omega} \tag{2.8}\] \[-\int_{M}\varphi\left(S_{J}-\alpha\Lambda^{2}\operatorname{tr}F_ {A}\wedge F_{A}-4i\lambda\alpha\Lambda\operatorname{tr}F_{A}\right) \operatorname{vol}_{\omega}\] _for any \(\zeta\in\operatorname{Lie}\widetilde{\mathscr{G}}\) covering \(\eta_{\varphi}\in\operatorname{Lie}\mathscr{H}\), with \(\varphi\in C_{0}^{\infty}(M,\omega)\)._ Here, \(F_{A}\) is the curvature of \(A\), \(\lambda\in\mathbb{R}\) is determined by the topology of the bundle and the cohomology class \([\omega]\in H^{2}(M,\mathbb{R})\), and \(S_{J}\) is the hermitian scalar curvature of \(J\). Explicitly, \[F_{A}=-A[A^{\perp}\cdot,A^{\perp}\cdot]\in\Omega^{2}(\operatorname{ad}E_{H}), \quad\lambda=\frac{2\pi nc_{1}(E)\cdot[\omega]^{n-1}}{r[\omega]^{n}},\] with the convention \(2\pi c_{1}(E)=[i\operatorname{tr}F_{A}]\). A key observation in [1, 20] is that the space \(\mathscr{J}\times\mathscr{A}\) has a (formally integrable) complex structure \(\mathbf{I}\) preserved by the \(\widetilde{\mathscr{G}}\)-action, given by \[\mathbf{I}|_{(J,A)}(\gamma,a)=(J\gamma,-a(J\cdot)),\text{ for }(\gamma,a) \in T_{J}\mathscr{J}\oplus T_{A}\mathscr{A}. \tag{2.9}\] For positive \(\alpha\), \(\mathbf{I}\) is compatible with the family of symplectic structures (2.7), and so it defines Kahler structures on \(\mathscr{J}\times\mathscr{A}\). The condition \(\alpha>0\) will be assumed in the sequel.
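As a consistency check on the value of \(\lambda\) quoted above: if \(i\Lambda F_{A}=\lambda\operatorname{Id}\), then taking traces, multiplying by \(\operatorname{vol}_{\omega}\) and integrating over \(M\), and using the identity \((\Lambda\beta)\frac{\omega^{n}}{n!}=\beta\wedge\frac{\omega^{n-1}}{(n-1)!}\) valid for any \(2\)-form \(\beta\), one finds \[\lambda\,r\,\frac{[\omega]^{n}}{n!}=\int_{M}i\operatorname{tr}F_{A}\wedge\frac{\omega^{n-1}}{(n-1)!}=\frac{2\pi\,c_{1}(E)\cdot[\omega]^{n-1}}{(n-1)!},\] which gives exactly \(\lambda=\frac{2\pi nc_{1}(E)\cdot[\omega]^{n-1}}{r[\omega]^{n}}\), under the convention \(2\pi c_{1}(E)=[i\operatorname{tr}F_{A}]\) stated above.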
Suppose now that there exist Kahler structures on \(M\) with Kahler form \(\omega\). This means that the subspace \(\mathscr{J}^{i}\subset\mathscr{J}\) of integrable almost complex structures compatible with \(\omega\) is not empty. For each \(J\in\mathscr{J}^{i}\), let \(\mathscr{A}^{1,1}_{J}\subset\mathscr{A}\) be the subspace of connections \(A\) with \(F_{A}\in\Omega^{1,1}_{J}(\operatorname{ad}E_{H})\), where \(\Omega^{p,q}_{J}\) is the space of \((p,q)\)-forms with respect to \(J\). Then the space of _integrable pairs_ \[\mathscr{P}\subset\mathscr{J}\times\mathscr{A}, \tag{2.10}\] consisting of elements \((J,A)\) with \(J\in\mathscr{J}^{i}\) and \(A\in\mathscr{A}^{1,1}_{J}\), is a \(\widetilde{\mathscr{G}}\)-invariant (possibly singular) Kahler submanifold. The zero locus of the induced moment map \(\mu_{\alpha}\) for the \(\widetilde{\mathscr{G}}\)-action on \(\mathscr{P}\) corresponds precisely to the solutions of the (coupled) _Kahler-Yang-Mills equations_ \[\begin{split} i\Lambda F_{A}&=\lambda\operatorname{ Id},\\ S_{J}\;-\;\alpha\Lambda^{2}\operatorname{tr}F_{A}\wedge F_{A}& =c.\end{split} \tag{2.11}\] Here, \(S_{J}\) is the scalar curvature of the metric \(g_{J}=\omega(\cdot,J\cdot)\) and the constant \(c\in\mathbb{R}\) depends on \(\alpha\), the cohomology class of \(\omega\) and the topology of \(M\) and \(E\) (see [1, Section 2]). One can express the Kahler-Yang-Mills equations from an alternative point of view in which we fix a compact complex manifold \(X\) of dimension \(n\), a Kahler class \(\Omega\in H^{1,1}(X)\) and a holomorphic vector bundle \(E\) over \(X\). Then these equations, for a fixed constant parameter \(\alpha\in\mathbb{R}\), are \[\begin{split} i\Lambda_{\omega}F_{H}&=\lambda \operatorname{Id},\\ S_{\omega}\;-\;\alpha\Lambda_{\omega}^{2}\operatorname{tr}F_{H} \wedge F_{H}&=c,\end{split} \tag{2.12}\] where the unknowns are a Kahler metric on \(X\) with Kahler form \(\omega\) in \(\Omega\), and a hermitian metric \(H\) on \(E\). In this case, \(F_{H}\) is the curvature of the Chern connection \(A_{H}\) of \(H\) on \(E\), and \(S_{\omega}\) is the scalar curvature of the Kahler metric. Note that the operator in (2.2) depends on \(\omega\), and the constant \(c\in\mathbb{R}\) depends on \(\alpha\), \(\Omega\) and the topology of \(X\) and \(E\).

## 3. The gravitating vortex equations

Let \(X\) be a compact connected Riemann surface of arbitrary genus. Let \(L\) be a holomorphic line bundle over \(X\) and \(\phi\in H^{0}(X,L)\) a holomorphic section of \(L\). We fix a parameter \(0<\tau\in\mathbb{R}\), a coupling constant \(\alpha\in\mathbb{R}\), and a real parameter \(c\). The _gravitating vortex equations_, for a Kahler metric on \(X\) with Kahler form \(\omega\) and a hermitian metric \(h\) on \(L\), are \[\begin{split} i\Lambda_{\omega}F_{h}+\frac{1}{2}(|\phi|_{h}^{2}- \tau)&=0,\\ S_{\omega}+\alpha(\Delta_{\omega}+\tau)(|\phi|_{h}^{2}-\tau)& =c.\end{split} \tag{3.1}\] Here, \(S_{\omega}\) is the scalar curvature of \(\omega\), \(F_{h}\) stands for the curvature of the Chern connection of \(h\), \(|\phi|_{h}^{2}\) is the smooth function on \(X\) given by the norm-square of \(\phi\) with respect to \(h\), and \(\Delta_{\omega}\) is the Laplace operator for the metric \(\omega\), defined by \[\Delta_{\omega}f=2i\Lambda_{\omega}\bar{\partial}\partial f,\qquad\text{ for }f\in C^{\infty}(X).\]
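The parameter \(c\) is in fact forced by integration. Integrating the first equation in (3.1) against \(\omega\) and using \(\int_{X}iF_{h}=2\pi c_{1}(L)\) gives \[\int_{X}(|\phi|_{h}^{2}-\tau)\,\omega=-4\pi c_{1}(L).\] Integrating the second equation, the Laplacian term drops out, and with the hermitian scalar curvature normalization \(\int_{X}S_{\omega}\,\omega=2\pi\chi(X)\) implicit in the formula (3.6) recorded below, one obtains \[c\operatorname{Vol}_{\omega}(X)=\int_{X}S_{\omega}\,\omega+\alpha\tau\int_{X}(|\phi|_{h}^{2}-\tau)\,\omega=2\pi\chi(X)-4\pi\alpha\tau c_{1}(L).\]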
We now show how to derive the gravitating vortex equations (3.1) as a dimensional reduction of the Kahler-Yang-Mills equations (2.12). To do this, we associate to \((X,L,\phi)\) a rank 2 holomorphic vector bundle \(E\) over \(X\times\mathbb{P}^{1}\). This is given as an extension \[0\to p^{*}L\longrightarrow E\longrightarrow q^{*}\mathcal{O}_{\mathbb{P}^{1}} (2)\to 0, \tag{3.2}\] where \(p\) and \(q\) are the projections from \(X\times\mathbb{P}^{1}\) to \(X\) and \(\mathbb{P}^{1}\) respectively. By \(\mathcal{O}_{\mathbb{P}^{1}}(2)\) we denote as usual the holomorphic line bundle with Chern class 2 on \(\mathbb{P}^{1}\), isomorphic to the holomorphic tangent bundle of \(\mathbb{P}^{1}\). Extensions as above are parametrized by \[H^{1}(X,p^{*}L\otimes q^{*}\mathcal{O}_{\mathbb{P}^{1}}(-2))\cong H^{0}(X,L) \otimes H^{1}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(-2))\cong H^{0}(X,L),\] and we choose \(E\) to be the extension determined by \(\phi\). Let \(\operatorname{SU}(2)\) act on \(X\times\mathbb{P}^{1}\), trivially on \(X\), and in the standard way on \(\mathbb{P}^{1}\cong\operatorname{SU}(2)/\operatorname{U}(1)\). This action can be lifted to \(E\), with trivial action on \(p^{*}L\) and the standard action on \(q^{*}\mathcal{O}_{\mathbb{P}^{1}}(2)\). Since the induced actions on \(H^{0}(X,L)\) and \(H^{1}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^{1}}(-2))\cong H^{0}(\mathbb{P}^ {1},\mathcal{O}_{\mathbb{P}^{1}})^{*}\cong\mathbb{C}\) are trivial, \(E\) is an \(\operatorname{SU}(2)\)-equivariant holomorphic vector bundle over \(X\times\mathbb{P}^{1}\). For \(\tau\in\mathbb{R}_{>0}\), consider the \(\operatorname{SU}(2)\)-invariant Kahler metric on \(X\times\mathbb{P}^{1}\) whose Kahler form is \[\omega_{\tau}=p^{*}\omega+\frac{4}{\tau}q^{*}\omega_{FS}, \tag{3.3}\] where \(\omega\) is a Kahler form on \(X\) and \(\omega_{FS}\) is the Fubini-Study metric on \(\mathbb{P}^{1}\), given in an affine coordinate by \[\omega_{FS}=\frac{idz\wedge d\overline{z}}{(1+|z|^{2})^{2}}\] and such that \(\int_{\mathbb{P}^{1}}\omega_{FS}=2\pi\). Assuming that the coupling constants \(\alpha\) in (3.1) and (2.12) coincide, we have the following [2]. **Proposition 3.1**.: _The triple \((X,L,\phi)\) admits a solution \((\omega,h)\) of the gravitating vortex equations (3.1) with parameter \(\tau\) if and only if \((X\times\mathbb{P}^{1},E)\) admits an \(\operatorname{SU}(2)\)-invariant solution of the Kahler-Yang-Mills equations (2.12) with Kahler form \(\omega_{\tau}=p^{*}\omega+\frac{4}{\tau}q^{*}\omega_{FS}\)._ For a fixed Kahler metric \(\omega\), the first equation in (3.1) corresponds to the abelian vortex equation \[i\Lambda_{\omega}F_{h}+\frac{1}{2}(|\phi|_{h}^{2}-\tau)=0, \tag{3.4}\] for a hermitian metric \(h\) on \(L\). In [36, 11, 23, 24] Noguchi, Bradlow and the author gave, independently and with different methods, a complete characterization of the existence of _abelian vortices_ on a compact Riemann surface, that is, of solutions of equation (3.4). **Theorem 3.2** ([11, 23, 24]).: _Assume that \(\phi\) is not identically zero. For every fixed Kahler form \(\omega\), there exists a unique solution \(h\) of the vortex equation (3.4) if and only if_ \[c_{1}(L)<\frac{\tau\operatorname{Vol}_{\omega}(X)}{4\pi}. \tag{3.5}\] Inspired by work of Witten [41] and Taubes [38], the method in [23] exploited the dimensional reduction of the Hermite-Yang-Mills equations from four to two dimensions, combined with the theorem of Donaldson, Uhlenbeck and Yau [17, 40]. 
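For comparison with Bradlow's approach mentioned in Section 1, here is a sketch of the reduction to the Kazdan-Warner equation: fix a background metric \(h_{0}\) on \(L\) with \(i\Lambda_{\omega}F_{h_{0}}=2\pi c_{1}(L)/\operatorname{Vol}_{\omega}(X)\) constant and write \(h=e^{2f}h_{0}\). Under the curvature convention \(F_{h}=F_{h_{0}}+2\bar{\partial}\partial f\), consistent with the definition of \(\Delta_{\omega}\) above, equation (3.4) becomes \[\Delta_{\omega}f+\frac{1}{2}e^{2f}|\phi|_{h_{0}}^{2}=\frac{\tau}{2}-\frac{2\pi c_{1}(L)}{\operatorname{Vol}_{\omega}(X)},\] a Kazdan-Warner type equation for the single function \(f\), whose right-hand side is positive precisely under the condition (3.5).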
The constant \(c\in\mathbb{R}\) is topological, and is explicitly given by \[c=\frac{2\pi(\chi(X)-2\alpha\tau c_{1}(L))}{\operatorname{Vol}_{\omega}(X)}, \tag{3.6}\] as can be deduced by integrating the equations. The gravitating vortex equations for \(\phi=0\) are equivalent to the condition that \(\omega\) be a constant scalar curvature Kahler metric on \(X\) and \(h\) be a Hermite-Einstein metric on \(L\). By the uniformization theorem for Riemann surfaces, the existence of these 'trivial solutions' reduces by Hodge theory to the condition \(c_{1}(L)=\tau\operatorname{Vol}_{\omega}(X)/4\pi\). Excluding this trivial case, the sign of \(c\) plays an important role in the existence problem for the gravitating vortex equations. The dependence of the gravitating vortex equations (3.1) on the topological constant \(c\) is better observed using a Kahler-Einstein type formulation. Using that \(X\) is compact, (3.1) reduces to a second-order system of PDE. To see this, we fix a constant scalar curvature metric \(\omega_{0}\) on \(X\) and the unique hermitian metric \(h_{0}\) on \(L\) with constant \(\Lambda_{\omega_{0}}F_{h_{0}}\), and apply a conformal change to \(h\) while changing \(\omega\) within its Kahler class. Equations (3.1) for \(\omega=\omega_{0}+dd^{c}v,h=e^{2f}h_{0}\), with \(v,f\in C^{\infty}(X)\), are equivalent to the following semi-linear system of partial differential equations (cf. [2, Lemma 4.3]) \[\begin{split}\Delta f+\frac{1}{2}(e^{2f}|\phi|^{2}-\tau)e^{4 \alpha\tau f-2\alpha e^{2f}|\phi|^{2}-2cv}&=-c_{1}(L),\\ \Delta v+e^{4\alpha\tau f-2\alpha e^{2f}|\phi|^{2}-2cv}& =1.\end{split} \tag{3.7}\] Here, \(\Delta\) is the Laplacian of the fixed metric \(\omega_{0}\), which is normalized to have volume \(2\pi\), and \(|\phi|\) is the pointwise norm with respect to the fixed metric \(h_{0}\) on \(L\). Note that \(\omega=(1-\Delta v)\omega_{0}\) implies \(1-\Delta v>0\), which is compatible with the last equation in (3.7). For \(c\geq 0\), the existence of gravitating vortices forces the topology of the surface to be that of the 2-sphere, because \(c_{1}(L)>0\) implies \(\chi(X)>0\) by (3.6). When \(c\) in (3.6) is zero, the gravitating vortex equations (3.1) turn out to be a system of partial differential equations that has been extensively studied in the physics literature, known as the _Einstein-Bogomol'nyi equations_. As observed by Yang [45, Section 1.2.1], the existence of solutions in this situation with \(\alpha>0\) constrains the topology of \(X\) to be the complex projective line (or 2-sphere) \(\mathbb{P}^{1}\), since \(c=0\) if and only if \[\chi(X)=2\alpha\tau c_{1}(L).\] We are assuming that \(\tau>0\) and \(c_{1}(L)>0\). In the case \(c=0\), for \(L=\mathcal{O}_{\mathbb{P}^{1}}(N)\) and \[e^{2u}=1-\Delta v\] the system (3.7) reduces to a single partial differential equation \[\Delta f+\frac{1}{2}e^{2u}(e^{2f}|\phi|^{2}-\tau)=-N, \tag{3.8}\] for a function \(f\in C^{\infty}(\mathbb{P}^{1})\), where \[u=2\alpha\tau f-\alpha e^{2f}|\phi|^{2}+c^{\prime},\] and \(c^{\prime}\) is a real constant that can be chosen at will. By studying the _Liouville-type equation_ (3.8) on \(\mathbb{P}^{1}\), Yang [45, 46] proved the existence of solutions of the Einstein-Bogomol'nyi equations under certain numerical conditions on the zeros of \(\phi\), to which he refers as a "technical restriction" [45, Section 1.3]. 
It turns out that these conditions have a precise algebro-geometric meaning in the context of Mumford's Geometric Invariant Theory (GIT) [33], as a consequence of the following result. **Proposition 3.3** ([33, Ch. 4, Proposition 4.1]).: _Consider the space of effective divisors on \(\mathbb{P}^{1}\) with its canonical linearised \(\operatorname{SL}(2,\mathbb{C})\)-action. Let \(D=\sum_{j}n_{j}p_{j}\) be an effective divisor, for finitely many different points \(p_{j}\in\mathbb{P}^{1}\) and integers \(n_{j}>0\) such that \(N=\sum_{j}n_{j}\). Then_ 1. \(D\) _is stable if and only if_ \(n_{j}<\frac{N}{2}\) _for all_ \(j\)_._ 2. \(D\) _is strictly polystable if and only if_ \(D=\frac{N}{2}p_{1}+\frac{N}{2}p_{2}\)_, where_ \(p_{1}\neq p_{2}\) _and_ \(N\) _is even._ 3. \(D\) _is unstable if and only if there exists_ \(p_{j}\in D\) _such that_ \(n_{j}>\frac{N}{2}\)_._ Using Proposition 3.3, Yang's existence theorem has the following reformulation, where "GIT polystable" means that either condition (1) or (2) of Proposition 3.3 is satisfied, and \[D=\sum_{j}n_{j}p_{j}\] is the effective divisor on \(\mathbb{P}^{1}\) corresponding to a pair \((L,\phi)\), with \(N=\sum_{j}n_{j}=c_{1}(L)\). **Theorem 3.4** (Yang's Existence Theorem).: _Assume that \(\alpha>0\) and that (3.5) holds. Then, there exists a solution of the Einstein-Bogomol'nyi equations on \((\mathbb{P}^{1},L,\phi)\) if \(D\) is GIT polystable for the linearised \(\operatorname{SL}(2,\mathbb{C})\)-action on the space of effective divisors._ The converse to Theorem 3.4 is given in [4, 6]. **Theorem 3.5**.: _If \((\mathbb{P}^{1},L,\phi)\) admits a solution of the gravitating vortex equations with \(\alpha>0\), then (3.5) holds and the divisor \(D\) is polystable for the \(\operatorname{SL}(2,\mathbb{C})\)-action._ _Remark 3.6_.: Notice that this theorem is more general than being a converse to Theorem 3.4, since it does not assume that \(c=0\), and deals with the general gravitating vortex equations (3.1) and not just with the Einstein-Bogomol'nyi equations. Combining now Theorems 3.4 and 3.5, we obtain a correspondence theorem for the Einstein-Bogomol'nyi equations. **Theorem 3.7**.: _A triple \((\mathbb{P}^{1},L,\phi)\) with \(\phi\neq 0\) admits a solution of the Einstein-Bogomol'nyi equations with \(\alpha>0\) if and only if (3.5) holds and the divisor \(D\) is polystable for the \(\operatorname{SL}(2,\mathbb{C})\)-action._ Another result, conjectured by Yang and proved in [4], is the following. **Theorem 3.8**.: _There is no solution of the Einstein-Bogomol'nyi equations for \(N\) strings superimposed at a single point, that is, when \(D=Np\)._ An existence theorem for the gravitating vortex equations (3.1) with \(c>0\) for a triple \((\mathbb{P}^{1},L,\phi)\) with \(\phi\neq 0\), similar to Theorem 3.7, is obtained by combining Theorem 3.5 with the converse direction in this situation, proved by Garcia-Fernandez-Pingali-Yao [21]. In genus \(g(X)=1\), the gravitating vortex equations (3.1) (with \(\phi\neq 0\)) always have a solution in the weak coupling limit \(0<\alpha\ll 1\) (see [2, Theorem 4.1] for a precise formulation), and it is an interesting open problem to find effective bounds on \(\alpha\) for which (3.1) admits solutions. Paper [4] also deals with the existence of solutions of (3.1) for surfaces of genus \(g\geq 2\), for which one has the following.
**Theorem 3.9**.: _Let \(X\) be a compact Riemann surface of genus \(g\geq 2\), and \(L\) a holomorphic line bundle over \(X\) of degree \(N>0\) equipped with a holomorphic section \(\phi\neq 0\). Let \(\tau\) be a real constant such that \(0<N<\tau/2\). Define_ \[\alpha_{*}:=\frac{2g-2}{2\tau(\tau/2-N)}>0. \tag{3.9}\] _Then, the set of \(\alpha\) for which (3.1) has smooth solutions of volume \(2\pi\) is open and contains the closed interval \([0,\alpha_{*}]\). Furthermore, the solution is unique for \(\alpha\in[0,\alpha_{*}]\)._ This should be compared with the classical uniformization theorem, which establishes that a compact Riemann surface admits a metric of constant curvature with fixed volume, unique up to biholomorphisms. The proof of Theorem 3.9 involves the continuity method, where openness is proven using the moment-map interpretation, while closedness needs _a priori_ estimates as usual. The hardest part is the \(C^{0}\) estimate, and it is for this estimate that \(\alpha\) must not be too large. With these estimates at hand, uniqueness is proved by adapting an argument of Bando and Mabuchi in the Kahler-Einstein situation [8]. An interesting open question is to determine the largest value of \(\alpha\) for which solutions exist. Notice that in the _dissolving limit_ \(\tau\to 2N\) of the vortex we have \(\phi\to 0\) (see [23]), and \(\alpha_{*}\) in (3.9) becomes arbitrarily large. ## 4. Non-abelian gravitating vortices One can consider the dimensional reduction of the Kahler-Yang-Mills equations for higher rank \(\operatorname{SU}(2)\)-equivariant bundles on \(X\times\mathbb{P}^{1}\), where \(X\) is a compact Riemann surface. In particular one can consider extensions of the form \[0\to p^{*}E_{1}\longrightarrow E\longrightarrow p^{*}E_{2}\otimes q^{*} \mathcal{O}_{\mathbb{P}^{1}}(2)\to 0, \tag{4.1}\] where \(E_{1}\) and \(E_{2}\) are holomorphic vector bundles on \(X\) and, as above, \(p\) and \(q\) are the projections from \(X\times\mathbb{P}^{1}\) to \(X\) and \(\mathbb{P}^{1}\) respectively. Extensions of the form (4.1) are in one-to-one correspondence with triples \(T=(E_{1},E_{2},\phi)\), where \(\phi\) is a sheaf homomorphism from \(E_{2}\) to \(E_{1}\), that is, an element of \(H^{0}(X,\operatorname{Hom}(E_{2},E_{1}))\). As in the case of (3.2), these extensions define \(\operatorname{SU}(2)\)-equivariant (in fact, \(\operatorname{SL}(2,\mathbb{C})\)-equivariant) vector bundles over \(X\times\mathbb{P}^{1}\). Given a triple \(T=(E_{1},E_{2},\phi)\) over \(X\) we can consider the _gravitating coupled vortex equations_ for a metric on \(X\) with Kahler form \(\omega\), and Hermitian metrics \(h_{1}\) and \(h_{2}\) on \(E_{1}\) and \(E_{2}\), respectively, given by \[i\Lambda_{\omega}F_{h_{1}}+\phi\phi^{*} =\tau_{1},\] \[i\Lambda_{\omega}F_{h_{2}}-\phi^{*}\phi =\tau_{2}, \tag{4.2}\] \[S_{\omega}+\alpha\Delta_{\omega}|\phi|^{2}-\alpha(\Lambda_{\omega }^{2}\operatorname{tr}F_{h_{1}}^{2}+\Lambda_{\omega}^{2}\operatorname{tr}F_{h_ {2}}^{2}+4\tau_{1}\operatorname{tr}(i\Lambda_{\omega}F_{h_{1}})+4\tau_{2} \operatorname{tr}(i\Lambda_{\omega}F_{h_{2}})) =c.\] See Section 3 for the notations. Here \(\tau_{1}\) and \(\tau_{2}\) are real parameters (of which a certain linear combination is linked to the Chern classes of \(E_{1}\) and \(E_{2}\)), so that \(\tau_{1}-\tau_{2}>0\), \(\alpha\in\mathbb{R}\) is a coupling constant, and \(c\in\mathbb{R}\) depends on \(\alpha\), \(\tau_{1}-\tau_{2}\), and the topology of \(X\), \(E_{1}\) and \(E_{2}\).
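In passing, the divisor conditions of Proposition 3.3, which govern Theorems 3.4-3.8 and reappear through Remark 4.5 below, are elementary to check. A minimal sketch (the function and the sample divisors are ours, for illustration only):

```python
from fractions import Fraction

def classify_divisor(mults):
    """Classify D = sum_j n_j p_j on P^1 (n_j > 0 at distinct points p_j)
    under the linearised SL(2,C)-action, following Proposition 3.3."""
    N = sum(mults)
    half = Fraction(N, 2)
    if all(n < half for n in mults):
        return "stable"
    if len(mults) == 2 and mults[0] == mults[1]:
        return "strictly polystable"      # D = (N/2) p_1 + (N/2) p_2, N even
    if any(n > half for n in mults):
        return "unstable"
    return "semistable, not polystable"   # one n_j = N/2, rest spread over >= 2 points

print(classify_divisor([4]))        # D = N p of Theorem 3.8: 'unstable'
print(classify_divisor([2, 2]))     # 'strictly polystable'
print(classify_divisor([1, 1, 1]))  # 'stable'
```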
_Remark 4.1_.: When referring to the gravitating coupled vortex equations we are of course assuming that at least one of the two vector bundles has rank bigger than one. The case in which both vector bundles have rank one can be reduced to the study of the abelian gravitating vortex equation, as shown in [1]. Let \(\sigma:=\tau_{1}-\tau_{2}\), and consider now the \(\operatorname{SU}(2)\)-equivariant Kahler form \(\omega_{\sigma}=\sigma p^{*}\omega+q^{*}\omega_{FS}\), where \(\omega\) is a Kahler form on \(X\) and \(\omega_{FS}\) is the Fubini-Study metric on \(\mathbb{P}^{1}\) (see Section 3). Similarly to Proposition 3.1, one has the following [3]. **Proposition 4.2**.: _Let \(T=(E_{1},E_{2},\phi)\) be a triple over \(X\). The pair \((X,T)\) admits a solution \((\omega,h_{1},h_{2})\) of the gravitating coupled vortex equations (4.2) if and only if \((X\times\mathbb{P}^{1},E)\) admits an \(\operatorname{SU}(2)\)-invariant solution of the Kahler-Yang-Mills equations (2.12) with Kahler form \(\omega_{\sigma}\)._ For a fixed Kahler metric \(\omega\), the first two equations in (4.2) are the _coupled vortex equations_ introduced in [25], where it was shown that they are a dimensional reduction of the Hermite-Yang-Mills equations. An existence theorem for the coupled vortex equations was given in [12] in terms of a certain notion of stability for the triple \(T\) depending on the parameter \(\sigma\). To define this concept, let \(T=(E_{1},E_{2},\phi)\) and \(T^{\prime}=(E_{1}^{\prime},E_{2}^{\prime},\phi^{\prime})\) be two triples on \(X\). A homomorphism from \(T^{\prime}\) to \(T\) is a commutative diagram \[\begin{CD}E^{\prime}_{2}@>{\phi^{\prime}}>{}>E^{\prime}_{1}\\ @V{}V{}V@V{}V{}V\\ E_{2}@>{\phi}>{}>E_{1},\end{CD}\] where the vertical arrows are holomorphic maps. A triple \(T^{\prime}=(E^{\prime}_{1},E^{\prime}_{2},\phi^{\prime})\) is a subtriple of \(T=(E_{1},E_{2},\phi)\) if the sheaf homomorphisms \(E^{\prime}_{1}\to E_{1}\) and \(E^{\prime}_{2}\to E_{2}\) are injective. A subtriple \(T^{\prime}\subset T\) is called _proper_ if \(T^{\prime}\neq 0\) and \(T^{\prime}\neq T\). For any \(\sigma\in\mathbb{R}\) the \(\sigma\)_-degree_ and \(\sigma\)_-slope_ of \(T\) are defined to be \[\deg_{\sigma}(T) =\deg(E_{1})+\deg(E_{2})+\sigma\operatorname{rk}(E_{2}),\] \[\mu_{\sigma}(T) =\frac{\deg_{\sigma}(T)}{\operatorname{rk}(E_{1})+\operatorname{ rk}(E_{2})}\] \[=\mu(E_{1}\oplus E_{2})+\sigma\frac{\operatorname{rk}(E_{2})}{ \operatorname{rk}(E_{1})+\operatorname{rk}(E_{2})},\] where \(\deg(E)\), \(\operatorname{rk}(E)\) and \(\mu(E)=\deg(E)/\operatorname{rk}(E)\) are the degree, rank and slope of \(E\), respectively. We say \(T=(E_{1},E_{2},\phi)\) is \(\sigma\)_-stable_ if \[\mu_{\sigma}(T^{\prime})<\mu_{\sigma}(T)\] for any proper subtriple \(T^{\prime}=(E^{\prime}_{1},E^{\prime}_{2},\phi^{\prime})\). We define \(\sigma\)_-semistability_ by replacing the above strict inequality with a weak inequality. A triple is called \(\sigma\)_-polystable_ if it is the direct sum of \(\sigma\)-stable triples of the same \(\sigma\)-slope. We denote by \[\mathcal{M}_{\sigma}=\mathcal{M}_{\sigma}(n_{1},n_{2},d_{1},d_{2})\] the moduli space of \(\sigma\)-polystable triples \(T=(E_{1},E_{2},\phi)\) which have \(\operatorname{rk}(E_{i})=n_{i}\) and \(\deg(E_{i})=d_{i}\) for \(i=1,2\). There are certain necessary conditions in order for \(\sigma\)-semistable triples to exist. Let \(\mu_{i}=d_{i}/n_{i}\) for \(i=1,2\).
We define \[\sigma_{m}=\mu_{1}-\mu_{2}, \tag{4.3}\] \[\sigma_{M}=\left(1+\frac{n_{1}+n_{2}}{|n_{1}-n_{2}|}\right)(\mu_{1}-\mu_{2}),\qquad n_{1}\neq n_{2}. \tag{4.4}\] **Proposition 4.3**.: _[_12_, Theorem 6.1]_ _The moduli space \(\mathcal{M}_{\sigma}(n_{1},n_{2},d_{1},d_{2})\) is a complex analytic variety, which is projective when \(\sigma\) is rational. A necessary condition for \(\mathcal{M}_{\sigma}(n_{1},n_{2},d_{1},d_{2})\) to be non-empty is_ \[\begin{array}{l}0\leq\sigma_{m}\leq\sigma\leq\sigma_{M}\quad\text{if}\quad n _{1}\neq n_{2}\text{,}\\ 0\leq\sigma_{m}\leq\sigma\quad\text{if}\quad n_{1}=n_{2}.\end{array}\] The non-emptiness and topology of these moduli spaces have been studied in [13, 27, 26]. Triples play an important role in the study of the moduli space of Higgs bundles and character varieties of the fundamental group of \(X\) [13, 14]. The study of the existence of solutions to the gravitating coupled vortex equations (4.2) in the higher rank case is entirely open, to the knowledge of the author. Solving this problem will most likely require new analytic and algebraic tools and techniques, which may turn out to be very useful for the study of the existence of solutions of the Kahler-Yang-Mills equations (2.11). A particularly interesting situation, as in the abelian case, is that in which \(X=\mathbb{P}^{1}\). In this situation one may conjecture the following. **Conjecture 4.4**.: _Let \(T=(E_{1},E_{2},\phi)\) be a triple over \(\mathbb{P}^{1}\). The pair \((\mathbb{P}^{1},T)\) admits a solution to the gravitating coupled vortex equations (4.2) if and only if \(T\) is \(\sigma\)-polystable and the point \(T\in\mathcal{M}_{\sigma}\) is GIT polystable for the natural action of \(\operatorname{SL}(2,\mathbb{C})\) on \(\mathcal{M}_{\sigma}\) induced by the action of \(\operatorname{SL}(2,\mathbb{C})\) on \(\mathbb{P}^{1}\)._ _Remark 4.5_.: When \(n_{1}=n_{2}=1\), \(\sigma\)-stability of the triple \(T\) reduces to the condition (3.5), where \(L=E_{1}\otimes E_{2}^{*}\) and \(\sigma\) is essentially the inverse of \(\tau\). Then the proof of this conjecture reduces to Theorem 3.7. _Acknowledgements_. The author thanks his co-authors on the various subjects treated in this paper. These include: Luis Alvarez-Consul, Steven Bradlow, Mario Garcia-Fernandez, Vamsi Pingali and Chengjian Yao. He also thanks the IHES for its hospitality and support.
2306.17839
Classical benchmarking of zero noise extrapolation beyond the exactly-verifiable regime
In a recent work a quantum error mitigation protocol was applied to the expectation values obtained from circuits on the IBM Eagle quantum processor with up to $127$ qubits and up to $60$ CNOT layers. To benchmark the efficacy of this quantum protocol, a physically motivated family of quantum circuits was considered that allows access to exact solutions in different regimes. The family interpolates between Clifford circuits and was additionally evaluated at low depth, where exact validation is practical. It was observed that for highly entangling parameter regimes the circuits are beyond the validation of matrix product state and isometric tensor network state approximation methods. Here we compare the experimental results to matrix product operator simulations of the Heisenberg evolution, and find that they provide a closer approximation than these pure-state methods by exploiting the closeness to Clifford circuits and the limited operator growth. Recently, other approximation methods have been used to simulate the full circuit up to its largest extent. We observe a discrepancy of up to $20\%$ among the different classical approaches so far, an uncertainty comparable to the bootstrapped error bars of the experiment. Based on the different approximation schemes we propose modifications to the original circuit family that challenge the particular classical methods discussed here.
Sajant Anand, Kristan Temme, Abhinav Kandala, Michael Zaletel
2023-06-30T17:57:26Z
http://arxiv.org/abs/2306.17839v1
# Classical benchmarking of zero noise extrapolation beyond the exactly-verifiable regime

###### Abstract

In a recent work a quantum error mitigation protocol was applied to the expectation values obtained from circuits on the IBM Eagle quantum processor with up to 127 qubits and up to 60 CNOT layers. To benchmark the efficacy of this quantum protocol, a physically motivated family of quantum circuits was considered that allows access to exact solutions in different regimes. The family interpolates between Clifford circuits and was additionally evaluated at low depth, where exact validation is practical. It was observed that for highly entangling parameter regimes the circuits are beyond the validation of matrix product state and isometric tensor network state approximation methods. Here we compare the experimental results to matrix product operator simulations of the Heisenberg evolution, and find that they provide a closer approximation than these pure-state methods by exploiting the closeness to Clifford circuits and the limited operator growth. Recently, other approximation methods have been used to simulate the full circuit up to its largest extent. We observe a discrepancy of up to 20% among the different classical approaches so far, an uncertainty comparable to the bootstrapped error bars of the experiment. Based on the different approximation schemes we propose modifications to the original circuit family that challenge the particular classical methods discussed here.

Quantum error mitigation (QEM) has been proposed as a method for extending the reach of near-term quantum hardware before the implementation of quantum error correction [1; 2]. While quantum error correction is widely expected to be necessary for implementing generic quantum algorithms, the overhead is currently prohibitive. QEM instead compensates for the effect of errors by engineering the noise in a manner which allows postprocessing to increase the accuracy of expectation values. In a recent work [3], we reported results benchmarking the efficacy of an experimentally feasible QEM variant, "zero noise extrapolation" (ZNE) [1; 2; 7], for estimating observables using error-prone quantum hardware. To do so, we considered a quantum circuit describing the discretized dynamics of the 2D transverse field Ising model (TFI) (i.e., the "kicked" transverse Ising model [8; 9]). Each round of the circuit takes the form \[U(\theta_{J},\theta_{h}) = \prod_{\langle i,j\rangle}R^{ij}_{ZZ}(\theta_{J})\prod_{i}R^{i}_ {X}(\theta_{h}) \tag{1}\] \[= \prod_{\langle i,j\rangle}e^{-i\theta_{J}Z_{i}Z_{j}/2}\prod_{i}e ^{-i\theta_{h}X_{i}/2}\] Here \(\langle i,j\rangle\) runs over the bonds of IBM's "Eagle" 127-qubit heavy-hexagon lattice. The two-qubit angle \(\theta_{J}=-\pi/2\) was chosen to be Clifford so as to involve only a single CNOT gate, and we assume this value unless otherwise specified. At \(\theta_{h}=0,\pi/2\) the overall circuit dynamics are Clifford and therefore exactly calculable, while at intermediate \(\theta_{h}\) they are expected to be ergodic. In the reported experimental protocol, the system is first initialized in the state \(\otimes_{i=1}^{127}\,|\!\uparrow\rangle\) and evolved under \(D\) rounds of \(U\), at which point a variety of weight-1, 10 and 17 observables were estimated. Results were compared with exact classical calculations where available, as well as 1D and 2D tensor network simulations which approximate the evolution of the pure quantum state.
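For concreteness, one round (1) is simple to realize by brute force on a toy geometry. The sketch below is ours and uses a small open chain as a stand-in for the 127-qubit heavy-hexagon lattice, which is far beyond dense simulation:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n = 8                                    # toy chain; the device has 127 qubits
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def embed(i, op):
    """Single-site operator acting on qubit i of the n-qubit chain."""
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

def trotter_round(theta_J, theta_h):
    """One round of Eq. (1): R_ZZ(theta_J) on all bonds, then R_X(theta_h)."""
    U = np.eye(2**n, dtype=complex)
    for i in range(n - 1):               # nearest-neighbor bonds of the chain
        U = expm(-0.5j * theta_J * embed(i, Z) @ embed(i + 1, Z)) @ U
    for i in range(n):
        U = expm(-0.5j * theta_h * embed(i, X)) @ U
    return U

U = trotter_round(theta_J=-np.pi / 2, theta_h=0.7)
psi = np.zeros(2**n, dtype=complex)
psi[0] = 1.0                             # |up ... up>
for _ in range(5):                       # depth D = 5
    psi = U @ psi
print("<Z_mid> =", (psi.conj() @ embed(n // 2, Z) @ psi).real)
```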
Such tensor network approximations of the pure state are expected to break down at large depths due to the growth of quantum entanglement, which is particularly rapid at \(\theta_{h}=\pi/2\). Experimentally, ZNE was found to provide a large improvement in the accuracy of expectation values compared with unmitigated results. We found: 1. In the regime where exact classical simulations can verify results (including circuits with a depth of 15 CNOT layers, in which the measured operator spread up to 68 sites), ZNE reproduces the exact results within the bootstrap error bars (Fig. 1). 2. Classical matrix-product state (MPS) and 2D isometric tensor-network (isoTNS) methods, which approximate the dynamics of the pure state, while accurate near \(\theta_{h}\sim 0\), struggle to accurately reproduce expectation values over the full range of \(\theta_{h}\) values. 3. In regimes beyond exact verification, ZNE produces results which are correct at both of the exactly solvable points \(\theta_{h}=0,\pi/2\), while the classical pure-state approaches fail badly near \(\theta_{h}\sim\pi/2\). The observation that ZNE could provide such reliable expectation values, at this scale, was argued to be evidence for the utility of pre-fault-tolerant quantum computers. The argument for utility is that one can use noisy quantum processors as a tool to reliably explore circuits and problems that will ultimately pose difficult challenges for classical simulation methods. It is therefore crucial to emphasize that Ref. [3] did not claim "quantum advantage" or any provable speedup, and it was left open whether other approximate classical methods might perform better than the pure-state methods. Indeed, it was suggested that approximating the Heisenberg evolution of the measured operators, rather than the quantum states themselves, was a promising future direction. In this work we report the results of such additional classical calculations. In contrast to the pure-state methods, we find that in the verifiable regime, matrix-product approximations of the Heisenberg evolution give excellent agreement with the exact results, and therefore with ZNE. Going beyond the verifiable regime, we find classical simulations remain in good agreement with ZNE even up to 20 Trotter steps (60 CNOT layers), and reproduce the exact results at \(\theta_{h}=\pi/2\). These results provide further evidence that ZNE produces accurate expectation values at scales orders of magnitude beyond both exact verification and the unmitigated results. At depth \(D=20\), there remains a range of \(\theta_{h}\sim\pi/4\) where Heisenberg simulations are not fully converged, and they do not precisely agree with results of the newly developed "BP-TNS" classical approximation recently reported in [4]; Clifford perturbation theory (CPT) reported in [6; 10]; and simulations on a smaller 31-qubit geometry reported in [5]. The various methods mutually disagree at about the 20% level, which happens to be comparable to the ZNE error bars which sit between them (see Fig. 1). The Heisenberg evolution is a fully controlled approximation, meaning it is in principle exact, but only in the limit of a bond dimension which scales exponentially in \(D^{2}\). BP-TNS, on the other hand, is fully converged at a more favorable bond dimension of \(2^{D}\), but makes uncontrolled approximations which make it inexact even in this limit. CPT is a perturbative expansion in \(\tan(\theta_{h})\) (truncated at order \(K=10\) in Ref. [6]), which is no longer a small parameter at the \(\theta_{h}\sim\pi/4\) point.
Smaller-size methods depend on the restricted growth of the operator, and ultimately will require a careful finite-size scaling analysis. Without additional study, it is not yet clear which of the methods is most accurate in this regime. We anticipate further improvements in classical methods or quantum hardware will prove fruitful here.

Figure 1: Comparison of classical approximations for \(\langle Z_{62}\rangle\) at Trotter depth \(D=20\) against experimental ZNE results: (1) matrix-product-state (MPS) representation of the pure state within a lightcone-reduced volume [3]; (2) extrapolation of the MPS results with respect to the estimated circuit fidelity (see Supp.); (3) Belief propagation tensor network states [4]; (4) MPO representation of Heisenberg evolution (this work); (5) simulation of a 31 qubit subset of the IBM Eagle device [5]; (6) Clifford perturbation theory [6]. The latter four methods differ by \(\sim 20\%\) amongst themselves near \(\theta_{h}\sim\pi/4\), an amount largely within the spread of the ZNE error bars. Without further calculations it is not clear which of these methods is most accurate.

## I Heisenberg-MPO method

We begin by describing the numerical method used to approximate the Heisenberg evolution of an observable, \(\mathcal{O}(D)=(U^{\dagger})^{D}\,\mathcal{O}\,U^{D}\). We take the standard approach of approximating \(\mathcal{O}(D)\) via a 1D matrix-product operator (MPO) representation [11; 12]. The MPO can be viewed as a vectorized MPS in a doubled Hilbert space of local bond dimension 4, as shown in Fig. 2(a). To map the 2D heavy-hex lattice to a 1D chain, sites are ordered according to the "snake" ordering of Ref. [3]. This particular ordering minimizes the number of long-range connections in the resulting MPS. As in the standard method for time-evolving 1D operators, we interleave unitary conjugation with variational matrix-product compression [13]. However, mapping the 2D lattice to the 1D chain introduces long-range couplings which require some care to implement efficiently. If a full round of \(U\) were applied at once, the MPO bond dimension would increase by a factor of 4096, making subsequent truncation intractable [14]. Instead, we decompose \(U\) into 13 layers, \(U=\prod_{r=1}^{13}U_{r}\), chosen such that the exact application \(O\to U_{r}^{\dagger}OU_{r}\) increases \(\chi\to 4\chi\) on the evolved bonds. As the two-qubit gates commute, the layers \(U_{r}\) can be chosen to distribute the gates evenly among the layers. To prevent blowup in \(\chi\), two-site variational matrix-product compression [13] is then applied between each layer. Simulations are conducted at fixed matrix-product bond dimension \(\chi\), leading to errors at long times. After \(D\) steps, we then compute \(\langle\mathcal{O}(D)\rangle=\langle\psi|\mathcal{O}(D)|\psi\rangle\) for \(|\psi\rangle=\otimes_{i=1}^{N}|\uparrow\rangle\), \(N=127\). Note that the evolved operator can be measured with any initial state, pure or otherwise. This is shown in Fig. 2(b) in the vectorized picture.

Figure 2: Evolution of operators using matrix product operators (MPO). (a) The 1D MPO representation of an operator can be viewed as a vectorized state in a larger Hilbert space of onsite dimension \(k=4\). (b) An operator expectation value is found by evolving the vectorized operator by \(U^{\dagger}\otimes U\), in this work represented by the product of 13 MPOs, each of bond dimension 4.
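The vectorized picture of Fig. 2(b) can be checked densely on a toy Hilbert space: with row-major vectorization, conjugation \(O\to U^{\dagger}OU\) acts linearly on \(\mathrm{vec}(O)\) as \(U^{\dagger}\otimes U^{T}\) (the \(U^{\dagger}\otimes U\) of the caption, up to vectorization convention). A standalone sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 16                                    # toy Hilbert-space dimension

# Random unitary U = exp(-iH) from a random Hermitian H.
H = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (H + H.conj().T) / 2
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

O = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
direct = U.conj().T @ O @ U               # Heisenberg conjugation
vectorized = (np.kron(U.conj().T, U.T) @ O.reshape(-1)).reshape(d, d)
print(np.allclose(direct, vectorized))    # True
```

In the MPO simulations the same map is applied not densely but through the 13 bond-dimension-4 MPO layers described above, with compression in between.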
Each Trotter step takes time \(N\chi^{3}\), for total complexity \(DN\chi^{3}\).

## II Benchmarking the Kicked Ising Circuits

We now consider the circuits of Ref. [3]. In Fig. 3, we show results for the weight-10 operator \(X_{\{3\}}Y_{\{2\}}Z_{\{5\}}=U(\pi/2)^{5}Z_{13}\left[U^{\dagger}(\pi/2)\right]^{5}\) (a) and the weight-17 operator \(X_{\{8\}}Y_{\{1\}}Z_{\{8\}}=U(\pi/2)^{5}Z_{58}\left[U^{\dagger}(\pi/2)\right]^{5}\) (b), both measured at circuit depth 5. Both operators trivially have expectation value 1 at the Clifford point \(\theta_{h}=\pi/2\) and 0 at \(\theta_{h}=0\). We find that the results from Heisenberg evolution of the measured operator agree to good precision with the exact results, and thus with the ZNE experimental results, across the full range of \(\theta_{h}\). In Appendix A, we quantitatively compare to the exact results and demonstrate agreement better than \(10^{-4}\) (often significantly better) for all \(\theta_{h}\). In fact, for the weight-10 operator in Fig. 3(a), a bond dimension of \(\chi=384\) is sufficient for exact simulation. Note that unlike for pure-state methods, which were considered previously in Ref. [3], the bond dimension needed at the non-trivial Clifford point \(\theta_{h}=\pi/2\) is 1, despite the circuit generating a volume-law entangled stabilizer state. This is because a Pauli operator remains a single Pauli string, albeit expanded in extent, throughout the evolution. Moving away from either Clifford point \(\theta_{h}=0,\,\pi/2\) has a perturbative effect on the Heisenberg evolution, as the evolved operator becomes a superposition of Pauli strings concentrated in the vicinity of the Pauli string resulting from Clifford evolution. Thus near either Clifford point, Heisenberg evolution performs well and needs minimal bond dimension to capture the dynamics. Having built confidence in Heisenberg MPO evolution to match the exact results where available, we move to circuits where an exact result is not available. To clarify, by exact we mean either a full contraction of the lightcone-reduced circuit, or MPS dynamics with an analytically known bond dimension that entails no truncation. In Fig. 3(c), we report results for the weight-17 operator \(X_{\{8\}}Y_{\{8\}}Z_{\{1\}}=\prod_{i}e^{-i\theta_{h}X_{i}/2}U(\pi/2)^{5}Z_{58 }\left[U^{\dagger}(\pi/2)\right]^{5}\prod_{i}e^{i\theta_{h}X_{i}/2}\) measured at circuit depth 5 with an additional layer of \(\prod_{i}e^{-i\theta_{h}X_{i}/2}\) gates at the end. The expectation value of this operator is entirely equivalent to the expectation value of \(U(\pi/2)^{6}Z_{58}\left[U^{\dagger}(\pi/2)\right]^{6}\) measured at circuit depth 6. We find good agreement between experiment and the Heisenberg results, and additionally recover the known answers at the two Clifford angles. Given the convergence in bond dimension and the reasonable circuit fidelities shown in Appendix A, this demonstrates the applicability of ZNE beyond the exactly verifiable regime.

Figure 3: Comparison of Heisenberg evolution against experimental ZNE results for (a) weight-10, (b) weight-17, and (c) weight-17 with modified circuit, all at depth 5. For the first two, the exact answer is available either by brute force or by lightcone-reduced MPS simulations, and Heisenberg evolution accurately matches it. For (c), where the exact answer is not available, Heisenberg and ZNE agree.

We next turn to the single-site magnetization \(\langle Z_{62}\rangle\) at depth 20, already shown in Fig. 1.
Before discussing the results, let us describe the 6 different numerical methods: (1) the Heisenberg MPO evolution described in Sec. I, (2) the pure-state MPS evolution shown in [3], (3) extrapolation of the pure-state MPS results with respect to the estimated fidelity \(F_{D}\to 1\) (see Appendix A), (4) the recently developed heavy-hexagon belief propagation TNS (BP-TNS) method [4], (5) Clifford Perturbation Theory [6; 10], and (6) simulations of a 31-qubit subset of the device [5]. Methods (1), (2), and (4) are run at a fixed bond dimension \(\chi\), and confidence in the results stems from apparent convergence in bond dimension, truncation error, and, for (1) and (2), the circuit fidelity. Method (5) was run at order \(K=10\). We find that all numerical methods agree broadly with the experimental ZNE results, yet differences are present across the range of single-qubit angles \(\theta_{h}\). We find very good agreement between all methods and experiment for \(\theta_{h}\leq\pi/8\). For \(\theta_{h}>\pi/8\), the experimental error bars fall in between the results from Heisenberg MPO evolution and BP-TNS evolution. As the MPO evolution is exact as \(\chi\to\infty\), a natural improvement would be to extrapolate the MPO results with respect to the estimated circuit fidelity, as we did for pure-state MPS. However, we often find the results are not monotonic in \(\chi\), as shown in Appendix A, making such extrapolation difficult. Further improvements in both the classical methods and experiment will be necessary to resolve the remaining discrepancy. Why do MPO calculations prove sufficient at this scale, despite the naive scaling \(\chi\sim k^{D^{2}}\)? To investigate, we characterize the operator growth in the Kicked Ising circuit, which governs the difficulty of MPO simulations. We first consider the operator entanglement entropy (OEE) of the Heisenberg-evolved \(Z_{62}(D)\). The OEE is defined by interpreting an operator as a wavefunction in a doubled Hilbert space [15] and computing the resulting entanglement entropy of a bipartition; we focus on the largest OEE across all bipartitions of the 1D ordering. Focusing on \(\theta_{h}=0.7\), we observe a quadratic growth with circuit depth, \(\sqrt{O_{EE}}\sim 0.11D\) (the plateau at large times is a numerical artifact of finite-\(\chi\) truncation; see Fig. 4(a)). This scaling is consistent with an operator growing over a disk of area \(\propto D^{2}\), but with a prefactor far below the maximal amount allowed by the lightcone (e.g., \(O_{EE}\leq 0.7D^{2}\) at \(D=7\)). A priori this may be either because the "butterfly velocity" \(v_{B}\), which governs the rate of operator spread, is less than that of the lightcone, or because the nearby Clifford point delays the growth in operator entanglement. In Fig. 4(c), we plot the out-of-time-ordered correlator (OTOC), defined as \(C(D,x)=\langle Z_{x}Z_{62}(D)Z_{x}Z_{62}(D)\rangle\), which is straightforward to evaluate once \(Z_{62}(D)\) is obtained in MPO form [16].
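On a small chain the same correlator can be evaluated densely. A toy sketch (our stand-in geometry; we take the expectation in the all-up initial state, an assumption of this sketch):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n, depth, m = 6, 3, 3                    # toy chain, circuit depth, operator site
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
embed = lambda i, op: reduce(np.kron, [op if j == i else I2 for j in range(n)])

U = np.eye(2**n, dtype=complex)          # one kicked-TFI round, Eq. (1)
for i in range(n - 1):
    U = expm(0.25j * np.pi * embed(i, Z) @ embed(i + 1, Z)) @ U   # R_ZZ(-pi/2)
for i in range(n):
    U = expm(-0.35j * embed(i, X)) @ U                            # R_X(0.7)

Zm = embed(m, Z)
for _ in range(depth):
    Zm = U.conj().T @ Zm @ U             # Heisenberg-evolved Z_m(D)

psi = np.zeros(2**n, dtype=complex); psi[0] = 1.0                 # |up ... up>
for x in range(n):
    C = psi.conj() @ (embed(x, Z) @ Zm @ embed(x, Z) @ Zm) @ psi
    print(f"x = {x}:  C(D={depth}, x) = {C.real:+.3f}")
```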
Figure 4: Operator spreading in the heavy-hex Kicked Ising dynamics. (a) Growth in operator entanglement entropy consistent with \(O_{EE}(D)\propto D^{2}\), as expected for an operator spreading over a disk. The plateau is an artifact of finite \(\chi\), which causes the circuit fidelity (b) to decay. Data is for the Kicked TFI with \(\theta_{J}=-\pi/2\), a non-Clifford version with \(\theta_{J}=-\pi/4\), and a modified \(\theta_{J}=-\pi/2\) Kicked TFI model with additional layers of \(R_{X}(\theta_{h})\) gates between each of the three layers of \(R_{ZZ}\) gates that constitute a Trotter step, so that the two-qubit gate layers become non-commuting. All models have \(\theta_{h}=0.7\). (c) Spatial profile of the out-of-time-ordered correlator \(C(7,x)=\langle Z_{x}Z_{62}(7)Z_{x}Z_{62}(7)\rangle\) for the experimental model, showing slow spread of the OTOC with circuit depth. The 54 sites in the lightcone at depth \(D=7\) are shown by the crosses.

At \(D=7\), we see that the OTOC signal is confined well within the lightcone, indicating \(v_{B}\) quite a bit below 1. This was also recently observed in Ref. [5], where it was exploited to efficiently simulate the Kicked Ising circuits with a reduced qubit count.

## III Comparison of operator methods with BP-TNS

Belief propagation tensor network states (BP-TNS) [4; 17] on the heavy-hexagon lattice were recently used to simulate the Kicked Ising experiments and were found to match the exact results where available and to provide good agreement with the ZNE results more generally [4]. The BP-TNS method shows great promise for approximating the local observables of moderate-depth dynamics, so we take a moment to consider some strengths and weaknesses relative to the operator-based approaches. BP-TNS is a two-dimensional (2D) tensor network (PEPS) ansatz in which the "environment" of each region (an input for the calculation of local expectation values) is approximated by assuming it takes the form of an uncorrelated tensor product across ancilla bonds. Under this approximation, which is uncontrolled, the environment can be self-consistently calculated in a procedure analogous to statistical "belief propagation" [17]. In contrast to 1D MPS / MPO methods, BP-TNS need not reproduce the exact results even as the bond dimension \(\chi\to\infty\), a point we will return to. Like other 2D TNS methods, BP-TNS naturally adapts to the 2D geometry of the heavy-hexagon lattice, avoiding the need of MPS/MPO algorithms to turn short-range 2D interactions into long-range 1D interactions. Additionally, approximating the environments as uncorrelated reduces the computational cost compared to more expressive 2D tensor network algorithms (e.g. PEPS / isoTNS) [18; 19], allowing BP-TNS to scale to much larger bond dimensions (\(\chi\sim 200\)) than is possible with standard 2D TNS (\(\chi\leq 12\)). This is crucial for the kicked TFI circuits near the \(\theta_{h}=\pi/2\) Clifford point, where the required bond dimension for an accurate 2D TNS representation scales as \(2^{D}\). In particular, at \(\theta_{h}=\pi/2\) the entanglement spectrum on any bond of the network, given by the diagonal \(\Lambda\) matrices of BP-TNS, has rank \(2^{D}\) and is _exactly_ flat. Thus any truncation of the bond dimension will immediately lead to noticeable errors in expectation values. The effect of BP-TNS bond truncation can be seen in Figure 5, where we use BP-TNS to measure a high-weight stabilizer \(\mathcal{S}_{D}\equiv U(\pi/2)^{D}Z_{i}\left[U^{\dagger}(\pi/2)\right]^{D}\) of the \(\theta_{h}=\pi/2\) circuit as a function of depth \(D\). In order to compare with brute-force exact results we consider the "two-hexagon" geometry with 21 qubits, shown in Figure 5(a), which retains the salient features of the 127-qubit geometry.
Using the "extended time-evolution" method proposed in Ref. [4], we measure \(\mathcal{S}_{D}\) by first evolving a circuit forward \(D\) steps with single-qubit angle \(\theta_{h}\) and then backward \(D\) steps with angle \(\pi/2\): \(|\psi\rangle=\left[U^{\dagger}(\pi/2)\right]^{D}U(\theta_{h})^{D}\left| \uparrow\right\rangle^{N}\). This allows for high-weight \(\mathcal{S}_{D}\) to be measured as a single-site observable: \(\langle\uparrow|\left[U^{\dagger}(\theta_{h})\right]^{D}\mathcal{S}_{D}U( \theta_{h})^{D}|\uparrow\rangle=\langle\psi|Z_{i}|\psi\rangle\), where \(i\) is shown as a red qubit in Figure 5(a). We consider up to depth \(D=10\) with fixed bond dimension \(\chi=128\) for both BP-TNS and Heisenberg evolution. In Figure 5(b), we demonstrate that as one approaches the Clifford point \(\theta_{h}=\pi/2\), using a BP-TNS bond dimension less than \(2^{D}\) leads to a catastrophic reduction in the expectation value. The difficulties of BP-TNS for large \(\theta_{h}\) occur for the same reason MPS and isoTNS pure-state simulations struggled in the original simulations of Ref. [3]: the evolved state is highly entangled. Heisenberg evolution, on the other hand, reproduce the dynamics of the \(\pi/2\) Clifford point, regardless of circuit depth \(D\), with bond dimension \(\chi=1\) (or CPT order \(K=0\)) as the evolved operator is a single Pauli string. Of course away from the Clifford points these methods also suffer from an eventual exponential blowup, but the difficulty is pushed out to higher depths. _The effect of loop-like entanglement._ A more interesting aspect of BP-TNS arises from its approximate treatment of the TNS "environment." Even if \(\chi=2^{D}\), the BP-TNS environment approximation becomes inex Figure 5: Expectation value of the depth-\(D\) Clifford stabilizer \(\mathcal{S}_{D}=U(\pi/2)^{D}Z_{i}\left[U^{\dagger}(\pi/2)\right]^{D}\) with respect to the depth-\(D\) circuit \(U(\pi/2)^{D}\). BP-TNS calculations are conducted at \(\chi=2^{T}=128\). We see that once \(\chi<2^{D}\) there is a catastrophic loss in accuracy, as expected for any pure-state TNS method. Heisenberg simulations remain highly accurate due to the proximity to the Clifford point, though still suffer from an exponential blowup more generally. act once the lightcone of any site encompasses a loop within the lattice. As pointed out in Ref. [4], BP-TNS is thus well suited to the heavy-hex lattice, whose shortest loop has length 12. For the kicked TFI circuit \(U(\theta_{h})\), BP-TNS is then exact up to depth \(D=6\) if bond dimension \(2^{6}=64\) is used, which happens to be depth of the circuits amenable to brute-force calculation where verification was conducted. Beyond \(D=6\), the BP-TNS makes uncontrolled approximations, and it is an interesting question whether the Kicked Ising circuit ever features the "loop-like" correlations which evades the BP-TNS approximation. At \(D=20\), it is at least quantitatively suggestive that near the \(\theta_{h}\sim\pi/4\) point, there is a 20% discrepancy in \(\langle Z_{62}\rangle\) between BP-TNS and the CPT / Heisenberg methods _even once extrapolating_ the BP-TNS \(\chi\to\infty\). This is even more clear near \(\theta_{h}=3\pi/8\), where ZNE, CPT, and Heisenberg all give \(\langle Z_{62}\rangle\sim 0\), while BP-TNS gives \(\langle Z_{62}\rangle\sim 0.05\). CPT and Heisenberg methods are expected to be quite accurate here. 
One fanciful way to illustrate the generation of loop-like entanglement in kicked-Ising dynamics is to conduct a many-body version of the two-slit experiment. To do so we consider a modified circuit in which the \(ZZ\)-gates are made non-Clifford: \(V=U(\theta_{J}=-\pi/4,\,\theta_{h}=\pi/2)=R_{ZZ}(-\pi/4)R_{X}(\pi/2)\). We then consider a state polarized along \(X=1\), flip a spin at site \(i\), and measure the change in \(X\)-magnetization at site \(j\): \[C_{ij}(D)=\langle\rightarrow^{N}\!\!|\,Z_{i}\left[V^{\dagger}\right]^{D}X_{j} V^{D}Z_{i}\!\mid\rightarrow^{N}\rangle\,. \tag{2}\] The dynamics cause the spin flip to propagate from \(i\to j\), albeit on top of a rapidly thermalizing background. Choosing \(i/j\) to lie on the green / red sites of Fig. 5(a), the dynamics of the spin flip effectively realize the two-slit experiment as the spin flip enters into a superposition of the top and bottom arms of the hexagon. In Fig. 6, we show the dynamics of the \(X\)-magnetization in the left 12-site hexagon for both the exact and BP-TNS approximation (\(\chi=128\), re-gauging with 15 "message passing" iterations). In the exact result, we see a wave front which propagates and constructively interferes when the top and bottom fronts collide on the "red" site. BP-TNS reproduces the exact results beautifully before the collision but fails to capture the interference phenomena as the fronts overlap. This is because a spin-flip in such a superposition is exactly the sort of entanglement neglected in the BP-TNS approximation: the influence of the top and bottom arms on the "red" site is assumed to factorize. This failure can be quantified by threading \(\pi\)-flux (e.g. flipping the sign of one \(ZZ\)-bond) through the hexagon and repeating the experiment. In the exact simulation, we see the wavefronts now interfere _destructively_; BP-TNS, in contrast, gives identical results for both fluxes. We do not expect such a simple picture to apply to deeper circuits; the double-slit example required a coherence length comparable to the length of the minimal loop. As the loop develops more connections to the surrounding geometry, or as the effective energy density increases due to the driving, we expect additional many-body degrees of freedom will obtain "which-path" information which destroys the interference, and the BP-TNS approximation may become more accurate.

## IV Circuit extensions

Having highlighted some strengths and weaknesses of both pure-state and operator evolution methods, we now propose extensions to the experiments that may lead to more challenging simulations, both classically and for ZNE. _Non-Clifford two-qubit gates_. Reducing the two-qubit angle away from the Clifford point, say \(\theta_{J}\to-\pi/4\), will increase the difficulty of the CPT and MPO-Heisenberg methods, as it increases the rate at which new Pauli strings are generated. In Fig. 4(a,b), we compute the \(O_{EE}\) and MPO fidelity at \(\theta_{J}=-\pi/4,\theta_{h}=0.7\).

Figure 6: Kicked-Ising (\(\theta_{J}=-\pi/4,\,\theta_{h}=\pi/2\)) realization of a many-body double-slit experiment. An initial \(x\)-polarized state is prepared with a spin flip on site 0 of a double-hexagon geometry (Fig. 5a), which then propagates under kicked-Ising dynamics as revealed by the site-resolved magnetization. In the exact result (bottom row), the magnetization density exhibits constructive / destructive interference on site 6, \(D\sim 10\), depending on whether \(\phi=0/\pi\) flux threads the left hexagon. Within the BP-TNS approximation (top row), there is no sign of interference: the result is entirely independent of the flux through the hexagon. This is because the BP-TNS approximation cannot capture the loop-like entanglement required by the double-slit experiment.
We see \(\sqrt{O_{EE}}\sim 0.18D\), almost twice the rate of the Clifford two-qubit case, and correspondingly the MPO fidelity drops much faster with gate depth. Note, however, that a non-Clifford two-qubit gate may make pure-state simulations such as BP-TNS easier, as such a gate is less entangling than one with \(\theta_{J}=-\pi/2\). _Non-commuting two-qubit gates._ \(R_{ZZ}\) is composed of three layers of disjoint \(ZZ\) gates. While naively each layer could advance the lightcone, because they commute, the lightcone grows by only one site in each direction per \(R_{ZZ}\). This simplification is obstructed if the two-qubit gates are non-commuting, or equivalently, by sprinkling the \(R_{X}\) gates amongst the three layers. In Fig. 4(a,b), we compute the \(O_{EE}\) and fidelity at \(\theta_{J}=-\pi/2,\theta_{h}=0.7\), performing a round of \(R_{X}\) gates after each layer of \(ZZ\) gates, thus tripling the number of single-qubit gates. In addition to increasing the size of the lightcone, we find that the OEE grows much more rapidly than for the commuting kicked TFI models considered (\(\sqrt{O_{EE}}\sim 0.33D\)), and the plateau imposed by finite bond dimension is much higher than for the commuting version, also with \(\theta_{J}=-\pi/2\). However, the region with nontrivial OTOC is still a fraction of the circuit lightcone, indicating \(v_{B}<1\) but higher than in the commuting case. Experimentally, this modification has the advantage of an identical two-qubit gate count. Of course at fixed \(\theta_{h}\), the final signal \(\langle Z\rangle\) depends on these modifications as well, and for ZNE to be performant it must be measurable. In the non-trotterized dynamics (\(\theta_{h},\theta_{J}\to 0\) holding \(\theta_{h}/\theta_{J}\) fixed), we expect a continuous phase transition tuned by \(\theta_{h}/\theta_{J}\), corresponding to finite-\(T\) equilibrium symmetry breaking of the Ising model. Even at finite Trotter step, the moderate-time dynamics will be affected by the transition before ultimate ergodicity, making this an interesting parameter regime for comparison of ZNE and classical approaches. _Echo experiments._ From the perspective of benchmarking ZNE, another future direction is to measure the analog of high-weight operators designed to ensure a large signal. Away from the Clifford point the high-weight operators are complex, and so can instead be measured according to \[Z_{i}(D|\theta,\theta^{\prime})\equiv\langle\text{CPS}|\left[U^{\dagger}(\theta^{\prime})\right]^{D}U(\theta)^{D}Z_{i}\left[U^{\dagger}(\theta)\right]^{D}U(\theta^{\prime})^{D}\left|\text{CPS}\right\rangle \tag{3}\] Such an experiment is essentially a Loschmidt echo, but restricting the return probability to a single site rather than the global fidelity.

## V Conclusion

We have simulated the circuits considered in Ref. [3], which we previously studied with pure-state methods, via matrix-product-compressed Heisenberg evolution of the operators. We match the exact answer where available and find general agreement with the ZNE experimental results. For the largest circuit depths considered, we find 10-20% disagreements among the recently reported classical methods [4; 5; 6].
The classical uncertainty appears to be within the bounds of the ZNE uncertainty, and likely can be further reduced with additional resources. The Heisenberg approach reveals a relatively slow growth of operators in the studied parameter regime, and modifications to the circuits which would make similar calculations more difficult are discussed.

Figure 7: OTOC growth for Kicked TFI variants. The lightcone at depth \(D=7\) is depicted by the crosses. (a) \(C(7,x)\) for the \((\theta_{J},\theta_{h})=(-\pi/4,0.7)\) model. Compared to the version with \(\theta_{J}=-\pi/2\), shown in Fig. 4(c), we see that the OTOC grows both in size and spatial extent by depth 7. (b) \(C(4,x)\) for the \((\theta_{J},\theta_{h})=(-\pi/2,0.7)\) non-commuting model introduced in the text. We see that the lightcone at depth \(D=4\) is larger (69 sites) than that of the commuting models (54 sites) at a greater depth, and that the OTOC has spread more than it has in the other models.

###### Acknowledgements.

SA and MZ were supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Early Career Award No. DE-SC0022716. We thank Y. Kim and A. Eddins for insightful experimental data and simulations. We thank S. Bravyi for suggestions on additional simulations. We thank J. Tindall, M. Fishman, M. Stoudenmire, K. Siva, T. Soejima, and S. Garratt for helpful discussions. Computing resources were provided by the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231, using NERSC Award No. BES-ERCAP0020043.

## Appendix A Additional numerics

Here we present additional numerics for the simulations discussed in the main text.

### Exactly verifiable operators

We begin in Fig. 8 with the error in simulations of the exactly solvable weight-10 and weight-17 observables shown in Fig. 3(a,b). We compare the results from Heisenberg evolution to the exact answer and find good agreement across a range of \(\theta_{h}\) values. Note that \(\chi=384\) is sufficient to obtain the exact answer for the weight-10 operator, as no truncation occurs during the depth-5 simulations. For the weight-17 operator, we find that increasing the bond dimension does not always reduce the error, as is not uncommon for time evolution which is not variational. Individual simulations for the weight-10 (17) operator take less than 2 (90) minutes at the maximum bond dimensions considered.

### MPO Circuit Fidelity

Before moving on to more difficult circuits, let us introduce a metric to bound the accuracy of matrix-product-operator-based time evolution. Recall that at each Trotter step, i.e. each application of the 13 MPOs that apply a layer of two-qubit gates on every bond of the lattice, the bond dimension \(\chi\) grows and must be truncated via variational compression. Upon compressing the evolved and enlarged MPO \(\hat{O}_{t}=U_{r}^{\dagger}O_{t-1}U_{r}\to O_{t}\) back to fixed \(\chi\), a truncation error \(\epsilon_{t}=|\hat{O}_{t}-O_{t}|\) is incurred. Here we use the Frobenius norm, scaled so that all Pauli strings have unit norm. We can then upper-bound the total error after \(D\) steps by \(\epsilon(D,\chi)\leq\sqrt{\sum_{t}\epsilon_{t}^{2}}\). We record this as a fidelity, \(f_{t}=|\left\langle O_{t}|\hat{O}_{t}\right\rangle|^{2}\), so that \(F_{D}\equiv\prod_{t=1}^{D}f_{t}\leq e^{-\sum_{t=1}^{D}\epsilon_{t}^{2}}\leq e ^{-\epsilon^{2}(D,\chi)}\).
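In code this bookkeeping reduces to accumulating per-step fidelities. A sketch with invented truncation errors (the numbers are illustrative only, and we model \(f_{t}\approx 1-\epsilon_{t}^{2}\), consistent with the bound just quoted):

```python
import numpy as np

eps = np.array([0.01, 0.02, 0.04, 0.07, 0.10])  # hypothetical per-step errors
f = 1.0 - eps**2                                # per-step fidelities f_t
F_D = np.prod(f)                                # accumulated circuit fidelity
print(f"F_D = {F_D:.5f} <= exp(-sum eps_t^2) = {np.exp(-np.sum(eps**2)):.5f}")
```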
As \(\chi\rightarrow\infty\), \(F_{D}\to 1\), as each step of the evolution will be done exactly. We use the fidelity both to extrapolate results from finite bond dimension and to qualitatively gauge the confidence in our simulation. An equivalent estimate of the final fidelity is also applicable in the context of MPS pure states, as discussed for example in Ref. [20].

### Modified Weight-17 Operator

Next we turn to the weight-17 operator for the modified circuit: 5 layers of the circuit defined by Eq. 1, followed by an additional layer of \(R_{X}\) rotation gates. The depth-5 stabilizer of this circuit is equivalent to the depth-6 stabilizer of the original circuit and was chosen to access this larger operator with the same circuit resources. In Fig. 9(a), we show the circuit fidelity \(F_{D}\) as a function of \(\theta_{h}\) for several bond dimensions. We see that at the two Clifford points \(\theta_{h}=0\), \(\pi/2\) the fidelity is exactly 1, as a bond dimension of \(\chi=1\) is sufficient to track the evolution of a Pauli string. We see that the fidelity increases demonstrably with increasing \(\chi\), which has been argued to indicate that the exact bond dimension for this circuit is not exponentially far removed from what is achievable in simulations [20]. In Fig. 9(b), we investigate the convergence of the expectation values at different \(\theta_{h}\) as a function of bond dimension. Note that the expectation value does not change monotonically with bond dimension.

Figure 8: Error for Heisenberg simulation of the (a) weight-10 and (b) weight-17 operators where the exact solution is available. We generally find accurate quantitative agreement with the exact result across a wide range of \(\theta_{h}\), building confidence when moving to more difficult circuits.

### \(\langle Z_{62}\rangle\) - MPS Extrapolation

Finally we turn to the depth-20 circuits for \(\langle Z_{62}\rangle\). In Fig. 10 we demonstrate how we extrapolate the pure-state, Schrodinger-picture MPS simulations from finite bond dimension to \(\chi=\infty\). A clear trend of \(\langle Z\rangle\) in \(1/\chi\) is not seen in Fig. 10(a), so we instead use our circuit fidelity. We fit the results for the largest three bond dimensions as a linear function in \(\log(F_{D})\) and extrapolate to \(F_{D}\to 1\). As shown in Fig. 1, extrapolating MPS data to \(F_{D}=1\) (achieved at \(\chi=\infty\), where each step of the time evolution can be done exactly as no truncation is ever done) brings the results closer to the ZNE experiment as well as to the other computational methods, but it is not sufficient to find agreement.

### \(\langle Z_{62}\rangle\) - MPO Extrapolation

A natural next step is to try extrapolation of the Heisenberg results in the circuit fidelity. We show the results of this in Fig. 11(a), where the extrapolation for \(\theta_{h}=0.7\) is shown in Fig. 11(c). Note that, unlike for the MPS simulations, the expectation value is often non-monotonic in \(\chi\) (or equivalently \(F_{D}\)), as can be seen in Fig. 11(b), which makes extrapolation less justified. To demonstrate, we extrapolate the same way as with MPS, using the three largest bond dimension results. We see that the extrapolated results are reasonable (and lie amongst the other classical methods and the ZNE experimental results) for \(\theta_{h}\leq\pi/4\), but for \(\pi/4<\theta_{h}\leq 3\pi/8\) are generally inconsistent with the other methods. The expectation value in this region is decreasing with \(\chi\), which leads to the negative extrapolated value.
If the trends at smaller \(\theta_{h}\) continue, the expectation value will begin increasing with \(\chi\) after some point, but the \(\chi\) necessary to see this may be exponentially large [20].
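The fitting procedure described above is simple to mimic. A sketch with invented data points (not values from our simulations):

```python
import numpy as np

F = np.array([0.55, 0.70, 0.82])    # hypothetical circuit fidelities F_D
Z = np.array([0.31, 0.28, 0.26])    # hypothetical <Z_62> at those fidelities
slope, intercept = np.polyfit(np.log(F), Z, 1)
print("extrapolated <Z_62> at F_D = 1:", intercept)   # log(F_D) = 0 there
```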